I have spent a good amount of time today trying to find the answers to my own questions, guided by some of the comments and other answers left for me here. I share my findings here in case others find them useful.
Event-Driven Design in JavaScript for Browsers
The decision to design JavaScript this way seems mostly related to the requirements of the DOM Event Architecture. That specification contains explicit requirements about event ordering and the event loop. The HTML5 specification goes even further: it defines these terms explicitly and states specific requirements for the event loop implementation.
This must certainly have driven the design of the JavaScript execution engines in browsers. In the article Timing and Synchronization in JavaScript, published by Opera, we can clearly see that these requirements were the driving force behind the design of the Opera browser. In another article from Mozilla, Concurrency Model and Event Loop, we can find a clear explanation of the same event-driven design concepts as implemented by Mozilla (although the document seems outdated).
The use of an event loop to deal with this kind of application is not new.
  Handling user input is the most complex aspect of interactive
  programming. An application may be sensitive to multiple input
  devices, such as mouse and keyboard, and may multiplex these among
  multiple input devices (e.g. different windows). Managing this
  many-to-many mapping is usually in the province of User Interface
  Management Systems (UIMS) toolkits. Since most UIMS are implemented
  in sequential languages they must resort to various techniques to
  emulate the necessary concurrency. Typically these toolkits use an
  event-loop that monitors the stream of input events and maps the events to call-back functions (or event handlers) provided by the
  application programmer.
  - John H. Reppy - Concurrent Programming in ML
The use of event loops is present in other famous UI toolkits like Java Swing and Winforms. In Java all UI work must be done within the EventDispatchThread, whereas in Winforms all UI work must be done on the thread that created the Window object. So, even though these languages support true multithreading, they still require all UI code to run in a single thread of execution.
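The pattern Reppy describes can be sketched in a few lines of JavaScript. This is an illustrative toy, not how any real browser or toolkit implements its loop: events go into a queue, and a single loop dispatches them one at a time to registered handlers.

```javascript
// Toy event loop (illustrative only): a queue of events and a map of handlers.
const queue = [];
const handlers = {};

function on(type, handler) { handlers[type] = handler; } // register a callback
function dispatch(event) { queue.push(event); }          // enqueue an event

const log = [];
on('click', (e) => log.push('click:' + e.target));
on('keydown', (e) => log.push('key:' + e.key));

dispatch({ type: 'click', target: 'button1' });
dispatch({ type: 'keydown', key: 'Enter' });

// The loop: one event at a time. Each handler runs to completion
// before the next event is dequeued, so handlers never overlap.
while (queue.length > 0) {
  const event = queue.shift();
  const handler = handlers[event.type];
  if (handler) handler(event);
}
```

Because there is only one loop, the "necessary concurrency" is emulated exactly as Reppy says: many input sources, one sequential stream of handler invocations.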
Douglas Crockford explains the history of the event loop in JavaScript in this great video called Loopage (worth watching).
Event-Driven Design in JavaScript for Node
Now, the decision to use an event-driven design for Node.js is a bit less evident. Crockford gives a good explanation in the video shared above. Also, in the book The Past, Present and Future of JavaScript, its author Axel Rauschmayer says:
  2009—Node.js, JavaScript on the server. Node.js lets you implement
  servers that perform well under load. To do so, it uses event-driven
  non-blocking I/O and JavaScript (via V8). Node.js creator Ryan Dahl
  mentions the following reasons for choosing JavaScript:
  
  
  - “Because it’s bare and does not come with I/O APIs.” [Node.js can thus introduce its own non-blocking APIs.]
  - “Web developers use it already.” [JavaScript is a widely known language, especially in a web context.]
  - “DOM API is event-based. Everyone is already used to running without threads and on an event loop.” [Web developers are not scared of callbacks.]
So it looks like Ryan Dahl, the creator of Node.js, took the existing design of JavaScript in browsers into account when deciding how to implement his non-blocking, event-driven solution for Node.js.
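This "running without threads and on an event loop" model is directly observable in Node with a timer: a callback never runs during the current turn of the event loop, even with a delay of zero.

```javascript
// Callbacks are queued and dispatched only after the current turn finishes.
const order = [];

order.push('start');
setTimeout(() => order.push('timer callback'), 0); // queued, not run now
order.push('end');

// At this point the timer callback has still not run: synchronous code
// always completes before the event loop dispatches the next callback.
```

This is why Node code never needs locks around ordinary variables: between two lines of synchronous code, no callback can sneak in and mutate shared state.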
The current implementation of Node.js uses a library called libuv, designed for building this kind of application. This library is a core part of Node's design, and we can find the definition of event loops in its documentation. Evidently, it plays an important role in the current implementation of Node.js.
About Other ECMAScript-Compatible Engines
The ECMAScript specification does not state how concurrency must be handled in JavaScript; this is left to each implementation of the language. Other concurrency models could easily be used without making an implementation incompatible with the standard.
The two best examples I found were the Nashorn JavaScript engine, created by Oracle for JDK 8, and the Rhino JavaScript engine, created by Mozilla. Both are ECMAScript compatible, and both allow the use of Java classes. Nothing in these engines requires event-driven programming to deal with concurrency. Since they have access to the Java class library and run on top of the JVM, they presumably have access to the other concurrency models offered by that platform.
Consider the following example, taken from JavaScript: The Definitive Guide, listing the functions available in the Rhino shell:
print(x); // Global print function prints to the console
version(170); // Tell Rhino we want JS 1.7 language features
load(filename,...); // Load and execute one or more files of JavaScript code
readFile(file); // Read a text file and return its contents as a string
readUrl(url); // Read the textual contents of a URL and return as a string
spawn(f); // Run f() or load and execute file f in a new thread
runCommand(cmd,  // Run a system command with zero or more command-line args
           [args...]);
quit() // Make Rhino exit
You can see that spawn() runs a function, or loads and executes a JavaScript file, in an independent thread of execution.
About Event-Driven Design, Multicores and True Concurrency
The best explanation I found on this subject comes from the book JavaScript The Definitive Guide. In this book, David Flanagan explains:
  One of the fundamental features of client-side JavaScript is that it
  is single-threaded: a browser will never run two event handlers at the
  same time, and it will never trigger a timer while an event handler is
  running, for example. Concurrent updates to application state or to
  the document are simply not possible, and client-side programmers do
  not need to think about, or even understand, concurrent programming. A
  corollary is that client-side JavaScript functions must not run too
  long: otherwise they will tie up the event loop and the web browser
  will become unresponsive to user input. This is the reason that Ajax
  APIs are always asynchronous and the reason that client-side
  JavaScript cannot have a simple, synchronous load() or require()
  function for loading JavaScript libraries.
  
  The Web Workers specification very carefully relaxes the
  single-threaded requirement for client-side JavaScript. The “workers”
  it defines are effectively parallel threads of execution. Web workers
  live in a self-contained execution environment, however, with no
  access to the Window or Document object and can communicate with the
  main thread only through asynchronous message passing. This means that
  concurrent modifications of the DOM are still not possible, but it
  also means that there is now a way to use synchronous APIs and write
  long-running functions that do not stall the event loop and hang the
  browser. Creating a new worker is not a heavyweight operation like
  opening a new browser window, but workers are not flyweight threads
  either, and it does not make sense to create new workers to perform
  trivial operations. Complex web applications may find it useful to
  create tens of workers, but it is unlikely that an application with
  hundreds or thousands of workers would be practical.
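Flanagan's point about long-running functions tying up the event loop is easy to demonstrate: a synchronous busy-wait prevents an already-expired timer from firing, because the loop can only dispatch the callback once the current code returns.

```javascript
const fired = [];
setTimeout(() => fired.push('tick'), 10); // due in 10 ms

// Busy-wait for ~50 ms: this ties up the event loop.
const start = Date.now();
while (Date.now() - start < 50) { /* spin */ }

// The timer expired roughly 40 ms ago, but its callback still has not
// run: nothing is dispatched while synchronous code is executing.
```

In a browser, the same busy-wait would freeze rendering and user input as well, which is exactly why long-running work belongs in a worker.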
What About True Parallelism in Node.js?
Node.js is a fast-evolving technology, and perhaps that's why it is difficult to find up-to-date opinions on this. But basically, since it follows the same event-driven model as the browsers, it is impossible to simply write a piece of code and expect it to take advantage of the multiple cores in the server. Since Node.js is built on non-blocking technologies, we can assume that every time we do some form of I/O (e.g., read a file, send something through a socket, write to a database), under the hood the Node engine may spawn multiple threads and perhaps take advantage of the cores, but our own code still runs serially.
These days, it looks like Node.js clustering is the solution to this problem. There are also libraries like Node Worker that seem to implement the Web Worker concept in Node. These libraries basically let us spawn new, independent processes within Node.js. (I have not experimented with this yet, though.)
What About Portability?
It looks like, in terms of concurrency models, there is no way to guarantee that all these libraries will play nicely in every environment.
In the realm of browsers they all seem to work similarly, and since Node.js runs on an event loop, many things may still work there, but there are no guarantees that any of this will work in other engines. I guess this is probably one of the disadvantages of ECMAScript compared to more extensive specifications like those defining the Java Virtual Machine or the CLR.
Perhaps something will be standardized later. More concurrency ideas for the future of ECMAScript are being discussed today; see the EcmaScript Wiki: Strawman Proposals - Communicating Event-Loop Concurrency and Distribution.