In node.js backends you also deal with a single thread; if you want multiple CPUs, you need the cluster module, which gives you, conceptually speaking, multiple shared-nothing single-threaded environments to load-balance requests across. Technically libuv (the C library exposing async I/O to node) uses threads, but that is hidden away from you. Multithreading in JS can't work anyway, since JS doesn't expose synchronization primitives, which is both a blessing (it drastically simplifies the design space of the language, i.e. no JVM-like happens-before constraints, atomic ops, or "synchronized" heisenbugs) and a curse (most backend/business code doesn't benefit at all from async and its terrible debugging story, and you have to fork out into workers/isolates for even slightly CPU-heavy work).
Forget parallelism, I even get race conditions in TypeScript frequently, because managing state is just hard. Changing state from multiple places became so hard that in one project I just used Redux :p. And where Redux wasn't helpful, I used the async-lock package. Maybe correct asynchronous programming is just a harder thing, arcane knowledge. Not everybody is a wizard.
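For readers wondering how a race happens without threads, here's a minimal sketch (hypothetical `withdraw`/`balance` names, not the commenter's actual code): the await between a check and an update lets another task interleave.

```javascript
// Classic "check-then-act" race on a single thread: both withdrawals
// pass the balance check before either one subtracts.
let balance = 100;

async function withdraw(amount) {
  if (balance >= amount) {
    // Stand-in for a real async step (DB query, network call, etc.)
    await new Promise((resolve) => setImmediate(resolve));
    balance -= amount;
    return true;
  }
  return false;
}

const raceDone = Promise.all([withdraw(80), withdraw(80)]).then((results) => {
  // Both calls report success and the balance goes negative,
  // despite there being no second thread anywhere.
  console.log(results, balance); // both true; balance is now -60
});
```

A lock (or a state container that serializes updates, Redux-style) fixes this by making check-plus-update a single uninterruptible unit.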
Depends on your scheduling runtime. In Rust you can schedule everything onto a single thread or across multiple threads, and that choice can change the correctness of your asynchronous code, or at least make certain bugs non-deterministic.
"Modern" frontend JS is increasingly async. Yes, it's not true async, but it still has all the problems, but the benefit of not locking up the UI is worth the pain.
Even old frontend JS was async, in that events could be triggered by the user at any time and in any order, and xhr requests and image loads were async as well.
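A tiny sketch of the ordering hazard this describes, with `fakeRequest` standing in for an xhr or image load (simulated with timers so it runs anywhere):

```javascript
// Two async "requests" that complete in a different order than
// they were started in; any callback code assuming start order is racy.
function fakeRequest(name, delayMs, cb) {
  setTimeout(() => cb(name), delayMs);
}

const completed = [];
fakeRequest('a', 30, (name) => completed.push(name));
fakeRequest('b', 10, (name) => completed.push(name));
// "b" finishes first even though "a" was started first.
```

This is exactly why old-school pages accumulated "if the response is stale, ignore it" guards around xhr callbacks.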
Yeah I know, but I think the GP is thinking of the nasty debugging sessions that come from parallelism, not concurrency. Concurrency issues you can step through with a debugger on a single thread.