"This has not been our experience at all." (Trevor Livingston)

Okay, but if you take 300 million requests on Node servers, then your bill would be massive for no good reason. Scaling Node is an expensive pain: to increase the pool of simultaneous requests you have to keep adding CPUs to the cluster. That, plus the default V8 heap limit of roughly 1.6 GB, also forces constant hardware upgrades in such a single-threaded main-thread implementation. You could save a lot of money by moving to something more solid, multi-threaded and mature — something that scales better.
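To make that scaling model concrete, here is a minimal sketch of the usual approach (file name and port are invented for illustration): the cluster module forks one worker per CPU core, so more concurrency means more cores, and the default V8 heap ceiling can only be moved with the --max-old-space-size flag.

```js
// scale-out.js — minimal sketch of the per-CPU scaling model described above.
// Run with e.g.: node --max-old-space-size=4096 scale-out.js
// (--max-old-space-size raises the default V8 heap ceiling, in MB)
const cluster = require('cluster');
const http = require('http');
const os = require('os');

if (cluster.isMaster) {
  // One worker per CPU core: adding concurrency means adding cores.
  for (let i = 0; i < os.cpus().length; i++) {
    cluster.fork();
  }
} else {
  // Each worker is a full, separate Node runtime instance.
  http.createServer((req, res) => {
    res.end(`handled by pid ${process.pid}\n`);
  }).listen(3000);
}
```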

There are also all kinds of problems with the cluster module's load balancing under Node's model of cooperative scheduling — that is, when one worker freezes or blocks, its CPU stays occupied and the worker remains busy (unavailable) while still holding its slot. It's such an archaic model that even NASA in the '60s had moved past it on the way to the moon: they implemented pre-emptive scheduling so the critical life support systems kept running while resources were deallocated from the radar (just an example of how it worked).
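That failure mode is easy to reproduce. A minimal sketch, with the /block URL and the 30-second spin invented for illustration: one synchronous task monopolises a worker's only thread, and every other request routed to that worker simply waits.

```js
// blocked-worker.js — illustrates the cooperative-scheduling failure mode:
// one long synchronous task freezes the whole worker, yet the worker keeps
// holding its connections and remains "busy" rather than failing over.
const http = require('http');

http.createServer((req, res) => {
  if (req.url === '/block') {
    // Synchronous busy loop: nothing else on this worker's event loop runs
    // for 30 seconds — there is no pre-emption, no yielding to other requests.
    const end = Date.now() + 30000;
    while (Date.now() < end) { /* spin */ }
  }
  res.end('ok\n');
}).listen(3000);
```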

That'd be fine if Node processes were cheap: then you could keep running loads of them, fail one, start another, and so on. But they are heavyweight system-level processes, each one a whole new Node runtime instance.
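The "fail one, start another" pattern does exist in the cluster module; the point is that every respawn pays the cost of booting an entire new runtime. A sketch of that pattern (file name and port invented for illustration):

```js
// respawn.js — sketch of the "fail one, start another" pattern.
// Each respawn boots a whole new Node runtime, which is why it is not as
// cheap as restarting a lightweight thread or green process.
const cluster = require('cluster');
const os = require('os');

if (cluster.isMaster) {
  for (let i = 0; i < os.cpus().length; i++) cluster.fork();

  cluster.on('exit', (worker, code, signal) => {
    console.log(`worker ${worker.process.pid} died (${signal || code}), respawning`);
    cluster.fork(); // pay the full process + runtime startup cost again
  });
} else {
  require('http').createServer((req, res) => res.end('ok\n')).listen(3000);
}
```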

Another terrible thing is that domain-level exceptions are hideous to catch, and the domain module is being deprecated without a real replacement or alternative (!). In other words, Node is still too poor at being a fault-tolerant, debuggable platform that scales well to pick it for the sake of reliability.
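For completeness, here is roughly what you are left with once domains go away: a single process-wide uncaughtException handler. That handler is a real Node API, but it carries no per-request context, which is exactly the debuggability complaint above. A minimal sketch (the /boom URL and the error are invented):

```js
// without-domains.js — what remains once the domain module goes away:
// one process-wide handler with no per-request context attached.
const http = require('http');

process.on('uncaughtException', (err) => {
  // No way to tell which request caused this; the usual advice is to log
  // and shut the worker down, since its state may be corrupted.
  console.error('uncaught exception, shutting down:', err);
  process.exit(1);
});

http.createServer((req, res) => {
  if (req.url === '/boom') {
    // Thrown on a later tick, so no try/catch around the handler can see it,
    // and the failing request never gets a response.
    setImmediate(() => { throw new Error('async failure'); });
    return;
  }
  res.end('ok\n');
}).listen(3000);
```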
