Short Node.js Overview
by Nicklas Envall

Node.js is a runtime environment that lets you run JavaScript on a server. Its low-level I/O engine, libuv, is written in C and makes Node.js compatible with all major OS platforms. Node.js uses an event-driven, non-blocking I/O model, which makes it a viable option for I/O-heavy applications, though not the best choice for CPU-bound tasks. In this article, we'll cover:
- Non-blocking Node.js
- Event-driven Node.js
- Node.js Modules
- Node.js Streams
- Scalable Node.js
New to Node.js? Then you'll need to install Node, and as a bonus, you can get nvm (Node Version Manager), a tool that makes it easier to install and switch between different versions of Node.
Non-blocking Node.js
I/O (input/output) requests are described as blocking if they must complete before giving back control to the application. In the context of Node.js, I/O mainly refers to interactions with networks and the system's disk. Because these I/O requests are often slow, the delay caused by blocking will hurt our application's performance. Let's consider three flaws of I/O:
- Accessing data: reading from RAM takes nanoseconds, while reading from disk or the network takes milliseconds.
- Bandwidth: the transfer rate for RAM is measured in GB/s, while for disk it's MB/s, and for networks it often varies between MB/s and Mb/s.
- Unpredictability: the time it takes for users to give input varies.
Languages famous for building network applications, like Java, mitigate I/O blocking with multithreading. JavaScript, however, is single-threaded and solves I/O blocking with the Event Loop, which is also its selling point.
const callback = () => console.log('Operation completed!');
setTimeout(() => callback(), 100);
The Event Loop follows the reactor pattern, which lets us handle blocking I/O operations without blocking: they're processed "in the background" while execution continues. These asynchronous operations have a handler (callback) attached to them that gets invoked once they complete. With the help of closures, the callback remembers its original context.
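To make that concrete, here's a small illustrative sketch (the handleRequest function and the id are made up for this example): the callback passed to setTimeout closes over requestId, so it can still use it when the timer fires, long after handleRequest has returned.

const handleRequest = (requestId) => {
  setTimeout(() => {
    // the closure remembers `requestId` even though handleRequest has already returned
    console.log(`Finished handling request ${requestId}`);
  }, 100);
  console.log(`Started handling request ${requestId}`);
};

handleRequest('A1');
// Started handling request A1
// Finished handling request A1 (roughly 100ms later)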
Three Styles
There are three common styles for calling a function. The first is the direct style, which entails calling a synchronous function:
const add = (a, b) => a + b;
The second is the synchronous continuation-passing style (CPS). Its name contains "continuation" because the callback continues the execution of the function. It uses callbacks, but synchronously:
const add = (a, b, callback) => callback(a + b);
The third is the asynchronous continuation-passing style (CPS), which uses callbacks asynchronously:
const addAsync = (a, b, callback) => { setTimeout(() => callback(a + b), 100) };
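To see how these differ in practice, here's a small illustrative sketch (not from the original article) that calls the CPS version of add and addAsync defined above; note the order in which the messages are logged:

add(1, 2, (result) => console.log('sync result:', result));
console.log('after the sync call'); // logged AFTER 'sync result: 3'

addAsync(1, 2, (result) => console.log('async result:', result));
console.log('after the async call'); // logged BEFORE 'async result: 3'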
Collecting Asynchronously with Callbacks
Collecting results from a sequence of asynchronous callbacks can be hard to wrap your head around. So, we will create a poorly designed API with needle and nock. With the API, we can request a list of user ids, which we then use to fetch each user, one by one. The goal is to end up with a list of users.
const nock = require('nock');
const needle = require('needle');

const API_URL = 'https://api.example.com';

nock(API_URL).get('/ids').reply(200, ['A1', 'B2', 'C3', 'D4']);
nock(API_URL).get('/users/A1').reply(200, 'Charles');
nock(API_URL).get('/users/B2').reply(200, 'Jennifer');
nock(API_URL).get('/users/C3').reply(200, 'Kajsa');
nock(API_URL).get('/users/D4').reply(200, 'James');

const getUser = (id, callback) => {
  needle.get(`${API_URL}/users/${id}`, (error, res) => {
    if (error) {
      return callback(error);
    }
    callback(null, res.body.toString());
  });
};

const getUsers = (callback) => {
  needle.get(`${API_URL}/ids`, (error, res) => {
    if (error) {
      return callback(error);
    }
    const userIds = res.body;
    // collect users here..
  });
};

// call getUsers
getUsers((error, users) => {
  if (error) {
    return;
  }
  console.log(users);
});
As we see, we have a getUser function that asynchronously fetches a user. We also have a getUsers function that should return an array of users, which we need to implement. The naïve approach would be to use a loop with a "synchronous mindset." The example below returns an empty array before the asynchronous requests initiated by the forEach have completed.
const getUsers = (callback) => {
  needle.get(`${API_URL}/ids`, (error, res) => {
    if (error) {
      return callback(error);
    }
    const userIds = res.body;
    const users = [];

    userIds.forEach(id => getUser(id, (error, user) => {
      if (error) return callback(error);
      users.push(user);
    }));

    callback(null, users); // []
  });
};
A better approach would be to call the asynchronous operations in sequence. With recursion, we can iterate one callback at a time, which allows us to push the returned user into users before invoking the next asynchronous operation.
const getUsers = (callback) => {
  needle.get(`${API_URL}/ids`, (error, res) => {
    if (error) {
      return callback(error);
    }
    const userIds = res.body;
    const users = [];

    const iterate = (index) => {
      // stop! we've iterated all ids
      if (index === userIds.length) {
        // return [ 'Charles', 'Jennifer', 'Kajsa', 'James' ]
        return callback(null, users);
      }
      const id = userIds[index];
      getUser(id, (error, user) => {
        if (error) {
          return callback(error);
        }
        // store the retrieved user
        users.push(user);
        // re-do this process
        iterate(index + 1);
      });
    };

    // init iteration
    iterate(0);
  });
};
We can take this one step further. If we think about it, each getUser call isn't dependent on the previous one, so the execution order of these asynchronous tasks does not matter. By running them in parallel, we can speed up the process. But how are we supposed to collect the results then? Let's go back to the loop and use the state of users to know when to pass the users to the callback.
const getUsers = (callback) => {
  needle.get(`${API_URL}/ids`, (error, res) => {
    if (error) {
      return callback(error);
    }
    const userIds = res.body;
    const users = [];

    userIds.forEach(id => getUser(id, (error, user) => {
      if (error) return callback(error);
      users.push(user);
      // if this is true, it means this is the last one!
      if (users.length === userIds.length) {
        return callback(null, users);
      }
    }));
  });
};
Too many concurrent tasks may exhaust the system's resources and make it prone to Denial of Service (DoS) attacks. To avoid that, we can limit the number of concurrent tasks with a task queue (also known as a job queue). Lastly, the act of running tasks in parallel or in sequence is generic and can be abstracted into functions. The library async has done just that, amongst implementing other patterns.
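As a rough illustration of limiting concurrency, here's a minimal sketch of such a task queue (the TaskQueue class and its methods are made up for this example, not taken from the async library), assuming getUser, userIds, and users from the earlier examples are in scope:

class TaskQueue {
  constructor(concurrency) {
    this.concurrency = concurrency; // max number of tasks allowed to run at once
    this.running = 0;               // tasks currently running
    this.queue = [];                // tasks waiting for their turn
  }

  pushTask(task) {
    this.queue.push(task);
    this.next();
  }

  next() {
    while (this.running < this.concurrency && this.queue.length) {
      const task = this.queue.shift();
      this.running++;
      // each task receives a callback it must call when it's done
      task(() => {
        this.running--;
        this.next();
      });
    }
  }
}

// usage: fetch at most two users at a time
const queue = new TaskQueue(2);
userIds.forEach(id => {
  queue.pushTask(done => {
    getUser(id, (error, user) => {
      if (!error) users.push(user);
      done();
    });
  });
});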
More tips:
The Node.js API was initially built mostly with callbacks because promises hadn't yet become as common as they are today. I've written about asynchronous JavaScript, which talks about promises. Here are some useful topics you can study on your own if you want:
- Dynamically building a chain of promises, executed in sequence (see the sketch after this list).
- The async/await keywords.
- Asynchronous request batching.
- Asynchronous request caching.
- Using setImmediate for CPU-heavy tasks to keep responsiveness.
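As an example of the first tip, here's a hedged sketch of dynamically building a promise chain with reduce (getUserAsync is a hypothetical promise-based wrapper around the earlier getUser function):

const getUserAsync = (id) =>
  new Promise((resolve, reject) => {
    getUser(id, (error, user) => (error ? reject(error) : resolve(user)));
  });

// build the chain dynamically: each user is fetched after the previous one resolves
const getUsersInSequence = (userIds) =>
  userIds.reduce(
    (chain, id) => chain.then(users => getUserAsync(id).then(user => [...users, user])),
    Promise.resolve([])
  );

getUsersInSequence(['A1', 'B2']).then(console.log).catch(console.error);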
Event-driven Node.js
Node.js utilizes the publish/subscribe pattern, which is built into the EventEmitter object. An EventEmitter can register listener functions for named events, and the registered functions get invoked once their corresponding event is emitted.
But why should you care about the EventEmitter?
The reason is that the EventEmitter is used for event-driven programming in Node. The EventEmitter gets extended by other important objects, like streams, the http module, and even express. As you see below, the http.createServer function returns an EventEmitter:
const server = require('http').createServer();

server.on('request', (request, response) => {
  // handle 'request' event
});
It's important to know that it is a convention to emit an error event when an error occurs, because when working like this, throwing an exception is not an option. Yet, if Node.js sees an emitted error event and cannot find an associated listener, it'll throw an exception.
So, when should you invoke async functions, and when should you emit events? Use async functions when you want to get a result asynchronously. Emit events when you want to communicate that something has happened.
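Here's a small illustrative sketch of both sides of that convention (the Downloader class and its event names are made up for this example):

const EventEmitter = require('events');

class Downloader extends EventEmitter {
  download(url) {
    // pretend something asynchronous happens here...
    setImmediate(() => {
      this.emit('progress', url, 100); // communicate that something has happened
      this.emit('done', url);
    });
  }
}

const downloader = new Downloader();
downloader.on('progress', (url, percent) => console.log(`${url}: ${percent}%`));
downloader.on('done', (url) => console.log(`${url}: finished`));
downloader.on('error', console.error); // the convention: always listen for 'error'
downloader.download('https://api.example.com/file');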
Node.js Modules
A module is encapsulated code inside a file that can be imported by another file. Modules do not pollute the global scope because they have their own scope, thanks to the require loader function. You can create and import modules, and importing a module can look like this when using the require function:
const moduleA = require('./moduleA');
While exporting a module can look like this:
module.exports = {};
Node.js embraces small modules that focus on doing one thing well. Accordingly, Node.js provides a core library, often called node-core, that focuses on "core" functionalities like HTTP, files, streams, etc. It only implements what's deemed crucial. The so-called userland or userspace (the ecosystem) fills in what might be missing: developers contribute new modules that are then made available through the Node Package Manager (npm).
Importing and Exporting modules
Exporting a module is done with the exports object, and what you add to it will be public:
// moduleA.js
const secret = 'secret';

exports.getSecret = () => secret;
When importing in this case, the variable secret will be private:
// moduleB.js
const moduleA = require('./moduleA');

moduleA.getSecret(); // 'secret'
moduleA.secret; // undefined
Because we have the module object, we can reassign the exports property. However, in doing so, one must be careful. In the example below, exports.hello is discarded because module.exports gets replaced with the greet function:
// moduleA.js
exports.hello = function hello() {
  console.log('hello');
}

module.exports = function greet() {
  console.log('greet');
}

// index.js
const moduleA = require('./moduleA');

moduleA(); // greet
moduleA.hello(); // throws error
Since a function is an object, we can reverse the order in which we assign. This approach is connected to a pattern called the substack pattern: you export only one function that contains the main functionality, and then add more to that function if needed.
// moduleA.js
module.exports = function greet() {
  console.log('greet');
}

module.exports.hello = function hello() {
  console.log('hello');
}

// index.js
const moduleA = require('./moduleA');

moduleA(); // greet
moduleA.hello(); // hello
Finding modules - Resolving algorithm
The require loader function has a resolving algorithm (require.resolve) which consists of different branches:
require('module');   // Core module
require('/module');  // Absolute path, file module
require('./module'); // Relative path, file module
With core modules, it first looks in node-core, and if nothing is found, it'll look in node_modules for package modules. Consequently, this algorithm allows us to have thousands of packages without collisions or version compatibility issues. Yes, thousands of packages; our packages themselves can have their own node_modules. Imagine the following tree:
app/
  foo.js
  node_modules/
    moduleA/
      index.js
    moduleB/
      index.js
      node_modules/
        moduleA/
          index.js
Calling require('moduleA') from /app/foo.js will load /app/node_modules/moduleA/index.js, while calling require('moduleA') from /app/node_modules/moduleB/index.js will load /app/node_modules/moduleB/node_modules/moduleA/index.js.
More on modules
I recommend looking up pseudo examples of how the require loader works under the hood. Also, the require loader caches modules; the cache is reachable with require.cache. We may reassign require.cache to clear the cache, but that's mostly done when testing.
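For instance, a heavily simplified pseudo version of the require loader could look something like the sketch below (this is not Node's actual implementation; the fakeRequire name and the use of new Function are just for illustration):

const fs = require('fs');
const path = require('path');

function fakeRequire(moduleName) {
  const id = path.resolve(moduleName);            // 1. resolve the module to a full path (simplified)
  if (fakeRequire.cache[id]) {                    // 2. already loaded? return the cached exports
    return fakeRequire.cache[id].exports;
  }
  const module = { id, exports: {} };             // 3. create a fresh module object
  fakeRequire.cache[id] = module;                 // 4. cache it before evaluating
  const wrapper = new Function(                   // 5. wrap the file's code so it gets its own scope
    'module', 'exports', 'require',
    fs.readFileSync(id, 'utf8')
  );
  wrapper(module, module.exports, fakeRequire);   // 6. run the module's code
  return module.exports;                          // 7. hand back whatever it exported
}
fakeRequire.cache = {};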
- Hardcoded dependencies (stateful instances)
- Dependency injection
- Service locator
- Dependency injection container
It's also good to know how to structure your code and its dependencies, so I recommend looking up the items in the list above.
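As a quick taste of one of them, here's a minimal sketch of dependency injection (the file names, the db stand-in, and the service are made up for this example):

// userService.js - the service never requires the database itself
module.exports = (db) => ({
  getUser: (id) => db.find(id),
});

// index.js - the "wiring" happens here, where the dependency gets injected
const db = { find: (id) => ({ id, name: 'Charles' }) }; // stand-in for a real database module
const userService = require('./userService')(db);

console.log(userService.getUser('A1')); // { id: 'A1', name: 'Charles' }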
Node.js Streams
A stream is a continuous series of bits. The opposite of streaming is buffering. Buffering entails collecting all sent chunks of data in a buffer before the receiver does something with the data. Thus, a buffer is a place in memory that temporarily holds data being sent/received between two parties. Consequently, buffering a file with a size of 1GB would take up a lot of memory. Also, by default, V8 has a cap of circa 1.07 GB. Surely there must be a better way, and there is: with streams, we continuously receive and process chunks of data as they come. This makes streams more memory efficient and faster.
There are four types of streams in Node.js. The stream core module contains four abstract stream classes, and each of them inherits from EventEmitter.
- stream.Readable: represents a source of data to be consumed.
- stream.Writable: represents a target to which data can be written.
- stream.Duplex: both readable and writable.
- stream.Transform: a duplex stream that can transform data as it's being read or written.
Node streams support two operating modes: binary mode and object mode. In binary mode, we stream chunks as strings and buffers. In object mode, we can stream pretty much any JavaScript object.
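For example, here's a minimal sketch of an object mode stream (the user objects are just placeholders):

const { Readable } = require('stream');

const userStream = Readable.from(
  [{ id: 'A1', name: 'Charles' }, { id: 'B2', name: 'Jennifer' }],
  { objectMode: true } // objectMode is already the default for Readable.from, shown for clarity
);

userStream.on('data', (user) => console.log(user.name));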
Readable stream
We have two modes in which we can receive data: paused and flowing.
Paused mode (pull)
Paused mode entails that we deliberately pull (read) data. The readable event informs us when new data is available to be read, which we then do with stream.read().
const fs = require('fs');

const readableStream = fs.createReadStream('file.txt');

let data = '';
readableStream.on('readable', () => {
  let chunk;
  while ((chunk = readableStream.read()) !== null) {
    data += chunk;
  }
});
Today we also have async iterators, which use the readable event internally and can make this look much cleaner:
const fs = require('fs');

const readableStream = fs.createReadStream('file.txt');

(async () => {
  let data = '';
  for await (const chunk of readableStream) {
    data += chunk;
  }
  console.log(data);
})();
Flowing mode (push)
Flowing mode means that data is pushed to us when available. We listen to the data event to be able to handle the chunks of data. An end event gets emitted when all the data has been sent. Also, an error event gets emitted if an error occurs.
const fs = require('fs');

const readableStream = fs.createReadStream('file.txt');

let data = '';
readableStream.on('data', (chunk) => {
  data += chunk;
});
readableStream.on('end', () => console.log(data));
readableStream.on('error', console.error);
Writable stream
Writing is more straightforward than reading. We use the write method to write data.
const fs = require('fs');

const readableStream = fs.createReadStream('input.txt');
const writableStream = fs.createWriteStream('output.txt');

readableStream.on('data', (chunk) => {
  writableStream.write(chunk);
});
If data is written faster than the receiver can consume it, we get a bottleneck. There's a mechanism called back-pressure to avoid sending too much data at once. The stream has a highWaterMark property, which represents the limit of the internal buffer size. Once the internal buffer exceeds the highWaterMark, writable.write() will return false. That signals that we should stop writing and wait for a drain event to be emitted, after which we know we can start writing again.
const fs = require('fs');
const { once } = require('events');

(async () => {
  const readableStream = fs.createReadStream('input.txt');
  const writableStream = fs.createWriteStream('output.txt');

  for await (const chunk of readableStream) {
    if (!writableStream.write(chunk)) { // 1
      await once(writableStream, 'drain'); // 2
    }
  }
  writableStream.end(); // 3
})();
- Write a chunk of data.
- Wait for the drain event to be emitted, if write returned false.
- End the stream, i.e. stop writing.
Pipe & Pipeline
A duplex stream is both readable and writable. A transform stream is a type of duplex stream used to transform data. A popular package for creating transform streams is through2.
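If you'd rather stay within core Node.js, here's a minimal sketch of a transform stream built with the stream module (the upper-casing is just an example transformation):

const { Transform } = require('stream');

const upperCase = new Transform({
  transform(chunk, encoding, callback) {
    // pass the transformed chunk downstream
    callback(null, chunk.toString().toUpperCase());
  },
});

process.stdin.pipe(upperCase).pipe(process.stdout);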
Reading and writing are quite common, so Node.js gives us pipe to make this process easier. With it, we can chain streams, which lets us abide by the "one thing per module" philosophy. In the example below, we use a transform stream to gzip the chunks before writing to output.txt.gz.
const fs = require('fs');

const readStream = fs.createReadStream('input.txt');
const writeStream = fs.createWriteStream('output.txt.gz');
const gzip = require('zlib').createGzip();

readStream.pipe(gzip).pipe(writeStream);
There are plenty of piping patterns you can look up:
- Combined streams: hide an internal pipeline.
- Forking streams: have one source with multiple destinations (a tiny sketch follows this list).
- Merging streams: have multiple readable streams target one destination.
- Multiplexing and demultiplexing.
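For instance, forking a stream is as simple as piping the same readable source to several destinations (the file names below are just placeholders):

const fs = require('fs');

const source = fs.createReadStream('input.txt');

// fork: one source, two destinations
source.pipe(fs.createWriteStream('copy-1.txt'));
source.pipe(fs.createWriteStream('copy-2.txt'));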
Today we have pipeline, and it's advisable to use that instead because pipe swallows errors.
const fs = require('fs');
const { pipeline } = require('stream');

pipeline(
  fs.createReadStream('output.txt.gz'),
  require('zlib').createGunzip(),
  fs.createWriteStream('decompressed.txt'),
  (err) => {
    if (err) {
      console.error('Pipeline failed.', err);
    } else {
      console.log('Pipeline succeeded.');
    }
  }
);
Scalable Node.js
Concisely put, scalability is about splitting up parts of your system into services that can be cloned and managed by a load balancer, while systems integration is about rejoining them.
There are many ways to create a robust Node.js app that scales well, but let's use the Scale Cube. The Scale Cube describes scalability with three dimensions, X, Y, and Z. X is about cloning, Y is about decomposing, and Z is about data partitioning. It's often recommended to allocate more resources for X and Y than Z - so let's look at those two.
X. Cloning services
Cloning is about creating n instances of the same application, each handling 1/nth of the workload. One way to do that is to create a cluster. There are many options for creating a cluster; the Node.js API provides the cluster module, which we can use to create a main process that delegates work to worker processes. Clusters can increase the resiliency and availability of a system. Furthermore, they allow us to do zero-downtime restarts (updating apps without affecting availability).
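Here's a minimal sketch of the cluster module in action (the port and response body are arbitrary): the main process forks one worker per CPU core, and each worker runs its own HTTP server on the same port.

const cluster = require('cluster');
const http = require('http');
const os = require('os');

if (cluster.isPrimary) { // called isMaster in older Node versions
  // fork one worker per CPU core
  os.cpus().forEach(() => cluster.fork());

  // replace workers that die, keeping the cluster at full size
  cluster.on('exit', (worker) => {
    console.log(`worker ${worker.process.pid} died, starting a new one`);
    cluster.fork();
  });
} else {
  http
    .createServer((req, res) => res.end(`handled by worker ${process.pid}`))
    .listen(3000);
}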
Here are some good-to-know keywords regarding this subject area:
- Sharing state: imagine you have instances A and B. You add something to A on a request. Later, a subsequent request goes to B, which means that B and A won't match. So you must consider sharing state between instances/services in a cluster, perhaps with an in-memory store like Redis or a database like MongoDB.
- Sticky load balancing: instead of sharing state, you can use sticky load balancing. By storing an id of the user, the balancer can route subsequent requests to a particular server, ensuring the state is as it should be. However, this approach has downsides; for example, if the server goes down, the state will be lost.
- Reverse proxy: if you want to use different ports or machines rather than processes, then a reverse proxy is more viable than Node's cluster module. One famous example is Nginx.
Y. Decomposing by functionality
Decomposing by functionality essentially entails splitting up your app into different services. It's important to note that with a monolithic architecture you can have separated services, but they live in the same codebase and run on the same server. So, if one of those services throws an uncaught exception, the entire system can go down.
Instead, we can create microservices, which we then plug together to create an app. The microservices communicate with each other through things like HTTP. They're also independently deployable, which means we can allocate or deallocate resources to whatever service we want. We also achieve higher cohesion and looser coupling. But this approach comes with new complexities, because now we must focus on how to integrate every microservice. That's where things like API proxies, API orchestration, and message brokers come in and can help out. You can read more about microservices here.