Setting up a NodeJS service for production

Posted on: Nov 16, 2021 Written by DKP

This post has been written in collaboration with a portal for interview preparation.

NodeJS is a server-side JavaScript runtime. Using NodeJS frameworks like Express, you can create backend services quickly and wire them up with the frontend, all in JavaScript.

We’ll be using a Node-Express application built along the lines of the Zomock application, with a MongoDB database. You’ll get a basic understanding of a Node-Express application and some things you need to consider while building for production. You’ll then set up a remote server using AWS EC2, similar to how you’d done it in the React tutorial. You’ll then set up MongoDB using MongoDB’s cloud offering called Atlas, and connect your Node-Express app to it. Finally, you’ll run your service using PM2 to keep the application running even after you’ve closed the SSH connection. We conclude with some additional steps you can choose to add to your project, and leave you with some references for further information.

Let’s jump in


You’re expected to have a basic understanding of what Node and Express are and how to write simple NodeJS code to start a server. Here is a sample tutorial in case you’re entirely new. You should have a basic idea of Postman, which we’ll be using to check if our service is working as expected.

Introduction to Node, Express and MongoDB#

NodeJS (or simply Node) is a JavaScript runtime, meaning it provides the environment to run JavaScript outside the browser, which lets you build server-side applications. Express is a framework built on top of Node that helps you create the endpoints for your application.

MongoDB is a NoSQL database that stores data in the form of documents and collections. It is NoSQL, since it doesn't have tables and doesn't enforce a fixed schema across all documents.

In case you need further brushing up on any of these, take a look at the links in the last section of the tutorial. Note that while we'll not be focusing on the development aspect, and instead will be looking at the deployment, you're still expected to know the basics to be able to understand some of the concepts we'll be using.

Introduction to the application we'll be using#

We'll be using a simple mock Zomato API express application for the tutorial. This API exposes an endpoint to return a list of restaurants with details like rating, cost. You can also add restaurants by making a POST request. The application uses Node and Express for the logic, and MongoDB as a database, which we'll be setting up from scratch in the coming sections.
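At its core, the API boils down to two operations: list restaurants and add one. Here's a rough, framework-free sketch of that logic (the names and validation below are illustrative, not the repo's actual code, which wires these into Express routes backed by MongoDB):

```javascript
// Illustrative sketch of the two operations the mock Zomato API performs.
// An in-memory array stands in for MongoDB so the logic is visible at a glance.
const restaurants = [];

// GET /restaurants — return every stored restaurant
function listRestaurants() {
  return restaurants;
}

// POST /restaurants/add — minimal validation, then store the document
function addRestaurant(doc) {
  if (!doc || !doc.name) {
    throw new Error("restaurant needs at least a name");
  }
  restaurants.push(doc);
  return doc;
}
```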

Introduction to AWS hosting services and EC2#

AWS isn’t something you’re new to, or you wouldn’t be reading this tutorial, but a one-liner for it is that it’s a cloud hosting solutions provider by Amazon that allows you to host, manage and scale applications. For the sake of this tutorial, AWS will provide you the remote server where your Node app will eventually run. The server itself will be located in an Amazon data center, but you’ll be able to access it remotely from your PC via a set of commands. We’ll be using the EC2 service of AWS. EC2 stands for Elastic Compute Cloud, and it does what we described above - it lets you access a remote server and use it to host applications.

Setting up an AWS EC2 instance#

Next, let’s set up a remote EC2 server instance. As said before, you’ll need an AWS account for the same. If you don’t already have one, you’d need to create it. Remember, it’ll ask you for debit/credit card credentials, but as long as you follow the steps in this tutorial, you will not get charged for it.

To set up an AWS account, go to the AWS signup page and follow the steps to set up an account. You’ll get a confirmation email once your account is set up and ready.

Once you login to the account, you should see a screen similar to this

Click on the blue ‘Launch a virtual machine’ line, and you’ll be taken to the EC2 setup screen, wherein you’d have to select an AMI, an Amazon Machine Image.

An AMI describes the configuration of the server you’d be using to host your application, including the OS configuration - Linux, Ubuntu, Windows etc. If you have been following tech news, a Mac version was also released for the first time in early 2021.

We’ll be going with Ubuntu Server 20.04. You may choose another, but the rest of the steps might vary slightly. Also, do NOT choose an option that doesn’t have the ‘Free tier eligible’ tag; otherwise, you’ll have to sell off some jewellery to pay the AWS bill.

The next step is choosing an instance type. This describes the server configuration, including CPU, memory, storage, and so on.

Here, we’ll pick the t2.micro instance type, which is the only one available in the free tier. You’ll need larger ones as your application’s size and its RAM or processing requirements grow. In case you’re not clear on any of the column fields, click the information icon next to the heading to get a description of what it means.

Once this is done, click on Next: Configure Instance Details

Here, you’re asked for the number of server instances you wish to create and some properties regarding them. We only need one server instance. The rest of the properties are auto-filled based on the configuration we selected in earlier steps and/or default values, and thus should be left as they are.

Next, click on Add storage

As the name suggests, storage refers to the amount of storage in our server. Note that this isn’t the storage you’d consider for storing databases. This is temporary storage that will last only as long as the instance, and thus can be used for things like caching. A size of 8GB, which is part of the free tier and is the default, suffices for our purpose.

Next, we’d be adding a tag for our instance. It is a key:value pair that describes an instance. Since we only have a single instance right now, it is not very useful, but when you are working with multiple instances and instance volumes, as will be the case when the application scales, it is used to group, sort and manage these instances.

Next, we’ll be adding a security group to our instance. A security group is practically a firewall for your instance, restricting the traffic that can come in and what ports it can access, called inbound, and the traffic that can go out, called outbound. There are further options to restrict the traffic based on IP. For instance, your application will run on port 5000, and thus, that’s a port you’d want all your users to be able to access. Compare that to a Postgres database service running on port 5432. You don’t want anyone else but you meddling with that, so you’d restrict access on that port to your IP only.

Create a new security group. Next, we have to add the rules for the group, describing which ports are accessible to the outside world, and to whom. Note that outbound traffic has no restrictions by default, meaning that your application can send a request anywhere without any restriction from the security group unless you choose to restrict it. As for inbound, we’ll first add HTTP on port 80 and HTTPS on port 443. Next, we’ll add an SSH rule for port 22. SSH stands for Secure Shell and will allow you to connect to your instance, as we’ll soon see in the coming section. Finally, we’ll add a custom TCP rule for the port our application is going to expose - port 5000.

For simplicity, we’ll keep the sources of all of those at ‘anywhere’. Ideally, SSH should be limited only to those you want to allow to connect to your instance, but for the sake of the tutorial, we’ll keep it at anywhere.

Once the rules are set, click on Review and Launch. You’ll be shown the configurations you’ve selected to ensure you didn’t make a mistake anywhere. Once you hit launch, you’ll be asked to create/select a key pair. As the name suggests, it’s a pair of keys - one held by AWS, and the other by you, that acts as a sort of password for you to connect to your instance. Anyone wishing to SSH into this instance must have access to this key file or they won’t be able to.

The file contains an RSA private key, which is what grants you access to the instance. Click on Create new, give it a name (that you must remember), and download it.

It’s recommended that you download the .pem key file to C:/Users/Home directory on Windows( /home/usr or similar for Linux and Mac), to avoid any access issues.

Once the file is downloaded, you’ll get a prompt that your instance is starting, and after a few minutes, it will be up. Your EC2 home page should look like this. Note the Name tag (Main) and the instance type (t2.micro) that we selected when we were setting up the instance.

Next, select the instance, and click on Connect on the top bar. It’ll open this page :

This lists a few ways in which you can connect to the instance. Go to the SSH client tab. Now, we’ll be using the terminal to connect to your instance(remote server). For that, open a new terminal as administrator(superuser or sudo for linux), and navigate to the directory where you stored the .pem key file.

First, we’ll run the chmod 400 keyfilename.pem command to make the file readable by you alone, removing all other permissions. Note that if the key file gets lost or overwritten, you’ll lose SSH access to that instance and will have to recreate it, since AWS won’t let you download the .pem file again.

And once you’re done with that, it’s time for the high jump - connecting via a simple command to a remote computer thousands of miles away. The command to run will be on the AWS page as shown above - the ssh -i one, of the form ssh -i keyfilename.pem ubuntu@your-instance-public-dns.

It means that we’re SSH-ing into the instance identified by its public DNS, and the proof that we’re authorized to do so is in the .pem file.

It’ll ask a confirmation prompt that you have to type yes to, and if all works well, you should see a welcome to Ubuntu text as shown above, which means that you’re now logged into the instance.

Great going.

Now, our next step is to bring the code into our instance and run it. To do that, we'll clone the repo we're working with, using

git clone

Once it's complete, go to the installed folder using

cd zomock

We'll have to create an additional .env file in the repo. What is this file for? Our app has some configurations and credentials that we'd rather keep secret. This includes things like database passwords, connection urls and so on. Thus, we need a file where we can store this, and NOT commit this file to version control. The .env file is the accepted standard.

In our case, we'll be storing two things - one, the PORT number of our application and two, the connection URL to our MongoDB database, which includes a database username and password. For now, we'll start with just the port number, and add the database URL once we set up the database in the next section. To create the env file, type

nano .env

This will open the env file in the Nano text editor.

Add the following line in there (the app reads the port from process.env.PORT) :

PORT=5000
To save the file, press Ctrl + X. You'll be prompted if you want to save the changes. Enter Y, and the file will be saved and you'll go back to the CLI.
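For context on what happens to this file at runtime: the app typically loads it with a package like dotenv, which reads the file line by line and merges KEY=VALUE pairs into process.env. A minimal sketch of that parsing (illustrative, not dotenv's actual source):

```javascript
// Minimal .env parsing: split into lines, keep KEY=VALUE pairs, and
// ignore blank lines and comments. Real loaders such as dotenv also
// handle quoting and other edge cases.
function parseEnv(text) {
  const vars = {};
  for (const line of text.split("\n")) {
    const match = line.match(/^([A-Za-z_][A-Za-z0-9_]*)=(.*)$/);
    if (match) vars[match[1]] = match[2];
  }
  return vars;
}
```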

The next step is to install the dependencies.

npm install

Did you get an error? Of course you did. You need to install npm on the instance. How do you do that? The answer’s in the error itself :

sudo apt install npm

If you get an error like this, run sudo apt-get update and then rerun the above command

This will take a few minutes to complete. Once it’s done, try running npm install again, and you’ll see that this time, you’re able to.

In case you see an error like this now, or anytime throughout this project, add a sudo before any command you run(for eg, sudo npm install)

Now, start the application using

npm run start

You should see a line saying Server running on port 5000
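The entry point typically derives that port from process.env with a fallback, along these lines (a sketch; the repo's actual index.js may differ):

```javascript
// Resolve the listen port: prefer the PORT value loaded from .env,
// and fall back to 5000 — matching the "Server running on port 5000" log.
function resolvePort(env) {
  const port = Number(env.PORT);
  return Number.isInteger(port) && port > 0 ? port : 5000;
}
```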

Are we done? Not quite. We still haven't set up the database, and thus, we wouldn't be able to do anything at all with the service. Let's resolve that in the next section.

Setting up MongoDB#

We'll be using the MongoDB cloud service called Atlas to create the database that our Node-Express service will be interacting with. One of the great advantages of MongoDB is this cloud service, which you can set up, configure and maintain without having to install anything at all anywhere - something you don't get out of the box with traditional relational DB systems like Postgres or MySQL.

MongoDB has a free tier option, and that's what we'll be using. Remember, you'll not be prompted to add your billing details anywhere. If you are, that means you did a step wrong.

To get started, go to the MongoDB Atlas site and log in/create an account. Follow through the steps to set up your account.

Then, you'll be asked to select a cluster type. Select the free version as shown

Next, you'll be asked to customize your cluster details like hosting zone. Leave everything unchanged, and after ensuring that there's no total cost shown at the bottom, select Create.

It'll take a minute or two for your cluster to get created. Once it's ready, you should see a screen like this.

Carefully take a look at the various details being shown, such as the R W graph - R and W stand for Reads and Writes respectively, which is an important metric for determining the traffic to your DB.

The connections graph shows the number of connections to your DB. A connection is either via an application, as we'll do, or via the command line, and for practical purposes, represents the number of folks modifying/viewing our database.

The in/out graph shows the bytes transferred to/from the database every second.

Data size is the size of the database.

Now, to establish a connection to the database, we need to do a few things first.

Click on Connect next to the cluster name, and you'll be prompted to add a connection IP address. This specifies which traffic is allowed to connect to the database. Remember, in a production application, you dare not give direct database access to anyone and everyone, or you might end up losing/leaking thousands of users' data. However, for ease of access, we'll start with the 'Allow access from anywhere' option, since we'll be connecting via an EC2 instance, which has a dynamic IP, and you'd otherwise have to keep updating the rules every now and then.

Click on Add IP Address

Next, you have to create a database user. You can create any username and password(make sure you remember it).

Next, you'll be asked to choose a connection method - via shell(CLI), Compass(GUI) or via an application, which is the one we'll use. You'll then be asked to pick a driver version, and a connection string. Ensure that the driver is Node.JS and version is 4.0 and later. Copy the connection string.

Now, go to the .env file we'd created on our server instance. Add a line like the following there (no extra spaces, or you might face unexpected errors) :

MONGO_URL=mongodb+srv://<username>:<password>@<your-cluster-address>/<database>
And replace the username and password with the user's credentials you had created.

Do you see why we did that? We wish to restrict access to the database, and thus the connection string, which is used to connect to it, will only be present in a secure local environment and will not be committed with the rest of the code.
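One gotcha worth knowing: if the password contains characters like @ or /, it must be percent-encoded inside the connection string, or the connection will fail. A small illustrative helper (the placeholder names here are hypothetical, not Atlas's API):

```javascript
// Substitute credentials into an Atlas-style connection string template,
// percent-encoding them so special characters don't break the URL.
function buildMongoUrl(template, username, password) {
  return template
    .replace("<username>", encodeURIComponent(username))
    .replace("<password>", encodeURIComponent(password));
}
```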

With this, you'll finally have added the last requirement to your code. Now, we can run the application using

npm run start

Now, in addition to the 'Server running on port 5000', you should see an additional

'Connected to database' message as well.

If you don't, you need to recheck your connection string.

Testing the application done so far#

Now, we need to test if the application is actually working. Since it's a backend-only service without a frontend, we'd need to use an API testing tool. We'll be going with Postman.

Go to the Postman website. If it's your first time with Postman, there'll be some setup steps.

If we were developing this on our local laptops/PCs, we'd have used a localhost:5000 link. However, since it's on a remote server, we need to find the IP address of the server.

This IP can be found from the AWS instance details - Public IPV4 address.

Paste the IP into the request field on Postman. Add an http:// before the IP and a :5000 after.

Now, if you check the Readme of the repo, hitting the /restaurants endpoint should retrieve a list of restaurants present in the DB. Add a /restaurants after the :5000 and hit send.

If it works well, you should see an empty array [] in the response tab, since there's no data in the database yet. If you get an error like connection refused or request timed out, recheck the IP.
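Putting those pieces together, the full request URL has a fixed shape; a trivial helper makes it explicit (the IP in the example is illustrative, not a real instance):

```javascript
// Build the request URL from the instance's public IP and an endpoint path:
// http:// + IP + :5000 (our app's port) + the path.
function apiUrl(publicIp, path) {
  return `http://${publicIp}:5000${path}`;
}
```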

Now, let's try adding some data to the DB. Another look at the readme file will show that making a POST request to the endpoint /restaurants/add will create a restaurant. So update the endpoint, and add the following restaurant data in the body :

    {
        "_id": "6073ccae8bab295faebb5718",
        "name": "Kiran Plaza",
        "rating": "5",
        "image": "",
        "cost": "350",
        "numOfReviews": "4380",
        "discount": "40%",
        "spec": "Chinese",
        "area": "Koramangala"
    }

Now, rerun the get request, and you should see this restaurant being returned.

Setting up additional packages#

Great, so you got it all running on a server. But we’re not done. What happens if you close the terminal? Try doing just that and see if your GET requests still work.

As expected, they won’t. And that doesn’t make sense - for a server to stay up, you shouldn’t have to keep a dedicated computer with a terminal open all day; otherwise, there’s no point in holding a remote server.

Fortunately, there’s a simple npm package that can keep your service running even when your terminal isn’t open. It’s called pm2 (short for Process Manager 2). Apart from ensuring that the server remains up, you can use it to check the status of all your running node processes and figure out which of them are causing issues, for logs management - tracking the application to see where errors/bugs/incidents, if any, occur - and for metrics such as memory consumed.

So, we’ll be installing the same on our server and then configuring it to start our node service. Again SSH into the instance using the ssh -i command, go to the project directory, and write

npm i -g pm2

Note the -g flag. It stands for global, meaning that pm2 will be installed as a global package, not just for our project. This is important, because pm2 is expected to handle the restarting of the application even if our project stops, and any project level dependency would not be able to do it.

Once that’s done, we need to start our service using pm2.

The command for that is

pm2 start zomock/index.js -i max --watch

-i max - runs the app in cluster mode, spawning as many processes as there are CPU cores. Because a single Node process runs your JavaScript on one thread, spreading the load across one process per core maximizes the performance of the app.

--watch - allows the app to automatically restart if there are any changes to the directory.

Note that the above command should be run in the root(outside of the zomock directory)

Now, if you close the terminal and make a GET request, you'll see that you're able to still get a response.

Note : Due to an issue with PM2, sometimes the production environment is unable to parse the MongoDB connection string correctly from the .env file. So, in case you get a connection refused issue when making a GET request, declare the mongo URL as a const in index.js itself, use that constant instead of process.env.MONGO_URL, and you should be good to go
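That workaround amounts to a fallback pattern like the following (the hard-coded string is a placeholder for your own connection string, not a real one):

```javascript
// Fall back to a hard-coded connection string when the environment
// variable doesn't survive into the PM2-managed process.
function resolveMongoUrl(env, fallback) {
  return env.MONGO_URL || fallback;
}

// In index.js this would look something like:
// const MONGO_URL = resolveMongoUrl(process.env, "mongodb+srv://<user>:<pass>@<cluster>/zomock");
```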

Monitoring using pm2#

In production environments, we often need to monitor our deployed code for issues/crashes, so they can be resolved quickly. Fortunately, pm2 can help us with that as well.

Enter the command pm2 monitor on the terminal.

It'll prompt you to sign up for a pm2 account, and once you do, you'll get a URL which holds the metrics dashboard for your application

If you go to that URL in the browser, you'll be able to see metrics of your application like the requests being made, as well as issues and errors. This is extremely advantageous when working with a large number of users


Thus, in this tutorial, you learnt how to deploy a Node-Express based application onto an EC2 server you'd set up from scratch. You also set up a MongoDB database and connected it to your application. You then ensured that your application continues running even after you close the terminal running the process. Finally, you learnt some concepts of monitoring and set up monitoring for your application using PM2.

Some of the biggest challenges in backend development for production are tracking errors and handling them gracefully. You should further research how to handle exceptions, how to catch and log errors, and how to ensure that the user has a seamless experience.