How to Set Up Replicas using MongoDB - Part 1

Introduction

In this article we’ll guide you through setting up replicas with MongoDB in your local development environment. Today’s technical market is demanding, and losing data is not an option for any production application. If one of your database servers goes down, you want multiple replicas with the same data ready to take over so there is no disruption in service.

Prerequisites

  • You should have MongoDB installed.
  • It’s not required, but it’s recommended that you have some previous command line experience with MongoDB, primarily with how to start an instance.
  • You should have access to the MongoDB command line.

Preface

A quick note before we begin: we are working locally on a Mac, and the commands we issue are typically run from a Demo directory that we have.

Goal

We will be creating a cluster with three instances of MongoDB, where one instance will be the primary and the other two will sync up and replicate the data.

Set Up Replicas Locally using MongoDB

Set up directories for each instance

First, we create three directories, one for each of the three instances in our cluster.

$ mkdir -p data/node1
$ mkdir -p data/node2
$ mkdir -p data/node3

The mkdir (make directory) command with the -p flag creates the nested directories we want, including any missing parent directories along the way.
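If your shell supports brace expansion (bash and zsh both do), the three commands above can be collapsed into one. This is just an optional shortcut:

```shell
# Create all three data directories in one command using brace expansion.
# Equivalent to running mkdir -p once per node directory.
mkdir -p data/node{1,2,3}
```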

We can see our current directory structure using the tree command:

$ tree .
.
└── data
    ├── node1
    ├── node2
    └── node3

(If you don’t have tree installed, you can inspect the structure with ls and cd instead.)

Then we create a directory for the log files to reside in:

$ mkdir logs

And we create the log files:

$ cd logs
$ touch node1.log
$ touch node2.log
$ touch node3.log
$ cd ..

Now we have:

$ tree .
.
├── data
│   ├── node1
│   ├── node2
│   └── node3
└── logs
    ├── node1.log
    ├── node2.log
    └── node3.log

5 directories, 3 files

Start three instances

Now we start the three instances with these commands:

$ mongod --replSet demo --dbpath data/node1 --logpath logs/node1.log --port 27000 --fork
$ mongod --replSet demo --dbpath data/node2 --logpath logs/node2.log --port 27001 --fork
$ mongod --replSet demo --dbpath data/node3 --logpath logs/node3.log --port 27002 --fork

We run these commands from our Demo directory, which contains the data and logs directories.

As you can see, we use the --dbpath flag to specify where each instance stores its data and the --logpath flag to specify where each instance’s log file lives. The --port flag puts each instance on a different port. The --fork flag runs each mongod in the background, which allows us to start multiple instances from one shell. The --replSet flag sets the replica set name, which we simply called demo.
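If you prefer not to pass everything on the command line, the same flags map onto mongod’s YAML configuration file format. Below is a hypothetical node1.conf mirroring the first command (the file name is our own choice); you would then start that instance with mongod --config node1.conf:

```yaml
# Hypothetical node1.conf equivalent to the command-line flags above
replication:
  replSetName: demo        # --replSet demo
storage:
  dbPath: data/node1       # --dbpath data/node1
systemLog:
  destination: file
  path: logs/node1.log     # --logpath logs/node1.log
net:
  port: 27000              # --port 27000
processManagement:
  fork: true               # --fork
```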

Replica Configuration

Now we set up the replica configuration with the rs.initiate() function, providing it with a document describing the settings we want. For _id we give it the same name that we passed to --replSet earlier, demo. We also give it the members of the replica set.

You’ll need to execute this command from the mongo shell, so connect to the shell of the first instance with:

mongo localhost:27000

Then inside the shell you can define an object with your settings and hit return:

replicaConfig = {
   '_id':'demo',
   'members':[
      {'_id':0, 'host': 'localhost:27000'},
      {'_id':1, 'host': 'localhost:27001'},
      {'_id':2, 'host': 'localhost:27002'}
   ]
}

Then, finally, we set up the replica set by calling the rs.initiate() function, passing our config in as a parameter:

rs.initiate(replicaConfig)

You should receive a response similar to the one below:

anonymous:demoDatabase >rs.initiate(replicaConfig)
{
        "ok" : 1,
        "operationTime" : Timestamp(1559576327, 1),
        "$clusterTime" : {
                "clusterTime" : Timestamp(1559576327, 1),
                "signature" : {
                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                        "keyId" : NumberLong(0)
                }
        }
}

At this point it’s a good idea to run the rs.status() function to get the status of our replica set and see which instance is the primary and which are the secondaries.

anonymous:demoDatabase >rs.status()
{
        "set" : "demo",
        "date" : ISODate("2019-06-03T15:40:41.032Z"),
        "myState" : 1,
        "term" : NumberLong(1),
        "syncingTo" : "",
        "syncSourceHost" : "",
        "syncSourceId" : -1,
        "heartbeatIntervalMillis" : NumberLong(2000),
        "optimes" : {
                "lastCommittedOpTime" : {
                        "ts" : Timestamp(1559576439, 1),
                        "t" : NumberLong(1)
                },
                "readConcernMajorityOpTime" : {
                        "ts" : Timestamp(1559576439, 1),
                        "t" : NumberLong(1)
                },
                "appliedOpTime" : {
                        "ts" : Timestamp(1559576439, 1),
                        "t" : NumberLong(1)
                },
                "durableOpTime" : {
                        "ts" : Timestamp(1559576439, 1),
                        "t" : NumberLong(1)
                }
        },
        "lastStableCheckpointTimestamp" : Timestamp(1559576399, 1),
        "members" : [
                {
                        "_id" : 0,
                        "name" : "localhost:27000",
                        "health" : 1,
                        "state" : 1,
                        "stateStr" : "PRIMARY",
                        "uptime" : 1119,
                        "optime" : {
                                "ts" : Timestamp(1559576439, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDate" : ISODate("2019-06-03T15:40:39Z"),
                        "syncingTo" : "",
                        "syncSourceHost" : "",
                        "syncSourceId" : -1,
                        "infoMessage" : "could not find member to sync from",
                        "electionTime" : Timestamp(1559576337, 1),
                        "electionDate" : ISODate("2019-06-03T15:38:57Z"),
                        "configVersion" : 1,
                        "self" : true,
                        "lastHeartbeatMessage" : ""
                },
                {
                        "_id" : 1,
                        "name" : "localhost:27001",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 113,
                        "optime" : {
                                "ts" : Timestamp(1559576439, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDurable" : {
                                "ts" : Timestamp(1559576439, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDate" : ISODate("2019-06-03T15:40:39Z"),
                        "optimeDurableDate" : ISODate("2019-06-03T15:40:39Z"),
                        "lastHeartbeat" : ISODate("2019-06-03T15:40:39.981Z"),
                        "lastHeartbeatRecv" : ISODate("2019-06-03T15:40:40.422Z"),
                        "pingMs" : NumberLong(0),
                        "lastHeartbeatMessage" : "",
                        "syncingTo" : "localhost:27000",
                        "syncSourceHost" : "localhost:27000",
                        "syncSourceId" : 0,
                        "infoMessage" : "",
                        "configVersion" : 1
                },
                {
                        "_id" : 2,
                        "name" : "localhost:27002",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 113,
                        "optime" : {
                                "ts" : Timestamp(1559576439, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDurable" : {
                                "ts" : Timestamp(1559576439, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDate" : ISODate("2019-06-03T15:40:39Z"),
                        "optimeDurableDate" : ISODate("2019-06-03T15:40:39Z"),
                        "lastHeartbeat" : ISODate("2019-06-03T15:40:39.981Z"),
                        "lastHeartbeatRecv" : ISODate("2019-06-03T15:40:40.423Z"),
                        "pingMs" : NumberLong(0),
                        "lastHeartbeatMessage" : "",
                        "syncingTo" : "localhost:27000",
                        "syncSourceHost" : "localhost:27000",
                        "syncSourceId" : 0,
                        "infoMessage" : "",
                        "configVersion" : 1
                }
        ],
        "ok" : 1,
        "operationTime" : Timestamp(1559576439, 1),
        "$clusterTime" : {
                "clusterTime" : Timestamp(1559576439, 1),
                "signature" : {
                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                        "keyId" : NumberLong(0)
                }
        }
}

This is a lot of information, so here is a trimmed-down version showing just the fields we care about:

anonymous:demoDatabase >rs.status()
{
        "set" : "demo",
        "members" : [
                {
                        "_id" : 0,
                        "name" : "localhost:27000",
                        "health" : 1,
                        "state" : 1,
                        "stateStr" : "PRIMARY"
                },
                {
                        "_id" : 1,
                        "name" : "localhost:27001",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY"
                },
                {
                        "_id" : 2,
                        "name" : "localhost:27002",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY"
                }
        ],
        "ok" : 1
}

As you can see, the instance on port 27000 is the PRIMARY, while 27001 and 27002 are SECONDARY instances. MongoDB recognizes them as a set, so it looks like we have successfully set up a replica set. Of course, we still want to verify that data is actually being replicated …
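Rather than trimming the rs.status() output by hand, you can pull out just those fields yourself in the shell. The sketch below shows the idea; here the `status` object literal is a stand-in for the value rs.status() returns (inside the mongo shell you would write `var status = rs.status();` instead):

```javascript
// Stand-in for rs.status(); in the mongo shell: var status = rs.status();
var status = {
  set: "demo",
  members: [
    { _id: 0, name: "localhost:27000", health: 1, state: 1, stateStr: "PRIMARY" },
    { _id: 1, name: "localhost:27001", health: 1, state: 2, stateStr: "SECONDARY" },
    { _id: 2, name: "localhost:27002", health: 1, state: 2, stateStr: "SECONDARY" }
  ]
};

// Reduce each member to a "name -> state" line for a quick overview.
var summary = status.members.map(function (m) {
  return m.name + " -> " + m.stateStr;
});

summary.forEach(function (line) {
  console.log(line);  // console.log works in mongosh; the legacy shell also has print()
});
```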

Continue to Part 2 to Verify the Replication

We’ve done all the hard work and have successfully created a replica set in MongoDB. In Part 2 of this tutorial we’ll show you how to add data to the primary instance and verify that the data is in fact replicated to the secondary instances. We hope you’ll stick around for Part 2.
