How to Set Up Replicas Using MongoDB - Part 1


Introduction

In this article we’ll guide you through how to set up replicas with MongoDB in your local development environment. Today’s technical market is very demanding, and losing data is not an option for any production application. If one of your database servers goes down, you want multiple replicas with the same data ready to take over so there is no disruption in service.

Prerequisites

  • You should have MongoDB installed (you can verify your installation with the command shown after this list)
  • It’s not required, but it’s recommended that you have some previous command-line experience with MongoDB, primarily with how to start an instance
  • You should have access to the MongoDB shell
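
A quick way to verify that MongoDB is installed is to check the server version (a minimal sanity check; the exact version string will vary with your installation):

$ mongod --version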

Preface

A quick note before we start: we are working locally on a Mac. The commands we issue are run from a Demo directory that we created for this walkthrough.

Goal

We will be creating a cluster with three instances of MongoDB, where one instance will be the primary and the other two will be secondaries that sync up and replicate the data.

Set Up Replicas Locally Using MongoDB

Set up directories for each instance

First we create three directories, one for each of the three instances in our cluster.

$ mkdir -p data/node1
$ mkdir -p data/node2
$ mkdir -p data/node3

The mkdir (make directory) command with the -p flag will create the nested directories that we want.
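
If your shell supports brace expansion (both bash and zsh do), the same three directories can be created with a single command; this is purely a convenience and is equivalent to the three commands above:

$ mkdir -p data/{node1,node2,node3}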

We can see our current directory structure using the tree command:

$ tree .
.
└── data
    ├── node1
    ├── node2
    └── node3

(If you don’t have tree installed, you can just verify the structure with ls and cd.)
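
Since we’re working on a Mac, tree can optionally be installed with Homebrew, assuming you already use it:

$ brew install tree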

Then we create a directory where the log files for each instance will reside:

$ mkdir logs

And we create the log files:

$ cd logs
$ touch node1.log
$ touch node2.log
$ touch node3.log
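
Since we changed into the logs directory, remember to change back to the Demo directory; the remaining commands in this walkthrough are run from there:

$ cd ..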

Now we have:

$ tree .
.
├── data
│   ├── node1
│   ├── node2
│   └── node3
└── logs
    ├── node1.log
    ├── node2.log
    └── node3.log

5 directories, 3 files

Start the three instances

Now we start the three instances with these commands:

$ mongod --replSet demo --dbpath data/node1 --logpath logs/node1.log --port 27000 --fork
$ mongod --replSet demo --dbpath data/node2 --logpath logs/node2.log --port 27001 --fork
$ mongod --replSet demo --dbpath data/node3 --logpath logs/node3.log --port 27002 --fork

We are running these commands from our Demo directory, which contains the data and logs directories we created.

The --dbpath flag specifies where each instance stores its data, and the --logpath flag specifies where each instance’s log file lives. The --port flag puts each instance on a different port so they don’t conflict. The --fork flag runs each mongod in the background, which allows us to start multiple instances from one shell (note that --fork requires --logpath). The --replSet flag sets the replica set name, which we simply called demo.
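
To confirm that all three mongod processes actually started, you can look for them from the same shell (a quick sanity check; the exact output columns will vary by system):

$ ps aux | grep '[m]ongod'

The [m] in the pattern is a small trick that keeps grep from matching its own process in the list.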

Replica Configuration

Now we set up the replica set configuration by calling the rs.initiate() function and providing it with a document of the settings we want. For _id we give it the same name that we passed to --replSet earlier, which was demo. We also give it the members of the replica set, one entry per instance.

You’ll need to execute this command from the mongo shell, so log into the shell of the first instance with:

$ mongo localhost:27000

Then inside the shell you can define an object with your settings and hit return:

replicaConfig = {
   '_id':'demo',
   'members':[
      {'_id':0, 'host': 'localhost:27000'},
      {'_id':1, 'host': 'localhost:27001'},
      {'_id':2, 'host': 'localhost:27002'}
   ]
}

Then finally we initiate the replica set with the rs.initiate() function, passing in our config as a parameter:

rs.initiate(replicaConfig)

You should receive a response similar to the one below:

anonymous:demoDatabase >rs.initiate(replicaConfig)
{
        "ok" : 1,
        "operationTime" : Timestamp(1559576327, 1),
        "$clusterTime" : {
                "clusterTime" : Timestamp(1559576327, 1),
                "signature" : {
                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                        "keyId" : NumberLong(0)
                }
        }
}

At this point it’s good to run the rs.status() function to get the status of our replica set and see which instance is our primary and which are our secondaries.

anonymous:demoDatabase >rs.status()
{
        "set" : "demo",
        "date" : ISODate("2019-06-03T15:40:41.032Z"),
        "myState" : 1,
        "term" : NumberLong(1),
        "syncingTo" : "",
        "syncSourceHost" : "",
        "syncSourceId" : -1,
        "heartbeatIntervalMillis" : NumberLong(2000),
        "optimes" : {
                "lastCommittedOpTime" : {
                        "ts" : Timestamp(1559576439, 1),
                        "t" : NumberLong(1)
                },
                "readConcernMajorityOpTime" : {
                        "ts" : Timestamp(1559576439, 1),
                        "t" : NumberLong(1)
                },
                "appliedOpTime" : {
                        "ts" : Timestamp(1559576439, 1),
                        "t" : NumberLong(1)
                },
                "durableOpTime" : {
                        "ts" : Timestamp(1559576439, 1),
                        "t" : NumberLong(1)
                }
        },
        "lastStableCheckpointTimestamp" : Timestamp(1559576399, 1),
        "members" : [
                {
                        "_id" : 0,
                        "name" : "localhost:27000",
                        "health" : 1,
                        "state" : 1,
                        "stateStr" : "PRIMARY",
                        "uptime" : 1119,
                        "optime" : {
                                "ts" : Timestamp(1559576439, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDate" : ISODate("2019-06-03T15:40:39Z"),
                        "syncingTo" : "",
                        "syncSourceHost" : "",
                        "syncSourceId" : -1,
                        "infoMessage" : "could not find member to sync from",
                        "electionTime" : Timestamp(1559576337, 1),
                        "electionDate" : ISODate("2019-06-03T15:38:57Z"),
                        "configVersion" : 1,
                        "self" : true,
                        "lastHeartbeatMessage" : ""
                },
                {
                        "_id" : 1,
                        "name" : "localhost:27001",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 113,
                        "optime" : {
                                "ts" : Timestamp(1559576439, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDurable" : {
                                "ts" : Timestamp(1559576439, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDate" : ISODate("2019-06-03T15:40:39Z"),
                        "optimeDurableDate" : ISODate("2019-06-03T15:40:39Z"),
                        "lastHeartbeat" : ISODate("2019-06-03T15:40:39.981Z"),
                        "lastHeartbeatRecv" : ISODate("2019-06-03T15:40:40.422Z"),
                        "pingMs" : NumberLong(0),
                        "lastHeartbeatMessage" : "",
                        "syncingTo" : "localhost:27000",
                        "syncSourceHost" : "localhost:27000",
                        "syncSourceId" : 0,
                        "infoMessage" : "",
                        "configVersion" : 1
                },
                {
                        "_id" : 2,
                        "name" : "localhost:27002",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 113,
                        "optime" : {
                                "ts" : Timestamp(1559576439, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDurable" : {
                                "ts" : Timestamp(1559576439, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDate" : ISODate("2019-06-03T15:40:39Z"),
                        "optimeDurableDate" : ISODate("2019-06-03T15:40:39Z"),
                        "lastHeartbeat" : ISODate("2019-06-03T15:40:39.981Z"),
                        "lastHeartbeatRecv" : ISODate("2019-06-03T15:40:40.423Z"),
                        "pingMs" : NumberLong(0),
                        "lastHeartbeatMessage" : "",
                        "syncingTo" : "localhost:27000",
                        "syncSourceHost" : "localhost:27000",
                        "syncSourceId" : 0,
                        "infoMessage" : "",
                        "configVersion" : 1
                }
        ],
        "ok" : 1,
        "operationTime" : Timestamp(1559576439, 1),
        "$clusterTime" : {
                "clusterTime" : Timestamp(1559576439, 1),
                "signature" : {
                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                        "keyId" : NumberLong(0)
                }
        }
}

This is a lot of information, so here is a trimmed-down summary of the most relevant fields:

anonymous:demoDatabase >rs.status()
{
        "set" : "demo",
        "members" : [
                {
                        "_id" : 0,
                        "name" : "localhost:27000",
                        "health" : 1,
                        "state" : 1,
                        "stateStr" : "PRIMARY",
                },
                {
                        "_id" : 1,
                        "name" : "localhost:27001",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                },
                {
                        "_id" : 2,
                        "name" : "localhost:27002",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                }
        ],
        "ok" : 1,
}

As you can see, the instance on port 27000 is our PRIMARY, while 27001 and 27002 are SECONDARY. MongoDB recognizes them as a set, so it looks like we have successfully set up a replica set. Of course, we still want to verify that data is actually being replicated …
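
As one more quick check, you can connect to one of the other members and confirm its role from there; for example, the second instance should report that it is not the primary. Here we use db.isMaster(), a long-standing shell helper (your exact output may differ slightly by version):

$ mongo localhost:27001
> db.isMaster().ismaster
false
> db.isMaster().primary
localhost:27000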

Continue to Part 2 to Verify the Replication

We’ve done all the hard work and have successfully made a replica set in MongoDB. In Part 2 of this tutorial, we’ll show you how to add data to the primary instance and verify that the data is in fact replicated to the secondary instances. We hope you’ll stick around for Part 2.

