Learn How to Secure MongoDB Community Version using Authentication Part 3
Introduction
This is part three of the tutorial “Learn How to Secure MongoDB Community Version using Authentication.” Part two of this series showed how to enable the SCRAM-SHA-1 and X.509 authentication mechanisms. This section will demonstrate how to enable the final two MongoDB authentication mechanisms: keyfile-based replica set authentication and X.509 internal authentication. Be sure to have a good understanding of parts one and two before proceeding with this third section.
Prerequisites
- MongoDB must be properly installed and configured before beginning.
- Parts one and two of this tutorial series must be completed.
- A good working knowledge of TLS/SSL and access to a valid X.509 certificate.
Internal Authentication for a Replica Set
Part two of this series discussed external authentication for MongoDB. This section will explain the process of configuring MongoDB for internal authentication between the members of a replica set; the same mechanisms apply to a sharded cluster.
The two methods of internal authentication for MongoDB that will be explained in this tutorial are:
- Keyfile-based Authentication.
- X.509 Certificate Based Authentication.
How to Enable the Keyfile-based Replica Set Authentication Mechanism
Part one of this series explained that keyfiles use the SCRAM-SHA-1 challenge-and-response mechanism, with the keyfile acting as a shared password for the members of the replica set. Note that keyfiles must be between six and 1024 characters long and may contain only characters from the base64 set.
To generate a keyfile use openssl as shown in the following command:
openssl rand -base64 755 > mongoKeyfile
This command uses openssl to generate 755 random bytes, encode them as a base64 string of just over 1,000 characters, and redirect the result to the file “mongoKeyfile”. After creating the file, set its attribute to “read only” with the following command:
chmod 400 mongoKeyfile
The chmod 400 command makes the file readable only by its owner, protecting it from being modified or read by other users.
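The two commands above can be combined into a quick sanity check (a sketch, assuming openssl and standard Unix tools are available): it recreates the keyfile and reports its character count, which must fall within MongoDB's six-to-1024 character limit.

```shell
# Recreate the keyfile and verify it satisfies MongoDB's constraints:
# 6-1024 characters, drawn only from the base64 character set.
rm -f mongoKeyfile
openssl rand -base64 755 > mongoKeyfile
chmod 400 mongoKeyfile

# Count the base64 characters, excluding the newlines openssl inserts.
# 755 random bytes encode to 1008 base64 characters, within the limit.
chars=$(tr -d '\n' < mongoKeyfile | wc -c)
echo "keyfile length: $chars"
```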
Next, create the data directories for the members of the replica set with the following command:
mkdir -p rs1/db rs2/db rs3/db
This creates a folder for each member, with each folder containing a “db” sub-folder. The -p option tells mkdir to create any missing parent folders and to not return an error if a folder already exists.
Use the following command to start the first member of the replica set:
mongod --replSet replSet1 --dbpath ./rs1/db --logpath ./rs1/mongod.log --port 27017 --fork --keyFile ./mongoKeyfile
The result should resemble the following:
about to fork child process, waiting until server is ready for connections.
forked process: 8145
child process started successfully, parent exiting
The commands for the remaining members of the replica set are as follows:
- 2nd member
mongod --replSet replSet1 --dbpath ./rs2/db --logpath ./rs2/mongod.log --port 27018 --fork --keyFile ./mongoKeyfile
- 3rd member
mongod --replSet replSet1 --dbpath ./rs3/db --logpath ./rs3/mongod.log --port 27019 --fork --keyFile ./mongoKeyfile
Notice that the values of --dbpath, --logpath, and --port are changed accordingly.
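Since the three start commands differ only in the --dbpath, --logpath, and --port values, a small shell loop (a sketch, using the same paths and ports as above) can generate them from a single template and help avoid copy-paste mistakes:

```shell
# Generate the three mongod start commands from one template; only the
# member number and port vary. The commands are written to a script file
# for review rather than executed directly.
for i in 1 2 3; do
  echo "mongod --replSet replSet1 --dbpath ./rs$i/db --logpath ./rs$i/mongod.log --port $((27016 + i)) --fork --keyFile ./mongoKeyfile"
done > start_members.sh
cat start_members.sh
```

Running the generated script with `sh start_members.sh` would then start all three members.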
Now initiate the replica set via the Mongo shell. Connect to the first member using the default values:
mongo
Connecting on the default port in this way, which is the same port the first member was assigned, connects to the first member of the replica set.
MongoDB shell version v4.0.10
connecting to: mongodb://127.0.0.1:27017/?gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("bdb5b82a-fdbd-4774-b59e-4e999d886a87") }
MongoDB server version: 4.0.10
>
Execute the rs.initiate() command to initiate the replica set:
> rs.initiate()
{
    "info2" : "no configuration specified. Using a default configuration for the set",
    "me" : "localhost:27017",
    "ok" : 1
}
replSet1:SECONDARY>
The first member is now initiated; however, members cannot be added yet, because starting mongod with a keyfile automatically enables client access control. Leverage the “localhost exception” to create the first user, then authenticate as that user. To do this, first switch to the admin database via the use admin command. Now execute the following command:
db.createUser(
  {
    user: "yeshua",
    pwd: "password",
    roles: [ { role: "root", db: "admin" } ]
  }
)
Now authenticate against the user that was just created with the following command:
db.auth('yeshua','password')
Now that the user is authenticated with the root role, the members of the replica set can be added using the rs.add() command. Do this by specifying the host name and port number of the second member with the following command:
rs.add("localhost:27018")
The result should resemble the following:
{
    "ok" : 1,
    "operationTime" : Timestamp(1561691510, 1),
    "$clusterTime" : {
        "clusterTime" : Timestamp(1561691510, 1),
        "signature" : {
            "hash" : BinData(0,"URedJ5udu2qkdaiNmkdTxytjfHA="),
            "keyId" : NumberLong("6707408872354611202")
        }
    }
}
Now repeat the steps to add the other members.
With all the members now successfully added, each using the shared keyfile for internal authentication, confirm the configuration by executing the rs.status() command.
The result should resemble the following:
{
    "set" : "replSet1",
    "date" : ISODate("2019-06-28T03:28:53.162Z"),
    "myState" : 1,
    "term" : NumberLong(1),
    "syncingTo" : "",
    "syncSourceHost" : "",
    "syncSourceId" : -1,
    "heartbeatIntervalMillis" : NumberLong(2000),
    "optimes" : {
        "lastCommittedOpTime" : { "ts" : Timestamp(1561692525, 1), "t" : NumberLong(1) },
        "readConcernMajorityOpTime" : { "ts" : Timestamp(1561692525, 1), "t" : NumberLong(1) },
        "appliedOpTime" : { "ts" : Timestamp(1561692525, 1), "t" : NumberLong(1) },
        "durableOpTime" : { "ts" : Timestamp(1561692525, 1), "t" : NumberLong(1) }
    },
    "lastStableCheckpointTimestamp" : Timestamp(1561692485, 1),
    "members" : [
        {
            "_id" : 0,
            "name" : "localhost:27017",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 3497,
            "optime" : { "ts" : Timestamp(1561692525, 1), "t" : NumberLong(1) },
            "optimeDate" : ISODate("2019-06-28T03:28:45Z"),
            "syncingTo" : "",
            "syncSourceHost" : "",
            "syncSourceId" : -1,
            "infoMessage" : "",
            "electionTime" : Timestamp(1561690323, 2),
            "electionDate" : ISODate("2019-06-28T02:52:03Z"),
            "configVersion" : 5,
            "self" : true,
            "lastHeartbeatMessage" : ""
        },
        {
            "_id" : 1,
            "name" : "localhost:27018",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 1022,
            "optime" : { "ts" : Timestamp(1561692525, 1), "t" : NumberLong(1) },
            "optimeDurable" : { "ts" : Timestamp(1561692525, 1), "t" : NumberLong(1) },
            "optimeDate" : ISODate("2019-06-28T03:28:45Z"),
            "optimeDurableDate" : ISODate("2019-06-28T03:28:45Z"),
            "lastHeartbeat" : ISODate("2019-06-28T03:28:51.738Z"),
            "lastHeartbeatRecv" : ISODate("2019-06-28T03:28:52.267Z"),
            "pingMs" : NumberLong(0),
            "lastHeartbeatMessage" : "",
            "syncingTo" : "localhost:27017",
            "syncSourceHost" : "localhost:27017",
            "syncSourceId" : 0,
            "infoMessage" : "",
            "configVersion" : 5
        },
        {
            "_id" : 2,
            "name" : "localhost:27019",
            "health" : 0,
            "state" : 8,
            "stateStr" : "(not reachable/healthy)",
            "uptime" : 0,
            "optime" : { "ts" : Timestamp(0, 0), "t" : NumberLong(-1) },
            "optimeDurable" : { "ts" : Timestamp(0, 0), "t" : NumberLong(-1) },
            "optimeDate" : ISODate("1970-01-01T00:00:00Z"),
            "optimeDurableDate" : ISODate("1970-01-01T00:00:00Z"),
            "lastHeartbeat" : ISODate("2019-06-28T03:28:51.867Z"),
            "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"),
            "pingMs" : NumberLong(0),
            "lastHeartbeatMessage" : "Error connecting to localhost:27019 (127.0.0.1:27019) :: caused by :: Connection refused",
            "syncingTo" : "",
            "syncSourceHost" : "",
            "syncSourceId" : -1,
            "infoMessage" : "",
            "configVersion" : -1
        }
    ],
    "ok" : 1,
    "operationTime" : Timestamp(1561692525, 1),
    "$clusterTime" : {
        "clusterTime" : Timestamp(1561692525, 1),
        "signature" : {
            "hash" : BinData(0,"j0kO1ED+52O+TryC+H30Y7jZk5o="),
            "keyId" : NumberLong("6707408872354611202")
        }
    }
}
The results show the state of each member, including the primary and the secondaries. Note that in this sample output the third member was not running when rs.status() was executed, so it reports as not reachable/healthy.
How to Enable X.509 Certificate-Based Authentication for a Replica Set
The previous section demonstrated how to enable keyfile-based authentication for a replica set. This section will explain how to use X.509 certificates to authenticate the members of a replica set or sharded cluster.
Similar to client-based X.509 authentication, as discussed in parts one and two of this series, X.509 internal authentication requires a MongoDB build compiled with support for TLS/SSL.
NOTE: Valid x.509 certificates are required.
Below are the pre-generated certificate files that will be used in this section:
- certAuth.pem — the certificate authority's certificate, containing its public key.
- client.pem — the client certificate.
- rplsetMem1.pem, rplsetMem2.pem, rplsetMem3.pem — the certificates for the members of the replica set.
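For local experimentation (assuming openssl is available), a throwaway CA and one member certificate with these file names can be generated as follows. This is only a sketch with placeholder subject names; production member certificates must meet MongoDB's requirements, including matching O/OU/DC attributes across all members.

```shell
# Create a self-signed test CA: certAuth.pem is its public certificate.
openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
  -keyout ca.key -out certAuth.pem -subj "/CN=MongoTestCA"

# Create a key and signing request for the first member (placeholder subject).
openssl req -nodes -newkey rsa:2048 \
  -keyout member1.key -out member1.csr -subj "/CN=localhost/O=replSet"

# Sign the member certificate with the test CA.
openssl x509 -req -in member1.csr -CA certAuth.pem -CAkey ca.key \
  -CAcreateserial -days 365 -out member1.crt

# mongod expects the private key and certificate in one PEM file.
cat member1.key member1.crt > rplsetMem1.pem

# Confirm the member certificate chains back to the CA.
openssl verify -CAfile certAuth.pem member1.crt
```

Repeating the last three commands with different file names produces rplsetMem2.pem and rplsetMem3.pem.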
Now, performing the same process covered in the previous section, begin the process of enabling the X509 internal authentication.
First, create folders for the members with the following command:
mkdir -p rs1/db rs2/db rs3/db
Now start the first member with the following command:
mongod --replSet rplSetCert --dbpath ./rs1/db --logpath ./rs1/mongod.log --port 27017 --fork --sslMode requireSSL --clusterAuthMode x509 --sslPEMKeyFile rplsetMem1.pem --sslCAFile certAuth.pem
Except for the TLS/SSL options, most of the command is the same as the one used in the previous section on enabling the keyfile-based replica set authentication mechanism. The --sslMode requireSSL option forces TLS connections, and X.509 internal authentication is selected with --clusterAuthMode x509. Pass the first member's certificate with --sslPEMKeyFile rplsetMem1.pem, followed by the certificate authority file with --sslCAFile certAuth.pem.
The result should resemble the following:
about to fork child process, waiting until server is ready for connections.
forked process: 8145
child process started successfully, parent exiting
Now that the first member is set up, repeat the procedure with the remaining two members as follows:
The 2nd Member
mongod --replSet rplSetCert --dbpath ./rs2/db --logpath ./rs2/mongod.log --port 27018 --fork --sslMode requireSSL --clusterAuthMode x509 --sslPEMKeyFile rplsetMem2.pem --sslCAFile certAuth.pem
The 3rd Member
mongod --replSet rplSetCert --dbpath ./rs3/db --logpath ./rs3/mongod.log --port 27019 --fork --sslMode requireSSL --clusterAuthMode x509 --sslPEMKeyFile rplsetMem3.pem --sslCAFile certAuth.pem
NOTE: Be certain to change only the --dbpath, --logpath, --port, and the member certificate files.
Now all the members have been successfully started. Connect to and initialize the members, and add each one exactly as in the previous section.
Because the commands are getting longer and more difficult to read, this section will also cover leveraging configuration files.
The configuration file will resemble the following:
security:
  clusterAuthMode: x509
net:
  ssl:
    mode: requireSSL
    CAFile: certAuth.pem
    clusterFile: rplsetMem1.pem
The configuration file has a top-level “security” field that specifies the cluster authentication mode, and a top-level “net” field whose “ssl” sub-field contains the “mode”, “CAFile”, and “clusterFile” settings.
Using a configuration file in this manner will save time and prevent mistakes.
Now save the file and start mongod with it using the following command:
mongod --config rplSetConfig.yaml
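The same template idea can stamp out one configuration file per member, so that only the port, paths, and member certificate vary. The sketch below assumes the file and set names used in this section, and also fills in the storage, log, fork, and replica set name settings that the earlier command lines supplied:

```shell
# Generate one config file per replica set member from a single template.
# Only the member number, port, and certificate file differ between files.
for i in 1 2 3; do
cat > "rplSetConfig$i.yaml" <<EOF
storage:
  dbPath: ./rs$i/db
systemLog:
  destination: file
  path: ./rs$i/mongod.log
processManagement:
  fork: true
replication:
  replSetName: rplSetCert
security:
  clusterAuthMode: x509
net:
  port: $((27016 + i))
  ssl:
    mode: requireSSL
    CAFile: certAuth.pem
    clusterFile: rplsetMem$i.pem
EOF
done
ls rplSetConfig*.yaml
```

Each member can then be started with its own file, for example mongod --config rplSetConfig1.yaml.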
Conclusion
Part three of this tutorial “Learn How to Secure MongoDB Community Version using Authentication” explained how to enable internal authentication using the keyfile-based and X.509-based MongoDB authentication mechanisms. This section also covered how to use configuration files to save time and prevent mistakes when configuring a replica set or sharded cluster.