How To Specify The Number Of Shards And Number Of Replicas Per Shard In Elasticsearch


Introduction

When setting up Elasticsearch, it is important to know how to specify the number of shards and the number of replicas per shard. We start with some background, a few definitions, and clarifications, then present several common cases and our recommendation for each. Keep these definitions in mind while reading through the instructions that follow.

How many shards should I use?

Choosing the number of shards to use is an important decision. Getting it wrong leads directly to scaling obstacles later on, once the dataset naturally starts to grow.

Horizontal scaling, or scaling out, is the main reason to shard a database. Because applications are so reliant on their databases, scaling out is a way to protect the whole system in case of an outage: most of the time an outage will only affect a single shard, keeping the application alive and functional.

The shard quantity, that is, the number of Lucene indices, can affect performance depending on the size of your cluster, because the more the data is spread out across indices, the more server resources must be allotted to managing files and duplicate metadata.

Elasticsearch uses Lucene, and a Lucene index cannot itself be split up and distributed across different nodes in the cluster. Elasticsearch works around this limitation by creating multiple indices, called shards, each of which is a complete Lucene index.

Multiple indices have a real impact on performance because an Elasticsearch index is distributed across more than one Lucene index, and a complete query has to touch all of them. Elasticsearch must query each shard (Lucene index) individually, combine their results, and then score the overall collection. This means that using more than one shard inherently adds overhead. The tradeoff is the ability to distribute the index across multiple nodes, and there are ways the performance hit can be mitigated somewhat (more on that later).
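One easy way to see this fan-out is that every search response includes a _shards section reporting how many shards were queried. A minimal sketch, assuming Elasticsearch is reachable on localhost:9200 and an index named myindex exists (both names are illustrative):

# The "_shards" object in the response (fields like "total", "successful", "failed")
# shows how many Lucene indices were queried to answer this single request.
curl -XGET 'http://localhost:9200/myindex/_search?size=0&pretty'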

A good practice is to keep the number of shards on each node below 20 per GB of configured heap. A node with 30GB of heap should therefore hold no more than 600 shards, and the further below that limit you stay, the better. This helps keep the cluster in good health.
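To see how close a node is to that guideline, the _cat APIs report the current shard count per node. A minimal sketch, assuming a cluster reachable on localhost:9200:

# The "shards" column lists how many shards each node currently holds;
# compare it against roughly 20 shards per GB of configured heap.
curl -XGET 'http://localhost:9200/_cat/allocation?v'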

We are often asked “How big of a cluster do I really need?”, and it is typically hard to respond with more than “Well, it depends!” There are many variables: an application’s particular workload, its performance expectations, and, just as important, the number of documents and their average size. This article doesn’t offer a single formula for calculating this, but it does lay out the questions you should ask yourself and tips for finding the answers.

The atomic scaling unit of an Elasticsearch index is the shard, and a shard is a whole Lucene index. If you’re not familiar with how Elasticsearch and Lucene interact at the shard level, try reading “Elasticsearch from the Bottom Up.” Since the jargon can be ambiguous at times, we’ll make it clear whether we’re discussing a Lucene index or an Elasticsearch index.

Understanding that a shard is an entire Lucene index is important for several reasons we will get to. First, it makes it obvious that sharding comes at a price: storing the same data in two Lucene indices costs more than twice as much as storing it in one. Lucene index internals such as term dictionaries have to be duplicated, more files need to be maintained, and more memory is spent on metadata.

A common question among Elasticsearch users is “How many shards are best to use?” Below we explain the performance consequences and design tradeoffs involved in changing the shard count, and how to optimize your sharding strategy.

How Many Shards Are There?

Every shard has its own set of replicas to prevent data loss. This means that if you set up an index with 4 shards and each has two replicas, your index really consists of 12 shards, but only the 4 primaries accept indexing requests.

To understand how replicas work, suppose your index is set up with 3 shards and 1 replica each. You actually have 6 shards, even though only the 3 primaries accept writes at any given time. For safe failover, each replica must be allocated to a different node than its primary, so that losing one node still leaves a complete copy of the data available.
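You can see this breakdown for yourself with the _cat/shards API, which labels each shard copy as a primary or a replica. A minimal sketch, assuming a local cluster and an illustrative index named myindex:

# Each row is one shard copy; the "prirep" column shows "p" for primary and "r" for replica.
curl -XGET 'http://localhost:9200/_cat/shards/myindex?v'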

Optimal number of Shards per node:

NOTE: Give careful consideration to the rate of your database’s growth, to your system limits, and to the number of shards you currently have when you’re allocating shards.

  • If you change the number of shards after creating your indices, you’ll have to re-index all the source documents (see the reindex sketch after this list). Conceptually, the primary shard configuration is like a partition on a hard disk: once the index is created, it cannot be changed.
  • Re-sharding: there is a balance to maintain. Every additional shard decreases performance somewhat, since each shard competes for your server’s resources.
  • When you’re planning for capacity, try to allocate shards at a rate of 150% to 300% (or about double) the number of nodes you had when initially configuring your datasets. For example, a 4-node cluster would start with roughly 6 to 12 primary shards.
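Because the shard count of an existing index cannot be changed in place, re-sharding means creating a new index and copying the documents into it. A minimal sketch using the Reindex API, assuming a local cluster and illustrative index names myindex and myindex_v2:

# 1) Create a new index with the desired number of primary shards.
curl -XPUT 'http://localhost:9200/myindex_v2' -H 'Content-Type: application/json' -d'
{
    "settings" : {
        "index" : {
            "number_of_shards" : 6,
            "number_of_replicas" : 1
        }
    }
}
'
# 2) Copy all source documents into the new index with the Reindex API.
curl -XPOST 'http://localhost:9200/_reindex' -H 'Content-Type: application/json' -d'
{
    "source" : { "index" : "myindex" },
    "dest" : { "index" : "myindex_v2" }
}
'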

JVM Heap Size:

  • Be modest about over-allocating for large data sets in anticipation of growth, unless you truly expect rapid data growth.
  • The JVM heap size for Elasticsearch should be no larger than about 30GB.
  • For most uses, a single replica per shard is sufficient.

Changing Default Number of Shards on an Index:

Specify Default Number of Shards in the Configuration File (Only for Elasticsearch versions before 5.0)

The two settings in the .yml file that are the focus of this tutorial are:

number_of_shards and number_of_replicas.

  • The ideal method is to explicitly specify index.number_of_shards in Elasticsearch’s configuration file before you even begin to set up indices (see the example .yml snippet after this list).
  • The .yml file used to configure Elasticsearch is located at /etc/elasticsearch/elasticsearch.yml on Linux. For Homebrew installations of the ELK Stack on macOS, the config file should be located in /usr/local/Cellar/elasticsearch/{x.x.x}/libexec/config (replace {x.x.x} with the installed version’s directory name).
  • Depending on your macOS installation, and the version of ELK you installed, the configuration file may also be located at /usr/local/Cellar/elasticsearch/ or possibly even /usr/local/etc/.
  • If you are running the stack on Linux, the easiest way to alter the configuration file is with the nano text editor, once you’ve securely accessed your server with SSH and a private key:
sudo nano /etc/elasticsearch/elasticsearch.yml
  • Look for the number_of_shards and number_of_replicas values in the file and change them. When finished, press CTRL + O to save your changes in nano.
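For reference, a minimal sketch of what those two defaults looked like in elasticsearch.yml on versions before 5.0 (the values shown are illustrative):

# elasticsearch.yml (pre-5.0 only): defaults applied to newly created indices
index.number_of_shards: 3
index.number_of_replicas: 1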

NOTE: The location for the .yml file that contains the number_of_shards and number_of_replicas values may depend on your system or server’s OS, and on the version of the ELK Stack you have installed.

NOTE: Elasticsearch 5 and newer NO LONGER allows you to set the default number of shards and replicas for a newly created index by changing the config file.

  • To update settings across all of your existing indices at once, send a cURL request to the _settings API like the one below. Note that only dynamic settings such as number_of_replicas can be changed this way; number_of_shards is fixed once an index is created.
curl -XPUT 'http://{YOUR_DOMAIN}:9200/_all/_settings?preserve_existing=true' -H 'Content-Type: application/json' -d '{
    "index.number_of_replicas" : "1"
}'
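To confirm the update took effect, you can read the settings back. A minimal sketch using the same placeholder domain:

# Returns the current settings for every index, including number_of_replicas.
curl -XGET 'http://{YOUR_DOMAIN}:9200/_all/_settings?pretty'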

Specifying for a newly created index:

  • You can set number_of_shards and number_of_replicas at the time of index creation as well:
curl -XPUT "$(hostname -I):9200/myindex?pretty" -H 'Content-Type: application/json' -d'
{
    "settings" : {
        "index" : {
            "number_of_shards" : 3,
            "number_of_replicas" : 1
        }
    }
}
'
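A quick way to confirm the new index came up with the expected configuration is the _cat/indices API. A minimal sketch, assuming a local cluster:

# The "pri" and "rep" columns report the primary shard and replica counts for myindex.
curl -XGET 'http://localhost:9200/_cat/indices/myindex?v'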

NOTE: Changing the index.number_of_shards default in the configuration file requires changing the setting on every node and then restarting each instance.

Specify Default Number of Shards Using an Index Template

NOTE: Only a primary shard can accept an indexing request; replica shards cannot, although both types can serve query requests.

  • Use an index template to modify a new index’s default number of shards by creating a new template, as shown in this PUT request:
PUT /_template/index_defaults
{
    "index_patterns": ["foo*"],
    "settings" : {
        "index" : {
            "number_of_shards" : 1,
            "number_of_replicas" : 2
        }
    }
}

NOTE: The index_patterns field accepts glob-style patterns that determine which newly created indices the template applies to.
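To check that the template is being picked up, create an index whose name matches the pattern and read its settings back. A minimal sketch, where foo-logs is an illustrative index name and the cluster is assumed to be local:

# The new index matches "foo*", so it inherits number_of_shards and number_of_replicas
# from the index_defaults template.
curl -XPUT 'http://localhost:9200/foo-logs'
curl -XGET 'http://localhost:9200/foo-logs/_settings?pretty'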

Conclusion

Taking the time to learn how to specify the number of shards and replicas per shard in Elasticsearch is a critical step: it lets the service scale without running into too many obstacles. This matters quickly when the data volume is initially underestimated and the data keeps growing. This article showed the steps needed to make sure this aspect of Elasticsearch is configured correctly. To learn more about the Elastic Stack and how to implement it, please look at our other guides on related products.
