How To Define A Template for a New Logstash Index in Elasticsearch


Introduction

Logstash collects incoming data streams in real time and ships them to Elasticsearch, where they are stored in indices according to the settings the user defines during setup. Those settings come from index templates: by defining a template before deployment, you control how Elasticsearch structures the indices Logstash creates and keeps a record of whatever input is generated, and from where. This how-to walks through defining such a template for a new Logstash index.

  • Elasticsearch ships with highly optimized default settings (such as the shard count) for newly created indices, but sometimes it’s convenient and useful to create our own custom template for a brand-new index.
  • Templates should not be confused with mappings: a template bundles default settings and mappings for the indices it matches, while a mapping merely defines how a document’s fields are interpreted.
  • Index templates let you control how new indices are created.

  • Templates are only applied when a new index is created; changing a template has no effect on indices that already exist. Any settings explicitly defined as part of the create-index call will override the corresponding settings defined in the template (see the sketch just below this list).
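As a quick sketch of that override behavior (assuming the some_template template created later in this article, which matches abc* and sets number_of_shards to 3, and a hypothetical index named abc-test), an explicit setting in the create-index call wins over the template:

# The explicit number_of_shards below overrides the template's value of 3 for this one index
curl -X PUT "localhost:9200/abc-test" -H 'Content-Type: application/json' -d'
{
  "settings": {
    "number_of_shards": 1
  }
}
'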

Prerequisites:

  • Make sure Logstash is allowed to install your custom template by setting template_overwrite => true in the elasticsearch output section of your Logstash pipeline configuration (pipeline configuration files typically live under /etc/logstash/conf.d/, alongside the main settings file /etc/logstash/logstash.yml); a minimal sketch of that output section follows this list.
  • It’s a good idea to restart Logstash after making changes to its configuration files.
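Here is a minimal sketch of what that elasticsearch output block might look like; the template path and template name are hypothetical placeholders for your own file:

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    # Hypothetical path to the JSON template file described in this article
    template => "/etc/logstash/templates/some_template.json"
    template_name => "some_template"
    # Let Logstash overwrite an existing template with the same name
    template_overwrite => true
  }
}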

Creating A Template:

  • In this example, we’ll make a PUT request using cURL to create a template that will be used by Logstash when it creates an index:
curl -X PUT "localhost:9200/_template/some_template" -H 'Content-Type: application/json' -d'
{
  "index_patterns": ["abc*", "def*"],
  "settings": {
    "number_of_shards": 3
  },
  "mappings": {
    "_doc": {
      "_source": {
        "enabled": false
      },
      "properties": {
        "ip_address": {
          "type": "keyword"
        },
        "created_at": {
          "type": "date",
          "format": "EEE MMM dd HH:mm:ss Z yyyy"
        }
      }
    }
  }
}
'
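To confirm the template was registered (using the same localhost:9200 endpoint as above), you can fetch it back with a GET request:

# Retrieve the stored template to verify it was created
curl -X GET "localhost:9200/_template/some_template?pretty"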

Create a Template from a Mapping:

It’s easy to convert a mapping into a template. Below is an example of what a mapping-based template looks like: it contains an index pattern to match and your default mappings. For this demo, we’ve also included a version number and a refresh_interval of 3s.

  • In this example template, we’ll give it a default mapping with "user_name" and "age" fields:
{
  "template" : "some_template*",
  "version" : 101,
  "settings" : {
    "index.refresh_interval" : "3s"
  },
  "mappings" : {
    "_default_" : {
      "properties": {
        "user_name": {
          "type": "text"
        },
        "age": {
          "type": "integer"
        }
      }
    }
  }
}
  • You can also save the template as a .json file and pass the file (with its path) as the request body in the PUT request, instead of typing out the entire template on the command line:
curl -X PUT "http://localhost:9200/_template/some_template?pretty" -H 'Content-Type: application/json' -d @crazy_template.json
  • Use HEAD to determine whether the template exists:
HEAD _template/some_template
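The HEAD request above is written in Kibana Dev Tools syntax; a rough cURL equivalent (against the same localhost endpoint) is:

# Returns HTTP 200 if the template exists, 404 if it does not
curl -I "localhost:9200/_template/some_template"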

Conclusion

In this short how-to we covered the basic steps for defining a custom template that Logstash will use when it creates a new index. Keeping the template current with the specifics of the data being collected ensures that the documents generated by your source(s) are stored in a usable format in a predictable place. Even when running multiple nodes and/or shards, all of the relevant information coming in from each connection point lands where it can be readily examined by an authorized user. Once the incoming streams have been indexed according to the template, they can be dissected and analyzed for whatever purpose they were stored. By following the process laid out above, Logstash and Elasticsearch will automatically bring the desired data from its origins into a centralized hub for analysis.
