Elasticsearch - How to Upload CSV Data Using Kibana
This tutorial will show you how to upload CSV data through the Kibana UI and import your semi-structured text into Elasticsearch directly from Kibana. Uploading your CSV data lets you organize time-sensitive information, view your files across various dashboards, and display the results as visualized data in a simplified, easy-to-read format.
Elastic’s SaaS offering, formerly called “Found.no,” is now known as Elastic Cloud.
Version Note: Be aware that the add-on will not automatically function with versions 5.x and above. To take advantage of the upgrade feature and receive the full benefits of 5.x with the add-on, you must first index all of your data into a 2.4.4 cluster and then install the upgrade.
- The complete ELK stack (Elasticsearch, Logstash, and Kibana) must be properly installed and running.
- Visit your server’s web address with `:5601` (or the port on which Kibana is running) appended to the URL to make certain the Kibana UI is operating properly. For example: `http://localhost:5601`
- Also confirm that the default port for Elasticsearch is `9200`; you can check your cluster’s settings by navigating to `http://localhost:9200` in your browser.
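The checks above can also be made from the terminal with curl. This is a minimal sketch assuming a local installation on the default ports (`5601` for Kibana, `9200` for Elasticsearch):

```shell
# Check that Kibana (default port 5601) and Elasticsearch (default port 9200) respond;
# localhost and the default ports are assumptions for a local install
curl -s -o /dev/null -w "Kibana HTTP %{http_code}\n" "http://localhost:5601/api/status"
curl -s -o /dev/null -w "Elasticsearch HTTP %{http_code}\n" "http://localhost:9200"
```

An HTTP `200` from both endpoints indicates the services are up.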
You can also use the following command in the terminal to verify the status of Kibana:
```bash
ps -ef | grep kibana
```
From the Kibana UI, click on the “Management” tab to confirm the current version of the ELK stack.
Adding CSV Data:
>Kibana’s Visualize page assists in creating visualizations in the form of charts and graphs. These visualizations can be saved for individual viewing or used in dashboards, which act as collections of visualizations.
The newest Kibana UI version allows you to easily upload CSV data to your Elasticsearch cluster. From the left-side console, click “Machine Learning” and then click the “Data Visualizer” tab.
Kibana interprets the composition of the CSV so that the first line (header row) of the file translates to the fields of the index. For example, the structure of the index would resemble something like this:
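As an illustration, here is a small CSV written out from the shell; the file name and field names are hypothetical, and the header row is what Kibana would translate into index fields:

```shell
# Create a sample CSV; the first line (header row) becomes the index fields.
# File name and column names are illustrative only.
cat > sample_data.csv <<'EOF'
timestamp,host,status,bytes
2023-01-01T00:00:00Z,web-01,200,512
2023-01-01T00:01:00Z,web-02,404,128
EOF
head -1 sample_data.csv   # prints the header row
```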
- Elasticsearch is also designed to manage time-sensitive data, so it is prudent to include some type of timestamp field in the header row. The date format will depend on how you have the index mapped.
Caution: The values in the CSV file must be UTF-8 encoded, otherwise you will experience system errors while uploading data.
- As index names must be lowercase, make sure you do not use any uppercase letters or special characters when naming your index.
Once the upload is finished, you can verify that your data was successfully moved to your cluster by executing a `GET` request followed by the index name in the Kibana console.
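The same verification can be done from the terminal with curl; the index name `sample_data` below is only an example, and a local cluster on port 9200 is assumed:

```shell
# Equivalent of running "GET sample_data" in the Kibana console;
# index name and host are assumptions
curl -s -o /dev/null -w "HTTP %{http_code}\n" "http://localhost:9200/sample_data"
```

A `200` response means the index exists; a `404` means the upload did not create it.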
You can also obtain other information about the index, such as the document count:
- The JSON object returned by Kibana (in the right-side output panel) should look something like this:
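A sketch of what a document-count response looks like (the values here are illustrative, not from a real cluster), pretty-printed with Python’s built-in JSON tool:

```shell
# Illustrative _count-style response; actual values depend on your index
echo '{"count":1500,"_shards":{"total":1,"successful":1,"skipped":0,"failed":0}}' \
  | python3 -m json.tool
```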
This tutorial explained how to upload CSV data using Kibana and import semi-structured text into Elasticsearch directly from the Kibana UI. Once mastered, Kibana will help you organize your time-sensitive data, view your files across multiple dashboards, and display the results in a simplified visual format of charts and graphs. Bear in mind, however, that to receive the benefits of the add-on with versions 5.x and higher, you must first index your data into a 2.4.4 cluster before installing the upgrade.