Deploying Elasticsearch and Kibana with Docker

In the good old days, deploying servers usually involved a set of physical machines, hosted in a very “safe” place (usually a data centre) to avoid all sorts of natural or human disturbances. That was the era when the infrastructure team handled all the hardware purchases and took on the responsibility of looking after the production machines. It was also the era when scaling up (scaling down being a very, very rare use case) was a challenge, as hardware capacity limited how far the production server(s) could go.

Gone are those days~ The modern approach to deploying servers (or micro-services) is cloud focused — using container technologies. Among container technologies, Docker is one of the most popular solutions in the realm, and most cloud providers (e.g. AWS, GCP and Azure) support Docker deployments. Today we will dive into deploying Elasticsearch and Kibana with Docker containers.

PS. the docker-compose yml files are available here:

Official Docker images are available at

We can discover the product’s version through the “Explore namespace” feature.

Now all we need to do is fetch the image of a specific version:

docker pull elasticsearch:7.11.1

After a few minutes, we should have the elasticsearch image, version 7.11.1, downloaded to our local repository.

PS. to install Docker CLI, check here

Let’s make things easy and set up a 1-node cluster first. If you have experience running elasticsearch, you know that we need a configuration file, namely elasticsearch.yml. As luck would have it, the challenge of using a Docker elasticsearch image is… how do we update that configuration file???

We will take the docker-compose configuration approach here. Open a text editor and save the following into a file named docker-compose.yml:
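A minimal sketch of such a single-node file follows; the node and cluster names match the curl output shown later, while the volume name es-data01, the network name es-network and the heap-size setting are my own choices — adjust them to taste:

```yaml
version: "3.8"
services:
  node01:
    image: elasticsearch:7.11.1
    environment:
      - node.name=node01
      - cluster.name=es-cluster-7
      # the MOST important config for a 1-node cluster
      - discovery.type=single-node
      # heap size is an assumption; tune for your machine
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ports:
      # expose the REST API port to the host
      - "9200:9200"
    volumes:
      - es-data01:/usr/share/elasticsearch/data
    networks:
      - es-network

volumes:
  es-data01:
    driver: local

networks:
  es-network:
    driver: bridge
```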

A docker-compose file declares which services / containers to run and with which resources (e.g. data volumes and networks). Looking at the above file, the “environment” section is exactly where we provide the elasticsearch.yml configurations. We simply supply the node name, the cluster name and the MOST important config, the “discovery type”. Since we are starting up a 1-node cluster, the discovery type should be “single-node”.

We can now spin up the service (node01) by issuing:

docker-compose up

Within a minute, we should see our node01 elasticsearch node come online. To verify that the cluster has formed, run a command through curl:

curl localhost:9200

… results …
{
  "name" : "node01",
  "cluster_name" : "es-cluster-7",
  "cluster_uuid" : "Uz5DlxTDSlG4mjE_IJufrg",
  "version" : {
    "number" : "7.11.1",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "ff17057114c2199c9c1bbecc727003a907c0db7a",
    "build_date" : "2021-02-15T13:44:09.394032Z",
    "build_snapshot" : false,
    "lucene_version" : "8.7.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}

Awesome, we just got our elasticsearch cluster running. But wouldn’t it be better to have a kibana instance attached to this cluster??? We probably want to draft queries in a UI rather than on the command line.

To shut down the cluster, issue the following:

docker-compose down

First of all, fetch the kibana image from the docker repository as follows:

docker pull kibana:7.11.1

Next, update the docker-compose.yml and add the kibana service:
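A sketch of the kibana service, appended under the existing services section of the same file; the ELASTICSEARCH_HOSTS value assumes the elasticsearch service is named node01 as above:

```yaml
  kibana:
    image: kibana:7.11.1
    environment:
      # when supplied as an environment variable, the config
      # key for elasticsearch.hosts is ELASTICSEARCH_HOSTS
      - ELASTICSEARCH_HOSTS=http://node01:9200
    ports:
      # expose the kibana web UI to the host
      - "5601:5601"
    networks:
      - es-network
    depends_on:
      - node01
```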

Start up both services with:

docker-compose up

Within a minute or two, we should see logs displaying the status of our elasticsearch cluster as well as the kibana startup process. To access the kibana instance, open a web browser and go to: http://localhost:5601

Cool~ Now we can run Queries and Visualisations!

A 1-node cluster is cool for development but not production ready. We probably want our cluster to be fault tolerant and keep running under critical situations; hence a 3-node cluster is inevitable.

Update our docker-compose.yml as follows:
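A sketch of the full 3-node file; node, cluster, volume and network names follow the conventions used earlier, and the heap sizes plus the choice to publish port 9300 only on node01 are my own assumptions:

```yaml
version: "3.8"
services:
  node01:
    image: elasticsearch:7.11.1
    environment:
      - node.name=node01
      - cluster.name=es-cluster-7
      # multi-node discovery replaces discovery.type=single-node
      - discovery.seed_hosts=node02,node03
      - cluster.initial_master_nodes=node01,node02,node03
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ports:
      - "9200:9200"
      # transport channel's port
      - "9300:9300"
    volumes:
      - es-data01:/usr/share/elasticsearch/data
    networks:
      - es-network
  node02:
    image: elasticsearch:7.11.1
    environment:
      - node.name=node02
      - cluster.name=es-cluster-7
      - discovery.seed_hosts=node01,node03
      - cluster.initial_master_nodes=node01,node02,node03
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    volumes:
      - es-data02:/usr/share/elasticsearch/data
    networks:
      - es-network
  node03:
    image: elasticsearch:7.11.1
    environment:
      - node.name=node03
      - cluster.name=es-cluster-7
      - discovery.seed_hosts=node01,node02
      - cluster.initial_master_nodes=node01,node02,node03
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    volumes:
      - es-data03:/usr/share/elasticsearch/data
    networks:
      - es-network
  kibana:
    image: kibana:7.11.1
    environment:
      - ELASTICSEARCH_HOSTS=http://node01:9200
    ports:
      - "5601:5601"
    networks:
      - es-network
    depends_on:
      - node01

volumes:
  es-data01:
    driver: local
  es-data02:
    driver: local
  es-data03:
    driver: local

networks:
  es-network:
    driver: bridge
```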

What we really did was add node02 and node03 plus the corresponding data volumes for the nodes, expose the transport channel’s port (i.e. 9300), and update the discovery approach — simply adding the discovery.seed_hosts and cluster.initial_master_nodes configurations; for details, please read the official documentation.

Restart the services (note that docker-compose restart alone only restarts existing containers and will not create the newly declared ones, so bring the stack down and up again):

docker-compose down
docker-compose up

Within 3 minutes, we should have a 3-node cluster and a kibana instance running. To verify the cluster formation, run the following command:

curl localhost:9200/_cat/nodes

… responses …
75 96 56 3.30 1.75 1.61 cdhilmrstw - node02
78 96 56 3.30 1.75 1.61 cdhilmrstw * node01
76 96 56 3.30 1.75 1.61 cdhilmrstw - node03

Throughout this trial-and-error period, quite a bunch of questions were actually answered, and I would like to share them with you :)

Q. when we try to configure elasticsearch.yml, official documents always refer to the default path “/usr/share/elasticsearch/config/”… but where is it?

A. the path above refers to a path inside the docker container, which means it is not a local path on our OS (or host machine). So don’t mix them up and create a folder on your OS expecting the yml to be consumed… ~no such thing~

Q. hm… can we directly update that elasticsearch.yml file within a running docker container then?

A. yes, after starting up the services with the docker-compose command, we can open a shell inside the running container (there is no need for ssh / telnet) by issuing this:

docker-compose exec {{service_name}} /bin/bash

the yml file is located at /usr/share/elasticsearch/config/elasticsearch.yml which is the default path mentioned in the official documentation.

Q. is it recommended to set the path for data and log storage location to somewhere else?

A. not necessary for most cases. Even though the default paths inside the docker container are under /usr/share/elasticsearch, depending on how we bind the volumes in docker-compose.yml, the actual data ends up on storage we manage. In this tutorial we are using the “local” driver, which simply means everything is stored on our OS’s disk. With other volume drivers, we could also bind the volumes to cloud storage such as S3.
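For example, a host bind mount can replace the named volume so the data lands in a directory you choose (the host path below is purely illustrative):

```yaml
services:
  node01:
    volumes:
      # bind-mount a host directory instead of a named volume
      - ./es-data/node01:/usr/share/elasticsearch/data
```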

Q. why is my service not accessible through port 9200?

A. did you expose the 9200 port correctly? If you look in the docker-compose.yml, each service has a “ports” section declaring which port numbers for this service are exposed to the host (i.e. your hosting OS).

Q. um… I already exposed the ports, but somehow inter-service communication is not working?

A. do your services use the same “network”? In docker-compose.yml, each service should have a “networks” setting declaring which network the service uses to communicate with other service(s). In our use case, all services share the same network, named “es-network”.

Q. the cluster is working now~ Hey, but kibana doesn’t seem to be starting up correctly and complains that it could not connect to http://elasticsearch:9200.

A. did you set the elasticsearch.hosts configuration correctly? According to the official documentation, elasticsearch.hosts defaults to exactly http://elasticsearch:9200, and that is the reason why. Note that when supplied as a docker environment variable, the config key for this setting is “ELASTICSEARCH_HOSTS”.

Q. is there a quick and dirty way to remove all data stored for the cluster?

A. yes, we can run a docker command to list out all available volumes:

docker volume list

this command lists the volume names; then remove them by:

docker volume rm {{volume_1}} {{volume_2}}

technically your cluster is data-less now.

A nice journey accomplished; we have just done all the following:

  • understood why docker is essential for modern deployments,
  • tailor-made our first docker-compose.yml with services declared,
  • understood how to declare volume and network settings,
  • learned how to start, stop and restart services through the docker-compose command,
  • added Kibana support to the cluster,
  • finally, ran a production-ready 3-node cluster

By now we should have a healthy cluster running; the next step would probably be setting up the Monitoring module. We will go into that in the coming journey, stay tuned :)

