How to install Elasticsearch 5 and Kibana on Homestead (Vagrant)

While installing Elasticsearch for my new Laravel app, I spent a lot of time at the console and ran into a lot of issues, most of them related to memory.

In this tutorial I will give instructions on how to set up Elasticsearch 5.3.0, which is the latest version at the time of writing.

After this, you can use any Elasticsearch client (there are a couple of good ones for Laravel) to connect.

To start, SSH into your server (Homestead or any other Linux server) and run the following commands:

#Go root! (Keep in mind that from here on we are doing everything as root, so your prompt should show root@homestead instead of vagrant@homestead)

sudo -s

#Elasticsearch runs on Java so we need to install Java first (Java 8)

apt-get install default-jre

#Check that Java installed successfully (you should see the Java version info)

java -version
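If you are scripting the setup, you can also check the version programmatically. A small sketch, assuming the old `1.x` version scheme that Java 8 uses (the `java_major` helper below is my own, not part of any tool):

```shell
# Hypothetical helper: pull the major version out of the first line of
# "java -version" output (old 1.x scheme, e.g. '1.8.0_121' -> 8).
java_major() {
  echo "$1" | sed -n 's/.*"1\.\([0-9]*\)\..*/\1/p'
}

# Usage (Elasticsearch 5 needs Java 8 or newer):
# ver=$(java_major "$(java -version 2>&1 | head -n 1)")
# [ "$ver" -ge 8 ] || echo "Java 8+ required"
```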

#Get the key

wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

#Get your server up to date

apt-get update

#Download your new elasticsearch package

#NOTE: You could run apt-get install elasticsearch to pull the package, but that installs version 1.7.3, which is really old, and you wouldn't be able to follow this tutorial.

wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.3.0.deb

#Install it

sudo dpkg -i elasticsearch-5.3.0.deb

Use the update-rc.d command to configure Elasticsearch to start automatically when the system boots up:

sudo update-rc.d elasticsearch defaults 95 10

You can also start the service manually like this:

/etc/init.d/elasticsearch start

To check whether Elasticsearch is running, use the following command:

curl -XGET 'http://localhost:9200'

Elasticsearch 5 will be listening on port 9200.

If everything is OK you will see:

vagrant@homestead:~/Code/quora$ curl -XGET 'http://localhost:9200'
{
  "name" : "wftWClj",
  "cluster_name" : "quora",
  "cluster_uuid" : "XEjMkD5sTTeTixHcW9rlBQ",
  "version" : {
    "number" : "5.3.0",
    "build_hash" : "3adb13b",
    "build_date" : "2017-03-23T03:31:50.652Z",
    "build_snapshot" : false,
    "lucene_version" : "6.4.1"
  },
  "tagline" : "You Know, for Search"
}
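If a provisioning script should assert the version rather than have you eyeball the JSON, here is a minimal sketch; the `es_version` helper is hypothetical and assumes the pretty-printed output shown above:

```shell
# Hypothetical helper: extract the "number" field from the JSON that
# Elasticsearch returns on port 9200.
es_version() {
  sed -n 's/.*"number" : "\([^"]*\)".*/\1/p'
}

# Usage:
# curl -s http://localhost:9200 | es_version   # e.g. 5.3.0
```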

This all seems easy, right? (And it is.) However, you can often run into issues. If you did, keep reading.

Once you start the service, you can check on its progress by typing:

journalctl --unit elasticsearch

If there are any problems starting Elasticsearch, this is the first place to check.

You could run into something like this:

Apr 05 11:24:07 homestead systemd[1]: Starting Elasticsearch...
Apr 05 11:24:07 homestead systemd[1]: Started Elasticsearch.
Apr 05 11:24:44 homestead systemd[1]: elasticsearch.service: Main process exited, code=killed, status=9/KILL
Apr 05 11:24:44 homestead systemd[1]: elasticsearch.service: Unit entered failed state.
Apr 05 11:24:44 homestead systemd[1]: elasticsearch.service: Failed with result 'signal'.

This doesn't tell us a lot, so we need to do another check. Type:

cd /var/log/elasticsearch

If you haven't touched any configuration for elasticsearch, such as the cluster name or node names (the config lives in /etc/elasticsearch/elasticsearch.yml, by the way), you should see a file named elasticsearch.log.

Use any editor to open it. For example:

nano elasticsearch.log
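If you would rather not scroll through the whole file, you can surface just the suspicious lines. A sketch with a hypothetical helper of my own:

```shell
# Hypothetical helper: show the last 20 warning/error lines from a log.
grep_es_errors() {
  grep -E 'ERROR|WARN|Cannot allocate' "$1" | tail -n 20
}

# Usage:
# grep_es_errors /var/log/elasticsearch/elasticsearch.log
```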

Here you can find more information about the issue, such as:

Apr 06 11:55:24 cjenovnik-live systemd[1]: Starting Elasticsearch...
Apr 06 11:55:24 cjenovnik-live systemd[1]: Started Elasticsearch.
Apr 06 11:55:25 cjenovnik-live elasticsearch[3297]: Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x0000000085330000, 2060255232, 0) failed; error='Cannot allocate memory' (errno=12)
Apr 06 11:55:25 cjenovnik-live elasticsearch[3297]: #
Apr 06 11:55:25 cjenovnik-live elasticsearch[3297]: # There is insufficient memory for the Java Runtime Environment to continue.
Apr 06 11:55:25 cjenovnik-live elasticsearch[3297]: # Native memory allocation (mmap) failed to map 2060255232 bytes for committing reserved memory.
Apr 06 11:55:25 cjenovnik-live elasticsearch[3297]: # An error report file with more information is saved as:
Apr 06 11:55:25 cjenovnik-live elasticsearch[3297]: # /tmp/hs_err_pid3297.log
Apr 06 11:55:25 cjenovnik-live systemd[1]: elasticsearch.service: Main process exited, code=exited, status=1/FAILURE
Apr 06 11:55:25 cjenovnik-live systemd[1]: elasticsearch.service: Unit entered failed state.
Apr 06 11:55:25 cjenovnik-live systemd[1]: elasticsearch.service: Failed with result 'exit-code'.
Apr 06 11:59:12 cjenovnik-live systemd[1]: Stopped Elasticsearch.

This is a log from my production server, which has 1GB of RAM. By default, Elasticsearch will try to allocate 2GB of heap for the JVM. Make sure you change this (you can read more about heap sizing here: https://www.elastic.co/guide/en/elasticsearch/guide/current/heap-sizing.html)

by typing:

nano /etc/elasticsearch/jvm.options
################################################################
# Xms represents the initial size of total heap space
# Xmx represents the maximum size of total heap space
-Xms128m
-Xmx128m

As you can see, I changed the last two lines from -Xms2g to -Xms128m and from -Xmx2g to -Xmx128m.
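If you provision the box with a script, the same edit can be done non-interactively. A sketch with a hypothetical helper of my own (the `.bak` suffix keeps a backup copy of the file):

```shell
# Hypothetical helper: rewrite the -Xms/-Xmx lines in a jvm.options
# file to a new heap size (keeps a .bak copy of the original).
set_heap() {
  # $1 = path to jvm.options, $2 = heap size such as 128m
  sed -i.bak -e "s/^-Xms.*/-Xms$2/" -e "s/^-Xmx.*/-Xmx$2/" "$1"
}

# Usage:
# set_heap /etc/elasticsearch/jvm.options 128m
```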

This is enough to keep Elasticsearch running on a small production server like mine.

In case you see something like the following instead, try the commands below (this is what made it work on Homestead):

[2016-04-28 14:44:43,641][WARN ][bootstrap ] Unable to lock JVM Memory: error=12,reason=Cannot allocate memory
[2016-04-28 14:44:43,641][WARN ][bootstrap ] This can result in part of the JVM being swapped out.
[2016-04-28 14:44:43,641][WARN ][bootstrap ] Increase RLIMIT_MEMLOCK, soft limit: 65536, hard limit: 65536
[2016-04-28 14:44:43,641][WARN ][bootstrap ] These can be adjusted by modifying /etc/security/limits.conf, for example:
# allow user 'elasticsearch' mlockall
elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited

Open up the /etc/security/limits.conf file and add these two lines:

elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited
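One caveat worth mentioning: on systemd-based systems (which includes Homestead's Ubuntu), limits.conf only applies to interactive sessions, not to services started by systemd, so the memlock limit may also need a systemd drop-in. A sketch of the override, run as root:

```shell
# Sketch: raise the memlock limit for the elasticsearch unit itself,
# since limits.conf does not cover systemd-managed services.
mkdir -p /etc/systemd/system/elasticsearch.service.d
cat > /etc/systemd/system/elasticsearch.service.d/override.conf <<'EOF'
[Service]
LimitMEMLOCK=infinity
EOF

# Pick up the override and restart the service:
systemctl daemon-reload
systemctl restart elasticsearch
```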

And this is how we install Elasticsearch on Homestead (or any other Debian-based Linux).


Now, Kibana is a nice tool that goes very well with Elasticsearch: it helps you explore indexed data, and build and test queries. It is pretty much made for developers.

For the 64-bit version use this (you should still be logged in as root):

wget https://artifacts.elastic.co/downloads/kibana/kibana-5.0.2-amd64.deb
dpkg -i kibana-5.0.2-amd64.deb

Use the update-rc.d command to configure Kibana to start automatically when the system boots up:

sudo update-rc.d kibana defaults 95 10

Kibana is configured out of the box to listen for Elasticsearch on localhost. However, Homestead usually serves your projects under the 192.168.10.10 IP address, so what you need to do is:

nano /etc/kibana/kibana.yml

and set server.host to 192.168.10.10:

# Kibana is served by a back end server. This setting specifies the port to use.
#server.port: 5601
# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
server.host: 192.168.10.10
# Enables you to specify a path to mount Kibana at if you are running behind a proxy. This only affects
# the URLs generated by Kibana, your proxy is expected to remove the basePath value before forwarding requests
# to Kibana. This setting cannot end in a slash.
#server.basePath: ""
# The maximum payload size in bytes for incoming server requests.
#server.maxPayloadBytes: 1048576
# The Kibana server's name. This is used for display purposes.
#server.name: "your-hostname"
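As with the heap settings, this edit can be scripted. The helper below is my own sketch, assuming the stock commented-out server.host line shown above:

```shell
# Hypothetical helper: set (and uncomment, if needed) server.host in
# a kibana.yml file (keeps a .bak copy of the original).
set_kibana_host() {
  # $1 = path to kibana.yml, $2 = address to bind
  sed -i.bak "s/^#\{0,1\}server\.host:.*/server.host: \"$2\"/" "$1"
}

# Usage:
# set_kibana_host /etc/kibana/kibana.yml 192.168.10.10
```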

It will listen on port 5601 (you can also point a domain at it via your hosts file), meaning http://192.168.10.10:5601

Then you should be able to start Kibana manually like this:

/etc/init.d/kibana start

And that's all for now. Watch out for those memory issues and keep googling!

Feel free to follow me on Twitter or LinkedIn.