Guidance for running Elasticsearch on Azure

The Microsoft Patterns & Practices group has published guidance on implementing Elasticsearch on Azure that illustrates many of the challenges our customers face and shows how to work through them.

Elasticsearch is a scalable open source search engine and database that has been gaining popularity among developers building cloud-based systems. When suitably configured, it can ingest large volumes of data rapidly and query them efficiently.

It’s reasonably straightforward to build and deploy an Elasticsearch cluster on Azure. You can create a set of Windows or Linux VMs, then download the appropriate Elasticsearch packages and install them on each VM. Alternatively, we have published an ARM template you can use with the Azure portal to automate much of the process.
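As a rough sketch, a template deployment of that kind can also be driven from the Azure CLI rather than the portal. The resource group name, region, template URI, and parameter names below are placeholders for illustration, not the actual template and parameters we published:

```bash
# Create a resource group, then deploy an Elasticsearch cluster from an ARM template.
# The group name, region, template URI, and parameter names are placeholders;
# substitute the template and parameter values you actually use.
az group create --name es-cluster-rg --location westus2

az deployment group create \
  --resource-group es-cluster-rg \
  --template-uri "https://example.com/elasticsearch/azuredeploy.json" \
  --parameters adminUsername=esadmin dataNodeCount=3 vmSize=Standard_DS3_v2
```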

Elasticsearch is highly configurable, but we’ve seen many systems where a poor choice of options has led to slow performance. One reason is that there are many factors you need to take into account to achieve the best throughput and the most responsive system, including:

  • The cluster topology (client nodes, master nodes, and data nodes; a brief configuration sketch follows this list)
  • The structure of each index (the number of shards and replicas to specify)
  • The virtual hardware (disk capacity and speed, amount of memory, number of CPUs)
  • The allocation of resources on each node (disk layout, Java Virtual Machine memory usage, Elasticsearch queues and threads, I/O buffers)
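To make those options concrete, here is a rough sketch of where each one is expressed, assuming an Elasticsearch 2.x-era cluster (the release line Elasticsearch Marvel targets). The node roles, heap size, index name, and shard counts shown are illustrative placeholders, not recommendations:

```bash
# Cluster topology: node roles are set in elasticsearch.yml on each VM
# (Elasticsearch 2.x syntax shown; the values are illustrative only).
#   Dedicated master node:           node.master: true   node.data: false
#   Data node:                       node.master: false  node.data: true
#   Client (coordinating-only) node: node.master: false  node.data: false

# JVM memory: pre-5.x releases read the heap size from ES_HEAP_SIZE.
# A common rule of thumb is no more than half the VM's RAM, and below roughly
# 32 GB so the JVM can still use compressed object pointers.
export ES_HEAP_SIZE=14g

# Index structure: shard and replica counts are specified when an index is
# created (the shard count cannot be changed afterwards; replicas can).
curl -XPUT "http://localhost:9200/myindex" -d '{
  "settings": {
    "number_of_shards": 6,
    "number_of_replicas": 1
  }
}'
```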

You cannot consider these items in isolation, because the nature of the workloads you run also has a great bearing on the performance of the system. An installation optimized for data ingestion might not be well tuned for queries, and vice versa, so you need to balance the requirements of the different operations your system must support. For these reasons, we spent considerable time working through a series of configurations, performing numerous tests, and analyzing the results.

The purpose was to illustrate how you can design and build an Elasticsearch cluster to meet your own requirements, and to show how you can test and tune its performance. This guidance is now available in the Azure documentation. We have provided a series of documents covering:

  • General guidance on Elasticsearch, describing the configuration options available and how you can apply them to a cluster running on Azure
  • Specific guidance on deploying, configuring, and testing an Elasticsearch cluster that must support a high level of data ingestion operations
  • Guidance and considerations for Elasticsearch systems that must support mixed workloads and/or query-intensive systems

We used Apache JMeter to conduct the performance tests, incorporating JUnit tests written in Java. We captured the performance data as a set of CSV files and used Excel to graph and analyze the results. We also used Elasticsearch Marvel to monitor the systems while the tests were running.
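As a sketch of what such a run can look like, JMeter's non-GUI mode writes each sample to a results log that can be opened directly in Excel when CSV output is enabled (the default in recent JMeter releases). The test plan file name and the property values below are placeholders for your own tests:

```bash
# Run a JMeter test plan in non-GUI mode and log each sample to a results file.
# The .jmx name and the -J property values are placeholders.
jmeter -n -t elasticsearch-ingest-test.jmx \
       -l results/ingest-run-01.csv \
       -Jhosts=10.0.0.4 -Jthreads=32
```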

If you'd like to repeat these tests in your own environment, the documentation provides instructions on how to create your own JMeter test environment and gather performance information from Elasticsearch, as well as scripts for running our JMeter tests.