Configure Elasticsearch memory¶
On typical TeskaLabs LogMan.io deployments, Elasticsearch is the largest consumer of RAM. When you scale hardware up or down, adjusting Elasticsearch memory limits is a routine part of capacity planning.
Restarts required
Changing these settings restarts each affected Elasticsearch container.
Schedule enough uninterrupted time for this work. The full round of restarts can take a long time on large clusters.
Apply changes one instance at a time, and complete one tier before the next. After each restart, confirm cluster health in Kibana monitoring before continuing.
When you restart hot nodes, Kibana or monitoring views may be briefly unavailable because they rely on data served from hot tier nodes. This is expected; wait for the cluster to return to green before proceeding.
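Between restarts, cluster health can also be checked from the command line; a minimal sketch, assuming Elasticsearch is reachable on localhost:9200 (adjust host, port, and credentials for your deployment):

```shell
# Block for up to 60 s until the cluster reports green, then print the health
# summary; --fail makes curl exit non-zero on HTTP errors, so this can gate
# a restart loop.
curl --fail "http://localhost:9200/_cluster/health?wait_for_status=green&timeout=60s&pretty"
```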
Do not overcommit host memory
The defaults described here assume 256 GB RAM on the host. Many installations use smaller machines. Always reconcile heap and container limits with actual RAM.
Default settings¶
The default layout assumes 256 GB host RAM and four Elasticsearch roles on each node: master, hot, warm, and cold.
For a three-node cluster, /Site/model.yaml includes an instance per role on each node—for example:
Example: default topology in model.yaml (three nodes, twelve instances)
```yaml
elasticsearch:
  instances:
    master-1:
      node: node1
    hot-1:
      node: node1
    warm-1:
      node: node1
    cold-1:
      node: node1
    master-2:
      node: node2
    hot-2:
      node: node2
    warm-2:
      node: node2
    cold-2:
      node: node2
    master-3:
      node: node3
    hot-3:
      node: node3
    warm-3:
      node: node3
    cold-3:
      node: node3
```
Two limits¶
You configure two different caps:
- JVM heap — memory reserved for the Elasticsearch Java process.
- Docker container memory — cgroup limit for the whole container.
Rule of thumb: set the container limit to about 2.5–3× the heap. Heap is locked for Elasticsearch; the remainder covers off-heap use, OS page cache inside the container, and overhead. Other services on the host share memory outside these containers.
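As a quick sanity check, the rule of thumb can be computed in the shell; a sketch using the 28 GB data-tier heap from the tables below:

```shell
heap_gb=28
# Container limit should land between 2.5x and 3x the heap.
low=$(( heap_gb * 5 / 2 ))   # 2.5x the heap
high=$(( heap_gb * 3 ))      # 3x the heap
echo "container limit between ${low} GB and ${high} GB"
```

The recommended 80 GB container limit for a 28 GB heap falls inside the resulting 70–84 GB band.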
Default heap sizes (container limits are unset until you add them explicitly):
| Tier | Heap limit | Docker container limit |
|---|---|---|
| master | 2 GB | (not set) |
| hot | 28 GB | (not set) |
| warm | 28 GB | (not set) |
| cold | 28 GB | (not set) |
Recommended configuration¶
| Tier | Heap limit | Docker container limit |
|---|---|---|
| master | 2 GB | 6 GB |
| hot | 28 GB | 80 GB |
| warm | 28 GB | 80 GB |
| cold | 28 GB | 80 GB |
Enable bootstrap.memory_lock, disable swap for the container, and set memlock ulimits as in the examples below.
Full example: per-instance descriptor (three nodes)
```yaml
elasticsearch:
  instances:
    master-1:
      node: node1
      descriptor:
        environment:
          bootstrap.memory_lock: true
        mem_limit: 6g
        memswap_limit: 6g
        mem_swappiness: 0
        ulimits:
          memlock:
            soft: -1
            hard: -1
    hot-1:
      node: node1
      descriptor:
        environment:
          bootstrap.memory_lock: true
        mem_limit: 80g
        memswap_limit: 80g
        mem_swappiness: 0
        ulimits:
          memlock:
            soft: -1
            hard: -1
    warm-1:
      node: node1
      descriptor:
        environment:
          bootstrap.memory_lock: true
        mem_limit: 80g
        memswap_limit: 80g
        mem_swappiness: 0
        ulimits:
          memlock:
            soft: -1
            hard: -1
    cold-1:
      node: node1
      descriptor:
        environment:
          bootstrap.memory_lock: true
        mem_limit: 80g
        memswap_limit: 80g
        mem_swappiness: 0
        ulimits:
          memlock:
            soft: -1
            hard: -1
    master-2:
      node: node2
      descriptor:
        environment:
          bootstrap.memory_lock: true
        mem_limit: 6g
        memswap_limit: 6g
        mem_swappiness: 0
        ulimits:
          memlock:
            soft: -1
            hard: -1
    hot-2:
      node: node2
      descriptor:
        environment:
          bootstrap.memory_lock: true
        mem_limit: 80g
        memswap_limit: 80g
        mem_swappiness: 0
        ulimits:
          memlock:
            soft: -1
            hard: -1
    warm-2:
      node: node2
      descriptor:
        environment:
          bootstrap.memory_lock: true
        mem_limit: 80g
        memswap_limit: 80g
        mem_swappiness: 0
        ulimits:
          memlock:
            soft: -1
            hard: -1
    cold-2:
      node: node2
      descriptor:
        environment:
          bootstrap.memory_lock: true
        mem_limit: 80g
        memswap_limit: 80g
        mem_swappiness: 0
        ulimits:
          memlock:
            soft: -1
            hard: -1
    master-3:
      node: node3
      descriptor:
        environment:
          bootstrap.memory_lock: true
        mem_limit: 6g
        memswap_limit: 6g
        mem_swappiness: 0
        ulimits:
          memlock:
            soft: -1
            hard: -1
    hot-3:
      node: node3
      descriptor:
        environment:
          bootstrap.memory_lock: true
        mem_limit: 80g
        memswap_limit: 80g
        mem_swappiness: 0
        ulimits:
          memlock:
            soft: -1
            hard: -1
    warm-3:
      node: node3
      descriptor:
        environment:
          bootstrap.memory_lock: true
        mem_limit: 80g
        memswap_limit: 80g
        mem_swappiness: 0
        ulimits:
          memlock:
            soft: -1
            hard: -1
    cold-3:
      node: node3
      descriptor:
        environment:
          bootstrap.memory_lock: true
        mem_limit: 80g
        memswap_limit: 80g
        mem_swappiness: 0
        ulimits:
          memlock:
            soft: -1
            hard: -1
```
The same policy can be expressed more compactly by merging common descriptor keys at the service level and overriding only what differs (for example, mem_limit on masters):
Compact example: shared descriptor with master overrides
```yaml
elasticsearch:
  descriptor:
    environment:
      bootstrap.memory_lock: true
    mem_limit: 80g
    memswap_limit: 80g
    mem_swappiness: 0
    ulimits:
      memlock:
        soft: -1
        hard: -1
  instances:
    master-1:
      node: node1
      descriptor:
        mem_limit: 6g
        memswap_limit: 6g
    hot-1:
      node: node1
    warm-1:
      node: node1
    cold-1:
      node: node1
    master-2:
      node: node2
      descriptor:
        mem_limit: 6g
        memswap_limit: 6g
    hot-2:
      node: node2
    warm-2:
      node: node2
    cold-2:
      node: node2
    master-3:
      node: node3
      descriptor:
        mem_limit: 6g
        memswap_limit: 6g
    hot-3:
      node: node3
    warm-3:
      node: node3
    cold-3:
      node: node3
```
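After applying either variant and restarting, it is worth confirming that the heap is actually locked in memory; a minimal check, assuming Elasticsearch is reachable on localhost:9200:

```shell
# Every node should report "mlockall": true when bootstrap.memory_lock and the
# memlock ulimits took effect; filter_path trims the response to that field.
curl "http://localhost:9200/_nodes?filter_path=nodes.*.process.mlockall&pretty"
```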
On LogMan.io v25.47 and v26.12, implement the per-tier limits manually in model.yaml, using the preceding examples as templates. The planned v26.?? release is expected to ship Elasticsearch 8+ and apply the same targets automatically, unless you override them in the model.
| LogMan.io | v25.47.x | v26.12.x | v26.?? (planned) |
|---|---|---|---|
| Elasticsearch version | 7.17.28 | 7.17.28 | 8.x (TBD) |
| Compatibility with Elasticsearch 8+ | No | Yes | Yes |
| Memory configuration | Manual | Manual | Automatic |
Downscaling¶
The following worked example uses a 192 GB host.
- Reserve a small margin for the OS and non-Elasticsearch services (example: 6 GB): 192 − 6 = 186 GB remaining.
- Allocate 6 GB for the master container: 186 − 6 = 180 GB for the three data-tier containers combined.
- Split evenly across hot, warm, and cold on that node: 180 ÷ 3 = 60 GB per data container.
- Set heap to roughly one-third of that container limit (consistent with the 3× rule of thumb): 60 ÷ 3 = 20 GB heap per data node.
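The same arithmetic, scripted so it can be rerun for other host sizes; `host_gb`, `reserve_gb`, and `master_gb` are this example's numbers, not fixed values:

```shell
host_gb=192        # total host RAM
reserve_gb=6       # margin for OS and non-Elasticsearch services
master_gb=6        # master container limit

remaining=$(( host_gb - reserve_gb ))      # RAM left for Elasticsearch
data_total=$(( remaining - master_gb ))    # RAM for hot + warm + cold
per_container=$(( data_total / 3 ))        # limit per data container
heap=$(( per_container / 3 ))              # heap at one third (3x rule)
echo "data container: ${per_container} GB, heap: ${heap} GB"
```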
Resulting targets:
| Tier | Heap limit | Docker container limit |
|---|---|---|
| master | 2 GB | 6 GB |
| hot | 20 GB | 60 GB |
| warm | 20 GB | 60 GB |
| cold | 20 GB | 60 GB |
Set ES_JAVA_OPTS so -Xms and -Xmx both match the target heap (for example, -Xms20g -Xmx20g for 20 GB). In production mode, Elasticsearch's bootstrap checks refuse to start a node whose initial and maximum heap sizes differ.
Downscaled example: model.yaml fragment (192 GB host)
```yaml
elasticsearch:
  descriptor:
    environment:
      bootstrap.memory_lock: true
      ES_JAVA_OPTS: "-Xms20g -Xmx20g"
    mem_limit: 60g
    memswap_limit: 60g
    mem_swappiness: 0
    ulimits:
      memlock:
        soft: -1
        hard: -1
  instances:
    master-1:
      node: node1
      descriptor:
        environment:
          ES_JAVA_OPTS: "-Xms2g -Xmx2g"  # masters keep the 2 GB heap
        mem_limit: 6g
        memswap_limit: 6g
    hot-1:
      node: node1
    warm-1:
      node: node1
    cold-1:
      node: node1
    master-2:
      node: node2
      descriptor:
        environment:
          ES_JAVA_OPTS: "-Xms2g -Xmx2g"  # masters keep the 2 GB heap
        mem_limit: 6g
        memswap_limit: 6g
    hot-2:
      node: node2
    warm-2:
      node: node2
    cold-2:
      node: node2
    master-3:
      node: node3
      descriptor:
        environment:
          ES_JAVA_OPTS: "-Xms2g -Xmx2g"  # masters keep the 2 GB heap
        mem_limit: 6g
        memswap_limit: 6g
    hot-3:
      node: node3
    warm-3:
      node: node3
    cold-3:
      node: node3
```

Note that each master overrides ES_JAVA_OPTS as well as the container limits; otherwise it would inherit the 20 GB heap from the shared descriptor, which cannot fit in a 6 GB container.
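Once the downscaled limits are applied and the nodes have rejoined, the effective heap can be compared against the 20 GB target; a sketch assuming Elasticsearch on localhost:9200:

```shell
# heap.max shows the configured -Xmx per node; heap.percent shows current use.
curl "http://localhost:9200/_cat/nodes?v&h=name,node.role,heap.max,heap.percent"
```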