Setting up Elasticsearch Remote Cluster¶
Elasticsearch remote cluster functionality allows you to connect multiple Elasticsearch clusters together, enabling cross-cluster search and data access within TeskaLabs LogMan.io. This is useful for distributed deployments where you need to query data across different clusters.
Prerequisites¶
- ASAB Remote Control version 25.25.03 or newer
- Network connectivity: Both clusters must be able to communicate over the network
- SSL/TLS certificates: Both clusters must have SSL/TLS enabled for transport layer security
Overview¶
Setting up a remote cluster connection involves configuration on both sides:
- Central Cluster: The cluster that will connect to the remote cluster
- Remote Cluster: The cluster that will be accessed remotely. Multiple remote clusters can be connected to a single central cluster.
Both clusters need to be configured with the remote_cluster_client role and exchange SSL certificates to establish a secure connection.
Remote Cluster Configuration¶
Node Roles¶
Add the remote_cluster_client role to all master nodes in the cluster. This should be done via the Remote Control in the model configuration:
elasticsearch:
  instances:
    master-1:
      elastic:
        node.roles:
          - remote_cluster_client
        xpack.security.transport.ssl.certificate_authorities:
          - |
            -----BEGIN CERTIFICATE-----
            <CENTRAL_CLUSTER_CA_CERTIFICATE>
            -----END CERTIFICATE-----
The complete node configuration should include:
cluster.name: <cluster-name>
discovery.seed_hosts: <master-node-host>:<transport-port>
http.port: 9200
network.host: <node-hostname>
node.name: elasticsearch-master-1
node.roles: master,ingest,remote_cluster_client
transport.port: 9400
xpack.security.enabled: true
xpack.security.transport.ssl.certificate: certs/certificate.pem
xpack.security.transport.ssl.certificate_authorities: certs/ca.pem
xpack.security.transport.ssl.enabled: true
Certificates¶
From the central cluster, copy the CA certificate from /opt/site/elasticsearch-master-1/certs/ca.pem and append it to xpack.security.transport.ssl.certificate_authorities in the remote cluster's model configuration.
The result should contain both certificates:
xpack.security.transport.ssl.certificate_authorities:
  - |
    -----BEGIN CERTIFICATE-----
    <CENTRAL_CLUSTER_CA_CERTIFICATE>
    -----END CERTIFICATE-----
    -----BEGIN CERTIFICATE-----
    <REMOTE_CLUSTER_CA_CERTIFICATE>
    -----END CERTIFICATE-----
Central Cluster Configuration¶
Node Roles¶
Add the remote_cluster_client role to all master nodes in the central cluster:
elasticsearch:
  instances:
    master-1:
      elastic:
        node.roles:
          - remote_cluster_client
The complete node configuration should include:
node.name: elasticsearch-master-1
node.roles: master,ingest,remote_cluster_client
transport.port: 9400
xpack.security.enabled: true
xpack.security.transport.ssl.certificate: certs/certificate.pem
xpack.security.transport.ssl.certificate_authorities: certs/ca.pem
xpack.security.transport.ssl.enabled: true
Certificates¶
From the remote cluster, copy the content of /opt/site/elasticsearch-master-1/certs/ca.pem and append it to xpack.security.transport.ssl.certificate_authorities in the central cluster's model configuration.
The result should contain both certificates:
xpack.security.transport.ssl.certificate_authorities:
  - |
    -----BEGIN CERTIFICATE-----
    <REMOTE_CLUSTER_CA_CERTIFICATE>
    -----END CERTIFICATE-----
    -----BEGIN CERTIFICATE-----
    <CENTRAL_CLUSTER_CA_CERTIFICATE>
    -----END CERTIFICATE-----
Enabling Remote Cluster Connection¶
After configuring both clusters, enable the remote cluster connection from the central cluster.
Using Kibana Dev Tools¶
- Navigate to Kibana and open Dev Tools
- Run the following request to register the remote cluster:
PUT /_cluster/settings
{
  "persistent" : {
    "cluster" : {
      "remote" : {
        "<remote-cluster-name>" : {
          "mode": "proxy",
          "proxy_address": "<proxy-ip>:<proxy-port>"
        }
      }
    }
  }
}
Replace:
- <remote-cluster-name>: Must match the tenant name (e.g., if tenant is customer1, use customer1). This is required for BS Query data sources to work correctly with the pattern {{tenant}}:lmio-{{tenant}}-events*
- <proxy-ip>: The IP address of the proxy server
- <proxy-port>: The port the proxy is listening on (e.g., 15003)
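For example, a minimal sketch of the same request with the placeholders filled in, assuming a hypothetical tenant named customer1 and a proxy reachable at 10.0.0.5:15003 (replace both with your real values):

# customer1 and 10.0.0.5:15003 are illustrative values only
PUT /_cluster/settings
{
  "persistent" : {
    "cluster" : {
      "remote" : {
        "customer1" : {
          "mode": "proxy",
          "proxy_address": "10.0.0.5:15003"
        }
      }
    }
  }
}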
Remote Cluster Name Must Match Tenant Name
The remote cluster name must exactly match the tenant name for BS Query data sources to function correctly. When you configure a data source with the pattern {{tenant}}:lmio-{{tenant}}-events*, the remote cluster name is used to resolve the {{tenant}} variable.
Multiple Remote Clusters¶
If you need to connect to multiple remote clusters, specify all of them in a single request. Each remote cluster name must match its corresponding tenant name:
PUT /_cluster/settings
{
  "persistent" : {
    "cluster" : {
      "remote" : {
        "tenant1" : {
          "mode": "proxy",
          "proxy_address": "10.203.100.130:15003"
        },
        "tenant2" : {
          "mode": "proxy",
          "proxy_address": "10.52.23.51:15003"
        },
        "tenant3" : {
          "mode": "proxy",
          "proxy_address": "10.14.142.21:15003"
        }
      }
    }
  }
}
In this example, tenant1, tenant2, and tenant3 are both the remote cluster names and the tenant names. This allows BS Query to correctly resolve data source patterns like {{tenant}}:lmio-{{tenant}}-events*.
All Clusters Must Be Present
When configuring multiple remote clusters, all clusters must be specified in a single request. If you add clusters incrementally, make sure to include all previously configured clusters in each update.
Remote Cluster Name = Tenant Name
Each remote cluster name must exactly match its corresponding tenant name. This ensures that BS Query can correctly query data using patterns like {{tenant}}:lmio-{{tenant}}-events*.
Verifying Connection Status¶
After enabling the remote cluster, verify the connection status:
GET /_remote/info
Expected Response¶
A successful connection should return:
{
  "<remote-cluster-name>" : {
    "connected" : true,
    "mode" : "proxy",
    "proxy_address" : "<proxy-ip>:<proxy-port>",
    "server_name" : "",
    "num_proxy_sockets_connected" : 18,
    "max_proxy_socket_connections" : 18,
    "initial_connect_timeout" : "30s",
    "skip_unavailable" : false
  }
}
Key indicators of a successful connection:
"connected" : true- The cluster is connected"num_proxy_sockets_connected"- Number of active connections (should matchmax_proxy_socket_connections)"mode" : "proxy"- Connection is using proxy mode
Troubleshooting Connection Issues¶
If the connection shows "connected" : false:
- Check network connectivity: Verify that the central cluster can reach the proxy IP and port
- Check certificates: Verify that both clusters have each other's CA certificates properly configured
- Review Elasticsearch logs: Check logs on both clusters for connection errors
- Verify node roles: Ensure all master nodes have the remote_cluster_client role
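To surface the underlying error message, it can also help to run a simple cross-cluster search against the remote cluster from Dev Tools; a minimal sketch, assuming the remote cluster is named customer1 (a failing connection returns a transport or certificate error instead of search results):

# customer1 is an illustrative remote cluster name
GET customer1:*/_search?size=0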
Configuring BS Query Data Source¶
After the remote cluster is connected, configure BS Query to access data from the remote cluster.
Creating Elasticsearch Data Source¶
In the Library, create or edit an Elasticsearch data source configuration file in the /DataSources/ folder:
define:
  type: datasource/elasticsearch

specification:
  index: "{{tenant}}:lmio-{{tenant}}-events*"
This configuration:
- Defines an Elasticsearch data source type
- Specifies the index pattern using tenant variables
- Allows BS Query to query data from the remote cluster using the pattern {{tenant}}:lmio-{{tenant}}-events*
Using Remote Cluster in Queries¶
When querying data, you can specify the remote cluster name (which matches the tenant name) in the index pattern:
<tenant-name>:<index-pattern>
For example, if the tenant name is customer1 and the remote cluster is also named customer1, you can query events using:
customer1:lmio-customer1-events*
Since the remote cluster name matches the tenant name, BS Query will automatically resolve the {{tenant}} variable in data source patterns like {{tenant}}:lmio-{{tenant}}-events* to the correct remote cluster.
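As an illustration, a minimal Dev Tools search against the resolved pattern, assuming both the tenant and the remote cluster are named customer1:

# queries the remote cluster customer1 through cross-cluster search
GET customer1:lmio-customer1-events*/_search
{
  "size": 1,
  "query": {
    "match_all": {}
  }
}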
Best Practices¶
- Remote Cluster Naming: Always use the tenant name as the remote cluster name. This ensures compatibility with BS Query data sources and prevents configuration errors.
- Certificate Management: Keep CA certificates synchronized between clusters. If certificates are rotated, update both clusters.
- Proxy Configuration: Use dedicated proxy services for remote cluster connections to avoid conflicts with other services.
- Network Security: Ensure that proxy ports are properly secured and only accessible from authorized networks.
- Monitoring: Regularly check the connection status using GET /_remote/info to ensure clusters remain connected.
- Testing: Test the connection with simple queries before deploying to production. Verify that BS Query can access data using the tenant-based index patterns.
Troubleshooting¶
Connection Timeout¶
If connections time out:
- Verify network connectivity between clusters
- Check firewall rules allow traffic on proxy ports
- Ensure proxy services are running and accessible
- Review Elasticsearch transport layer logs
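It can also help to confirm what is actually registered on the central cluster; a quick check from Dev Tools (the filter_path parameter only trims the response down to the remote cluster settings):

# shows the persistent remote cluster settings only
GET /_cluster/settings?filter_path=persistent.cluster.remote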
Certificate Errors¶
If you see certificate validation errors:
- Verify CA certificates are correctly appended (not replaced) in both clusters
- Ensure certificates are in PEM format with proper BEGIN/END markers
- Check that SSL/TLS is enabled on both clusters
- Verify certificate paths in the configuration