# Introduction to event lanes

## Data flow through TeskaLabs LogMan.io
The following example illustrates the standard flow of logs (events) through TeskaLabs LogMan.io.
- Collecting the raw events: Logs (events) are collected by LogMan.io Collector and sent to the central LogMan.io cluster.
- Archiving the raw events: LogMan.io Receiver stores raw events in Archive. Archive is an immutable database for long-term storage of the incoming logs. Raw logs can be retrieved from there and used for further analysis.
- Parsing the events: Raw logs are consumed from Archive and sent for parsing. First, they arrive in the Kafka `received` topic. LogMan.io Parsec consumes raw logs from the `received` Kafka topic and applies the selected parsing rules.
    - Successfully parsed events continue to the Kafka `events` topic. LogMan.io Depositor consumes parsed events from Kafka and stores them in the Elasticsearch `events` index.
    - When parsing fails, events become unparsed and continue to the Kafka `others` topic. LogMan.io Depositor consumes unparsed events from Kafka and stores them in the Elasticsearch `others` index.
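The routing decision in the parsing step above can be sketched in a few lines of Python. This is an illustrative sketch only, not LogMan.io code: the topic names `received`, `events`, and `others` come from the text, while the `parse_event` rule and the `route` function are hypothetical stand-ins for LogMan.io Parsec and the Kafka producer.

```python
import json

def parse_event(raw: bytes):
    """Hypothetical parsing rule: accept JSON objects that carry a
    'message' field. Return the parsed event, or None on failure."""
    try:
        event = json.loads(raw)
    except (ValueError, UnicodeDecodeError):
        return None
    if not isinstance(event, dict) or "message" not in event:
        return None
    return event

def route(raw: bytes):
    """Mimic the fan-out described above: successfully parsed events
    go to the 'events' topic, unparsed ones to the 'others' topic."""
    parsed = parse_event(raw)
    if parsed is not None:
        return ("events", parsed)
    return ("others", raw)
```

For example, `route(b'{"message": "login ok"}')` returns the `events` topic with the parsed dictionary, while `route(b"not json")` falls through to the `others` topic with the raw payload untouched.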
## Event lanes
When a new log source is connected and a data stream is assigned to it, TeskaLabs LogMan.io automatically creates a new event lane. An event lane describes:
- which parsing rules will be applied to the data stream
- which dashboards, reports, and other content in the Library will be enabled for the tenant that owns the data stream
- the classification of the data stream (vendor, product, category, etc.)
- the data source used in the Discover screen
- which Kafka topics and Elasticsearch indices will be used for that data stream
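To make the list above concrete, the attributes an event lane ties together can be modeled as a small data structure. This is a hypothetical sketch, not LogMan.io's actual schema: the field names and the topic/index naming conventions in the properties are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class EventLane:
    """Illustrative model of an event lane's attributes.

    Field names are hypothetical; they do not mirror LogMan.io's
    real event lane declaration format."""
    tenant: str         # every event lane belongs to exactly one tenant
    stream: str         # name of the data stream (the connected log source)
    vendor: str         # classification: who makes the log source
    product: str        # classification: which product emits the logs
    parsing_rules: str  # reference to the parsing rules applied by Parsec

    @property
    def kafka_topic_received(self) -> str:
        # Invented naming convention for the per-stream Kafka topic.
        return f"received.{self.tenant}.{self.stream}"

    @property
    def elasticsearch_index(self) -> str:
        # Invented naming convention for the per-stream Elasticsearch index.
        return f"{self.tenant}-events-{self.stream}"
```

Because the tenant name is part of every derived topic and index name in this sketch, two tenants naturally end up with disjoint resources, which matches the one-tenant-per-event-lane rule stated below.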
Every event lane belongs to a single tenant only. Two tenants cannot share the same event lane.