Behind the scenes of an ELK system
Behind every security measure you take, there should be an information management system helping you make decisions. If you work in security, you need a way to collect, process, store and analyze huge amounts of data in order to monitor how your systems behave, detect anomalies and evaluate the results of your actions.
Have you ever wondered how to manage billions of logs and metrics from thousands of devices in your infrastructure? If you need high availability and a resilient, stable system to process your data, this is the tutorial for you.
Based on the experience gained over the past four years at the University of Oslo processing billions of logs a day from more than 15,000 devices, this tutorial will share insider knowledge and many practical tips on how to achieve this with Linux and open source software.
You will learn how to put together HAProxy, agents, Logstash, Elasticsearch and RabbitMQ to work at scale. You will also hear about the problems and pitfalls we have encountered over the years and what we learned from them.
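To give a flavour of how these components fit together, here is a minimal sketch of a Logstash pipeline that consumes events from RabbitMQ and indexes them into Elasticsearch; the hostnames, queue name and index pattern are placeholder assumptions, not the actual University of Oslo setup:

```
# Minimal sketch: RabbitMQ acts as a buffer between the agents
# and Logstash, which then ships events to Elasticsearch.
input {
  rabbitmq {
    host    => "mq.example.org"   # assumed broker hostname
    queue   => "logs"             # assumed queue name
    durable => true               # survive broker restarts
  }
}
output {
  elasticsearch {
    hosts => ["https://es.example.org:9200"]  # assumed cluster endpoint
    index => "logs-%{+YYYY.MM.dd}"            # daily indices
  }
}
```

In a setup like this, the queue absorbs bursts when indexing slows down, and more Logstash instances can be added as competing consumers on the same queue to scale out processing.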
The tutorial format is a lecture with time for questions and short discussions as we go. Given the nature of the subject and the time available, there will be no hands-on or practical exercises during the lecture. The tutorial should provide the background, essential theory and details needed to enable you to continue working on your own afterwards.