How to manage system logs using the ELK stack
Centrally managing system logs is an important practice for enterprise security. Expert Dejan Lukan explains how to set up cloud servers to forward their logs to a central ELK stack for this purpose.
A previous article provides an overview of how to set up the environment for open source monitoring of device logs across a wide range of devices. From there, the next step is to configure the cloud servers to send system logs to the central ELK server, where the logs are gathered and analyzed for new knowledge about the systems.
Applications and daemons used to write log messages to .log files to inform the system administrator about a problem or an action. The main problem was that the log format wasn't standardized, which resulted in inefficient, unmanageable log parsing and analysis. When syslog came into the picture, many of these problems were resolved, but security was still ignored and the system logs were still not centrally managed.
Nowadays, a number of cloud services still store system logs on the separate systems that generate them. An attacker with access to such a cloud system can tamper with its logs to hide his presence. To prevent that, a centrally managed cloud log server can be used, where the logs from every other cloud server are collected and analyzed for possible malicious actions.
For this example, an ELK stack will provide an environment for central management and analysis of system logs. The ELK stack comprises three components: Elasticsearch for log storage and indexing, Logstash for log parsing, and Kibana as an interface for searching and analyzing the system logs.
The environment
The environment can be set up by using a Docker container, which is provided by Protean Security for this example. Use the docker run command to download the proteansec/elk image and forward a number of ports. Logstash listens on TCP/UDP port 5514, where it accepts and parses the syslog messages. Elasticsearch listens on ports 9200/9300, but those ports don't have to be exposed to the outside world, since Elasticsearch is used directly only by Logstash and Kibana. Kibana exposes a web interface on TCP port 5601, which can be used to search and analyze the logs.
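A minimal sketch of starting such a container is shown below; the image name and port numbers come from the description above, while the container name and detached mode flag are only illustrative:

[plain]
# Run the proteansec/elk image in the background, forwarding the Logstash
# syslog port (TCP and UDP) and the Kibana web interface. Ports 9200/9300
# are intentionally not published, so Elasticsearch stays internal.
docker run -d --name elk \
    -p 5514:5514 -p 5514:5514/udp \
    -p 5601:5601 \
    proteansec/elk
[/plain]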
Companion article
See Infosec Institute's accompanying article on Incorporating Cloud Security Log into Open-Source Cloud Monitoring Solutions.
After establishing the environment, connect to the Kibana web interface on port 5601 at the IP address of the host where the Docker container is running to search and analyze all the gathered logs.
To gather the logs in the centrally managed Docker container, configure every cloud server to send its logs to the exposed port 5514. This can be done by installing rsyslog on each cloud server and adding another configuration file to the /etc/rsyslog.d directory, stating that every message from every syslog subsystem should be sent to the Docker container on port 5514. Create another file -- /etc/rsyslog.d/10-logstash.conf -- with the contents "*.* @@docker:5514", where docker is a DNS-resolvable hostname of the container, as sketched below.
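The file would look like the following; note that @@ sends messages over TCP, while a single @ would use UDP:

[plain]
# /etc/rsyslog.d/10-logstash.conf
# Forward every facility and severity (*.*) over TCP (@@) to the
# container; "docker" must resolve to the host running the ELK image.
*.* @@docker:5514
[/plain]

After saving the file, restart rsyslog (for example, with service rsyslog restart) so the forwarding rule takes effect.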
If a number of cloud systems are accessible over SSH, it's possible to automate this process with the simple Fabric Python library. Fabric enables running commands on remote systems by sending them over SSH, which automates deployment and administration tasks.
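A minimal sketch of such automation, using the classic Fabric 1.x API, might look like the following; the hostnames and username are hypothetical, and the package installation command assumes a Debian-based system:

[plain]
# fabfile.py -- a hypothetical Fabric (1.x) script that installs rsyslog
# and deploys the forwarding rule to every listed host. Run it with:
#   fab deploy_rsyslog
from fabric.api import env, sudo

env.hosts = ['cloud1.example.com', 'cloud2.example.com']  # hypothetical hosts
env.user = 'admin'                                        # hypothetical user

def deploy_rsyslog():
    # Install rsyslog, write the forwarding rule and restart the daemon.
    sudo('apt-get install -y rsyslog')
    sudo('echo "*.* @@docker:5514" > /etc/rsyslog.d/10-logstash.conf')
    sudo('service rsyslog restart')
[/plain]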
Detection of important events
The OpenVPN application generates logs upon successful user authentication. During penetration testing and security assessments, we've stumbled upon various certificates and credentials used to authenticate to the VPN server. By obtaining all the necessary files, an attacker can establish a VPN connection to the company network and gain access to internal resources, which is especially dangerous. To detect such an action, set up the ELK stack to monitor VPN connections to the server and identify the user, as well as the IP the user is connecting from. When a user connects from an unknown IP in some distant country, the event can be detected and reported to a security professional, who can verify whether or not it is malicious.
The OpenVPN server generates the following event when a user authenticates to the server successfully:
[plain]
<29>Aug 21 15:01:32 openvpn[9739]: 1.2.3.4:39236 [name.surname] Peer Connection Initiated with [AF_INET]1.2.3.4:39236
[/plain]
The log message is already pushed to the ELK stack, where Logstash parses it, but Logstash doesn't yet have an appropriate rule to extract the username and IP from it. To write such a rule, use the Grok Debugger service, which accepts the actual system log message as well as the rule, and parses the message according to the rule.
An appropriate rule will parse the message as presented below, where the username is stored in the vpnuser field and the host in the vpnhost field:
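A Logstash filter along these lines would do the job; this is only a sketch matched against the sample message above, and the exact pattern may need adjusting to the local syslog format (for instance, whether the <29> priority prefix is still present when the message reaches Logstash):

[plain]
# A grok rule sketch for the OpenVPN authentication message shown above.
filter {
  grok {
    match => { "message" => "%{SYSLOGTIMESTAMP:timestamp} openvpn\[%{POSINT:pid}\]: %{IP:vpnhost}:%{POSINT:vpnport} \[%{DATA:vpnuser}\] Peer Connection Initiated" }
  }
}
[/plain]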
In the Kibana interface, the parsed message is now visible, containing the fields vpnhost (the IP address of the remote user), vpnport (the port of the remote user) and vpnuser (the ID of the authenticated user).
To add geographical information to the message, enable the GeoIP filter in Logstash. This will automatically add the latitude and longitude to every message containing the vpnhost IP address. A number of additional geographical fields will also be added to the appropriate messages, identifying the country where the VPN session was established.
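Enabling it can be as simple as the sketch below, which tells the geoip filter to look up the vpnhost field; depending on the Logstash version, a path to a local GeoIP database may also need to be configured:

[plain]
# Enrich parsed messages with geographical data derived from vpnhost.
filter {
  geoip {
    source => "vpnhost"
  }
}
[/plain]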
It's easy to add a new map in Kibana to show all of the IP addresses from which the VPN users have been connecting. In this example, a single IP originating from Slovenia appears on the map; any additional established VPN session IPs would be just as easily distinguishable.
Centrally controlled log management system a 'must' for enterprise security
At any given time, every system generates a number of events that reveal information about the actions it's currently undertaking. System logs are used to communicate this information to the system administrator.
It's important to properly handle the OpenVPN syslog message generated at the time of user authentication. Appropriate Logstash rules need to be written to parse the message in order to extract the sensitive information from it, such as the username and the IP address from which the user is connecting. The IP address can be further used to determine the country from which the VPN connection was established.
If an employee is in a different country at the time of the event, a security alert should be triggered to notify a security professional about a possible breach. Such an event can occur in two use cases. In the first, the user has connected to the VPN from a conference or while on vacation, which shouldn't be treated as a security incident. The second involves a hacker obtaining the user's private key and credentials and using them to connect to the enterprise VPN server, which could be devastating to the company. That kind of breach is hard to detect, but it is possible with the presented tools and techniques.
Having a centrally controlled log management system is a must for every modern enterprise in order to keep all system logs in one place. This is mandatory to detect possible anomalies and threats that would otherwise go unnoticed for weeks or even months.
About the author:
Dejan Lukan has extensive knowledge of Linux/BSD system maintenance, as well as security-related concepts including system administration, network administration, security auditing, penetration testing, reverse engineering, malware analysis, fuzzing, debugging and antivirus evasion. He is also fluent in more than a dozen programming languages, and regularly writes security-related articles for his own website.