Log analysis - definition & overview

In this article
What is log analysis?
What is a log analyzer?
How do you analyze logs?
Log analysis functions and methods
How to perform log analysis
Ensuring effective log analysis with log analyzers
Centralized log collection & analysis
Improved root cause analysis
Log data is big data
Sumo Logic aggregates and analyzes log files from the cloud

What is log analysis?

Log analysis is the process of reviewing, interpreting and understanding computer-generated records called logs.

Key takeaways

  • Log analysis functions manipulate data to help users organize and extract information from the logs.
  • Organizations that effectively monitor their cyber security with log analysis can make their network assets more difficult to attack.
  • Log analysis is a crucial activity for server administrators who value a proactive approach to IT.
  • With Sumo Logic's cloud-native platform, organizations and DevOps teams can aggregate and centralize event logs from applications and their infrastructure components throughout private, public and hybrid cloud environments.

What is a log analyzer?

A log analyzer is a tool used to collect, parse, and analyze the data written to log files. Log analyzers provide functionality that helps developers and operations personnel monitor their applications and visualize log data in formats that help contextualize it. This, in turn, enables the development team to gain insight into issues within their applications and identify opportunities for improvement. When we reference a log analyzer, we are referring to software designed for use in log management and log analysis.

Log analysis offers many benefits, but these benefits cannot be realized if the processes for log management and log file analysis are not optimized for the task. Development teams can achieve this level of optimization through the use of log analyzers.

How do you analyze logs?

One of the traditional ways to analyze logs was to export the files and open them in Microsoft Excel. This time-consuming process has largely been abandoned as tools like Sumo Logic have entered the market. With Sumo Logic, you can integrate with several different environments, including IIS web servers, NGINX, and others. With free trials available to test log analysis tooling at no risk, there has never been a better time to see how log analyzers can help improve your log analysis strategy and the processes described in this article.

Log analysis functions and methods

Log analysis functions manipulate data to help users organize and extract information from the logs. Here are just a few of the most common methodologies for log analysis.

Normalization
Normalization is a data management technique wherein parts of a message are converted to the same format. The process of centralizing and indexing log data should include a normalization step where attributes from log entries across applications are standardized and expressed in the same format.
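
Concretely, a normalization step might map timestamps and severity fields from differently formatted sources into one shared schema. The following Python sketch assumes two hypothetical sources (an Apache-style web log and an application log) whose field names are invented for illustration.

    from datetime import datetime, timezone

    # Map two hypothetical log formats onto one shared schema: UTC ISO-8601
    # timestamps, lowercase severity, and a common set of field names.
    def normalize_web(entry: dict) -> dict:
        return {
            "timestamp": datetime.strptime(entry["time"], "%d/%b/%Y:%H:%M:%S %z")
                                 .astimezone(timezone.utc).isoformat(),
            "severity": entry.get("level", "info").lower(),
            "message": entry["msg"],
            "source": "web",
        }

    def normalize_app(entry: dict) -> dict:
        return {
            "timestamp": datetime.fromtimestamp(entry["epoch"], tz=timezone.utc).isoformat(),
            "severity": entry["severity"].lower(),
            "message": entry["text"],
            "source": "app",
        }

    normalized = [
        normalize_web({"time": "10/Oct/2023:13:55:36 +0000", "level": "WARN", "msg": "slow response"}),
        normalize_app({"epoch": 1696946136, "severity": "ERROR", "text": "db timeout"}),
    ]
    print(normalized)

Once every entry shares the same shape, downstream steps such as indexing, searching and correlation can treat all sources uniformly.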

Pattern recognition
Machine learning applications can now be implemented with log analysis software to compare incoming messages with a pattern book and distinguish between "interesting" and "uninteresting" log messages. Such a system might discard routine log entries, but send an alert when an abnormal entry is detected.
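
The sketch below is a simple rule-based stand-in for that idea: a hand-written "pattern book" of hypothetical regular expressions decides whether an entry is discarded, kept, or escalated. A production system might learn such patterns with machine learning rather than hard-coding them.

    import re

    # Hypothetical pattern book: routine entries are discarded, known-bad
    # patterns raise an alert, everything else is kept for later review.
    ROUTINE_PATTERNS = [
        re.compile(r"health check passed"),
        re.compile(r"GET /static/.* 200"),
    ]
    ABNORMAL_PATTERNS = [
        re.compile(r"out of memory", re.IGNORECASE),
        re.compile(r"failed login .* root", re.IGNORECASE),
    ]

    def triage(line: str) -> str:
        if any(p.search(line) for p in ABNORMAL_PATTERNS):
            return "alert"
        if any(p.search(line) for p in ROUTINE_PATTERNS):
            return "discard"
        return "keep"

    print(triage("13:55:36 failed login attempt for root from 10.0.0.5"))  # alert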

Classification and tagging
As part of our log analysis, we may want to group log entries that are of the same type, track all errors of a certain kind across applications, or filter the data in different ways.
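
As an illustration, the hypothetical Python sketch below attaches tags to each entry with a few simple rules and then groups entries by tag, so that one error type can be tracked across applications.

    from collections import defaultdict

    # Assign tags to each (hypothetical) entry, then group entries by tag.
    def tag(entry: dict) -> set:
        tags = set()
        if entry["severity"] == "error":
            tags.add("error")
        if "timeout" in entry["message"]:
            tags.add("timeout")
        return tags

    entries = [
        {"source": "payments", "severity": "error", "message": "db timeout"},
        {"source": "web", "severity": "error", "message": "db timeout"},
        {"source": "web", "severity": "info", "message": "user login"},
    ]

    by_tag = defaultdict(list)
    for entry in entries:
        for t in tag(entry):
            by_tag[t].append(entry)

    print(len(by_tag["timeout"]))  # 2: the same error type seen in two applications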

Correlation analysis
When an event happens, it is likely to be reflected in logs from several different sources. Correlation analysis is the analytical process of gathering log information from a variety of systems and discovering the log entries from each system that connect to the known event.
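
One simple way to approximate this is to pull every entry from each source that falls within a small time window around the known event, as in the hypothetical Python sketch below.

    from datetime import datetime, timedelta

    # Collect entries from several sources that fall within +/- 2 minutes
    # of a known incident time, then sort them into a single timeline.
    def correlate(sources: dict, incident_time: datetime,
                  window: timedelta = timedelta(minutes=2)):
        related = []
        for name, entries in sources.items():
            for ts, message in entries:
                if abs(ts - incident_time) <= window:
                    related.append((name, ts, message))
        return sorted(related, key=lambda item: item[1])

    incident = datetime(2023, 10, 10, 13, 55)
    sources = {
        "firewall": [(datetime(2023, 10, 10, 13, 54), "blocked outbound connection")],
        "app": [(datetime(2023, 10, 10, 13, 55), "request timeout"),
                (datetime(2023, 10, 10, 12, 0), "startup complete")],
    }
    for source, ts, message in correlate(sources, incident):
        print(source, ts.isoformat(), message)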

How to perform log analysis

Logs provide visibility into the health and performance of an application and infrastructure stack, enabling developer teams and system administrators to easily diagnose and rectify issues. Here's our basic five-step process for managing logs with log analysis software:

  1. Instrument and collect - install a collector to gather data from any part of your stack. Log data may be streamed to a collector over the network, or stored in files for later review.

  2. Centralize and index - integrate data from all log sources into a centralized platform to streamline the search and analysis process. Indexing makes logs searchable, so security and IT personnel can quickly find the information they need.

  3. Search and analyze - Analysis techniques such as pattern recognition, normalization, tagging, and correlation analysis can be implemented either manually or using native machine learning.

  4. Monitor and alert - With machine learning and analytics, IT organizations can implement real-time, automated log monitoring that generates alerts when certain conditions are met. Automation can enable the continuous monitoring of large volumes of logs that cover a variety of systems and applications (a minimal alerting sketch follows this list).

  5. Report and dashboard - Streamlined reports and dashboarding are key features of log analysis software. Customized reusable dashboards can also be used to ensure that access to confidential security logs and metrics is provided to employees on a need-to-know basis.
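
As a concrete illustration of step 4, the Python sketch below counts error entries in a sliding five-minute window and fires an alert once a threshold is crossed. The threshold, window size, and notify() hook are assumptions made for the example; a log analysis platform provides this kind of alerting natively.

    from collections import deque
    from datetime import datetime, timedelta

    WINDOW = timedelta(minutes=5)   # hypothetical sliding window
    THRESHOLD = 10                  # hypothetical alert threshold
    recent_errors = deque()

    def notify(message: str) -> None:
        print("ALERT:", message)    # stand-in for paging or a webhook

    def observe(timestamp: datetime, severity: str) -> None:
        if severity == "error":
            recent_errors.append(timestamp)
        # Drop entries that have fallen out of the window.
        while recent_errors and timestamp - recent_errors[0] > WINDOW:
            recent_errors.popleft()
        if len(recent_errors) >= THRESHOLD:
            notify(f"{len(recent_errors)} errors in the last {WINDOW}")

    # Demo: ten errors within a minute trips the alert.
    start = datetime(2023, 10, 10, 13, 55)
    for i in range(10):
        observe(start + timedelta(seconds=5 * i), "error")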

Ensuring effective log analysis with log analyzers

Effective log analysis requires the use of modern log analysis concepts, tooling, and practices. The following tactics can increase the effectiveness of an organization’s log analysis strategy, simplify the process for incident response, and improve application quality.

Real-time log analysis

Real-time log analysis refers to the process of collecting and aggregating log event information in a manner that is readable by humans, thereby providing insight into an application in real time. With the assistance of a log aggregator and analysis software, a DevOps team will have several distinct advantages when their logs are analyzed in this way.

When log analysis is performed in real-time, development teams are alerted to potential problems within their applications at the earliest possible moment. This enables them to be as proactive as possible, thereby limiting the impact that an incident has on the end users. The types of incidents that previously went unreported and undetected by the DevOps team will now have the team’s attention in a matter of minutes. This provides the necessary framework for increasing application availability and reliability.

In addition to notifying the development team of application issues nearly instantly, real-time log file analysis provides developers with critical context that enables them to resolve incidents quickly and completely. This limits the amount of downtime experienced by the customer while also adding to the likelihood that the issue will be thoroughly resolved.

Log analysis in cyber security

Organizations that wish to enhance their capabilities in cyber security must develop capabilities in log analysis that can help them actively identify and respond to cyber threats. Organizations that effectively monitor their cyber security with log analysis can make their network assets more difficult to attack. Cyber security monitoring can also reduce the frequency and severity of cyber-attacks, promote earlier response to threats and help organizations meet compliance requirements for cyber security, including:

  • ISO/IEC 27002:2013 Information technology — Security techniques — Code of practice for information security controls

  • PCI DSS v3.1 (Requirements 10 and 11)

  • NIST 800-137 Information Security Continuous Monitoring (ISCM) for Federal Information Systems and Organizations

The first step to an effective cyber security monitoring program is to identify business applications and technical infrastructure where event logging should be enabled. Use this list as a starting point for determining what types of logs your organization should be monitoring:

  • System logs
    • System activity logs

    • Endpoint logs

    • Application logs

    • Authentication logs

    • Physical security logs

  • Networking logs
    • Email logs

    • Firewall logs

    • VPN logs

    • Netflow logs

  • Technical logs
    • HTTP proxy logs

    • DNS, DHCP and FTP logs

    • AppFlow logs

    • Web and SQL server logs

  • Cyber security monitoring logs
    • Malware protection software logs

    • Network intrusion detection system (NIDS) logs

    • Network intrusion prevention system (NIPS) logs

    • Data loss protection (DLP) logs

Event logging for all of these systems and applications can generate a high volume of data, with significant expense and resources required to handle logs effectively. Cyber security experts should determine the most important logs for consistent monitoring and leverage automated or software-based log analysis methods to save time and resources.

Log analysis in Linux

The Linux operating system offers several unique features that make it popular among its dedicated user base. In addition to being free to use thanks to an open-source development model with a large and supportive community, Linux automatically generates and saves log files that make it easy for server administrators to monitor important events that take place on the server, in the kernel, or in any of the active services and applications.

Log analysis is a crucial activity for server administrators who value a proactive approach to IT. By tracking and monitoring Linux log files, administrators can keep tabs on server performance, discover errors, detect potential security threats and privacy issues and even anticipate future problems before they occur. Linux keeps four types of logs that system administrators can review and analyze:

  • Application logs - Linux creates log files that track the behavior of several applications. Application logs contain records of events, errors, warnings, and other messages that come from applications.

  • Event logs - the purpose of an event log is to record events that take place during the execution of a system. Event logs provide an audit trail, enabling system administrators to understand how the system is behaving and diagnose potential problems.

  • Service logs - The Linux OS creates a log file called /var/log/daemon.log, which tracks important background services that have no graphical output. Logging is especially useful for services that lack a user interface, as there are few other ways for users to check on the activity and performance of the service.

  • System logs - System log files contain events that are logged by the operating system's components. This includes things like device changes, device events, updates to device drivers and other operations. In Linux, the file /var/log/syslog contains most of the typical system activity logs. Users can analyze these logs to discover things like non-kernel boot errors, system start-up messages, and application errors (see the sketch after this list).
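
As a small illustration of working with these files, the Python sketch below scans /var/log/syslog (the Debian/Ubuntu location; reading it may require elevated privileges) for error and failure messages so they can be reviewed or forwarded.

    import re

    # Scan the system log for lines that mention errors or failures.
    ERROR_PATTERN = re.compile(r"\b(error|failed|failure)\b", re.IGNORECASE)

    def scan_syslog(path: str = "/var/log/syslog"):
        with open(path, errors="replace") as syslog:
            for line in syslog:
                if ERROR_PATTERN.search(line):
                    yield line.rstrip()

    for line in scan_syslog():
        print(line)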

Centralized log collection & analysis

Log events are generated all the time in any application built with visibility and observability in mind. As end users utilize the application, they are creating log events that need to be captured and evaluated for the DevOps team to understand how their application is being used and the state that it’s in.

To illustrate this point, imagine that you have a web app. As users navigate the app, log events are generated with each page request. Request data can provide meaningful insights, but the painstaking and tedious process of combing through massive log files on individual web servers would be too much for human beings to handle productively. Instead, these log events should be consumed by a log analyzer that centralizes all log data for all instances of the application. This enables human beings to digest the log data more efficiently and completely, allowing team members to readily evaluate the overall health of the application at any given time.

Glancing at individual requests on a single web server may not provide much insight into how the application as a whole is performing. But when thousands of requests are aggregated and utilized to create visualizations, you get a much clearer picture for evaluating the state of the application. For example, are a significant number of requests resulting in 404s? Are requests to pages that have historically responded in a reasonable time frame experiencing latency? Centralized log collection and analysis allow you to answer these questions.
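
The hypothetical Python sketch below illustrates the kind of question that becomes easy to answer once request entries from every server are aggregated in one place: what share of requests return 404, and what is the median latency for a given path? The field names and sample values are invented for illustration.

    from statistics import median

    # Request entries aggregated from every web server (hypothetical sample).
    requests = [
        {"server": "web-1", "path": "/checkout", "status": 200, "latency_ms": 120},
        {"server": "web-2", "path": "/checkout", "status": 200, "latency_ms": 480},
        {"server": "web-2", "path": "/old-page", "status": 404, "latency_ms": 15},
    ]

    not_found_rate = sum(r["status"] == 404 for r in requests) / len(requests)
    checkout_latency = median(r["latency_ms"] for r in requests if r["path"] == "/checkout")

    print(f"404 rate: {not_found_rate:.1%}, median /checkout latency: {checkout_latency} ms")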

In addition, it’s important to know that the analysis of log events isn’t just useful for responding to incidents that are detrimental to the health of the application. It can also help organizations keep tabs on how customers are interacting with their applications. For example, you can track which sources refer the most users and which browsers and devices are used most frequently. This information can help organizations fine-tune their applications to provide end users with the greatest value and the best user experience moving forward. It is much easier to gather this information when log data is contextualized through centralized log collection and intuitive visualizations, and the easiest way to do this is to use log analysis tools such as the one provided by Sumo Logic.

Improved root cause analysis

The increased visibility provided by log analyzers allows DevOps folks to get to the root cause of application problems in the shortest time frame possible.

In the context of application troubleshooting, root cause analysis refers to the process of identifying the central cause of an application issue during incident response. When dealing with application issues of any complexity, log files are almost always a focal point. But, as is often the case, raw logs also contain a plethora of information that has no relevance to the issue at hand. This sort of information (or noise) in log files can make it difficult to isolate information related to a particular incident.

In the realm of root cause analysis, log analyzers provide critical tooling designed to empower development and operations personnel to sift through the noise and dig into the relevant data. This includes:

  • Alerts notify the correct staff of an issue at the earliest possible moment in time. In addition to leading to a faster resolution simply by starting the process of analysis sooner, alerting often helps incident response personnel connect the dots between the problem and its cause by providing an exact time frame for when the issue surfaced.

  • Visualizations represent log entries in a manner that provides context for the data being collected. In the process of root cause analysis, it is not uncommon for an alarming trend to accompany the incident. Visualizations that depict such trends can prove extremely useful in helping staff develop hypotheses that bring them closer to identifying the root cause of the problem.

  • Search and filter functionality for centralized log data helps reduce the time it takes to isolate instances of a particular incident and begin deciphering its underlying cause (a minimal sketch follows this list).
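
The Python sketch below illustrates that last point with a hypothetical set of centralized entries, narrowing them by service, severity, and a free-text term tied to the incident.

    # Hypothetical centralized entries; field names are invented for illustration.
    entries = [
        {"service": "checkout", "severity": "error", "message": "payment gateway timeout"},
        {"service": "checkout", "severity": "info", "message": "order placed"},
        {"service": "search", "severity": "error", "message": "index rebuild failed"},
    ]

    def search(records, service=None, severity=None, text=None):
        # Keep only records that match every filter that was supplied.
        for record in records:
            if service and record["service"] != service:
                continue
            if severity and record["severity"] != severity:
                continue
            if text and text not in record["message"]:
                continue
            yield record

    for hit in search(entries, service="checkout", severity="error", text="timeout"):
        print(hit["message"])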

Log data is big data

The single biggest data set that IT can use for monitoring, planning, and optimization is log data. After all, logs are what the IT infrastructure generates while it is going about its business. Log data is generally the most detailed data available for analyzing the state of business systems, whether for operations, application management, or security. Best of all, log data is generated whether or not it is being collected. To use it, however, some non-trivial additional infrastructure has to be put in place, and first-generation log management tools ran into problems scaling to the required amount of data even before the data explosion of the last several years took off.

Log data does not fall into the convenient schemas required by relational databases. Log data is, at its core, unstructured, or semi-structured, leading to a deafening cacophony of formats; the sheer variety in which logs are being generated presents a major problem in how they are analyzed. The emergence of Big Data has not only been driven by the increasing amount of unstructured data to be processed in near real-time, but also by the availability of new toolsets to deal with these challenges.

Classic relational data management solutions simply are not built for this data, as every legacy vendor in the SIEM and log management category has painfully experienced. Web-scale properties such as Google, Yahoo, Amazon, LinkedIn, Facebook and many others were the first to face the challenges embodied in the 3Vs of big data (volume, velocity and variety). At the same time, some of these companies decided to turn what they learned building large-scale infrastructure to run their own businesses into strategic product assets. The need to solve planetary-scale problems led to the invention of Big Data tools such as Hadoop, Cassandra, HBase, and Hive. Today it is possible to combine offerings such as Amazon AWS with these Big Data tools to build platforms that address the challenges and opportunities of Big Data head-on, without requiring a broader IT footprint.

Sumo Logic aggregates and analyzes log files from the cloud

With Sumo Logic's cloud-native platform, organizations and DevOps teams can aggregate and centralize event logs from applications and their infrastructure components throughout private, public and hybrid cloud environments. With our robust log analytics capabilities powered by artificial intelligence, organizations can turn their machine data into actionable insights that drive security, business, and operational performance.
