Manage application log files with the right tools and guidelines

Monitoring the crucial data in log files is a costly and complicated job for IT operations teams. Learn to use log management and aggregation tools to interpret trends without waste.

Application log files, while technical in nature, require a bit of creative interpretation to navigate successfully. They are never as clear as you want them to be.

All applications and OSes generate logs -- so many, in fact, that it's easy to drown in a sea of them. And application log files can be too cryptic to help operations teams perform maintenance or support tasks. For example, a general application fault alert tells an IT administrator that something is broken, but that vague notification is a troubleshooting nightmare. That's why every distinct piece of an application produces its own specific log data. The challenge, then, is for the troubleshooter to put it all together and form a complete picture of the application's internal health.
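As a minimal sketch of what putting it all together can look like, the snippet below merges several hypothetical per-component log files into a single timeline sorted by timestamp. The file names and the line format -- an ISO 8601 timestamp, a severity keyword and a message -- are assumptions for illustration, not the output of any particular product.

```python
from datetime import datetime
from pathlib import Path

# Hypothetical per-component logs; each line is assumed to start with an
# ISO 8601 timestamp, a severity keyword and a message, e.g.
# "2024-05-01T12:00:03 ERROR connection pool exhausted"
LOG_FILES = ["web.log", "api.log", "db.log"]

def parse_line(component, line):
    timestamp, level, message = line.rstrip("\n").split(" ", 2)
    return datetime.fromisoformat(timestamp), component, level, message

events = []
for name in LOG_FILES:
    path = Path(name)
    if not path.exists():  # skip components that logged nothing
        continue
    with path.open() as handle:
        for line in handle:
            if line.strip():
                events.append(parse_line(path.stem, line))

# One chronological view across all components makes it easier to see
# which piece of the application failed first.
for timestamp, component, level, message in sorted(events):
    print(f"{timestamp.isoformat()} [{component}] {level}: {message}")
```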

Determine the nature of the log

A common misstep with log management is to assume every error or warning is a catastrophe that will crash the system. Application log files can be any combination of status, general information and issues. And even the issues aren't necessarily showstoppers. The modern application consists of hundreds of subprograms, which can -- and will -- fail.
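One quick way to see how much of a log is routine status versus genuine trouble is to break entries down by severity before reacting to any single error. The sketch below assumes log lines carry a conventional severity keyword such as INFO, WARNING or ERROR; the sample lines are invented for illustration.

```python
import re
from collections import Counter

# Assumes each line carries a conventional severity keyword.
SEVERITY = re.compile(r"\b(DEBUG|INFO|WARNING|ERROR|CRITICAL)\b")

sample_lines = [
    "2024-05-01T12:00:01 INFO worker-3 restarted after heartbeat timeout",
    "2024-05-01T12:00:02 WARNING cache miss rate above 40%",
    "2024-05-01T12:00:03 ERROR connection pool exhausted, retrying",
    "2024-05-01T12:00:04 INFO connection pool recovered",
]

counts = Counter()
for line in sample_lines:
    match = SEVERITY.search(line)
    counts[match.group(1) if match else "UNKNOWN"] += 1

# A quick breakdown shows how much of the log is routine status versus
# genuine trouble before anyone treats a single ERROR as a catastrophe.
for level, count in counts.most_common():
    print(f"{level:8} {count}")
```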

Most applications are able to restart failed elements without end users ever knowing. This self-healing ability keeps the modern application running. Admins must be able to tell when an event captured in an application log file is a mere quirk resolved by an application component restart and when it is a sign of a bigger issue.

Most of that insight comes from trending and historical data. While acute issues and crashes do strike without warning, application log files often provide indications of danger ahead: performance trends and red-flag alerts leading up to a major event are the early warning signs. The challenge is to determine what constitutes a major event versus a minor one. That judgment is guided by experience with those trends, as well as evaluation of the application environment as a whole -- not just the piece that failed. Trends are the key to early warning detection because patterns are almost always repeatable.
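A trend check can be as simple as bucketing errors into time windows and flagging windows in which the count keeps climbing. The sketch below does this per hour with made-up data; the window size and the three-windows-rising rule are arbitrary starting points to tune against your own history.

```python
from collections import Counter
from datetime import datetime

# Hypothetical (timestamp, level) pairs pulled from an application log.
events = [
    (datetime(2024, 5, 1, 9, 15), "ERROR"),
    (datetime(2024, 5, 1, 10, 5), "ERROR"),
    (datetime(2024, 5, 1, 10, 40), "ERROR"),
    (datetime(2024, 5, 1, 11, 10), "ERROR"),
    (datetime(2024, 5, 1, 11, 20), "ERROR"),
    (datetime(2024, 5, 1, 11, 50), "ERROR"),
]

# Count errors per hour-long bucket.
errors_per_hour = Counter(
    ts.replace(minute=0, second=0, microsecond=0)
    for ts, level in events
    if level == "ERROR"
)

# Flag a rising trend: three consecutive hours in which the error count
# keeps climbing is treated as an early warning rather than a quirk.
hours = sorted(errors_per_hour)
for earlier, middle, later in zip(hours, hours[1:], hours[2:]):
    if errors_per_hour[earlier] < errors_per_hour[middle] < errors_per_hour[later]:
        print(f"Rising error trend ending at {later:%Y-%m-%d %H:00} "
              f"({errors_per_hour[earlier]} -> {errors_per_hour[middle]} "
              f"-> {errors_per_hour[later]} errors/hour)")
```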

Log resource requirements

While the systems admin needs experience to combat application downtime, the system needs a log management or aggregation tool, such as Loggly or Splunk, that uses machine learning to shortcut that long learning curve -- especially because admins take their experience with them when they leave.

These tools look at every possible record and use machine learning functions to deduce trends and predict upcoming issues. They still need human involvement, but because they lock onto patterns quickly, they provide a cohesive picture of the application environment's status and activity. Log aggregation tools don't demand the same time or staff effort as manual analysis; they demand money.
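Whatever tool does the analysis, the prerequisite is that every component ships its records to one place. As a minimal sketch, the snippet below uses Python's standard logging module to forward records to a syslog-style collector; the collector address and the logger name are placeholders for whatever ingestion endpoint your aggregation tool exposes.

```python
import logging
import logging.handlers

# Placeholder address; point it at your aggregation tool's syslog-style
# ingestion endpoint instead of localhost.
COLLECTOR = ("localhost", 514)

handler = logging.handlers.SysLogHandler(address=COLLECTOR)
handler.setFormatter(
    logging.Formatter("%(asctime)s %(name)s %(levelname)s %(message)s")
)

logger = logging.getLogger("billing-service")  # hypothetical component name
logger.setLevel(logging.INFO)
logger.addHandler(handler)

# Every component logs through the same handler, so the aggregation tool
# sees one coherent stream instead of scattered local files.
logger.warning("retrying payment gateway call, attempt 2 of 5")
```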

Logging tools are generally purchased as cloud services on subscription models. Machine learning on big data -- and logs qualify as big data by sheer volume -- requires a lot of resources to be accurate and efficient. Unless your organization plans to perform log analysis 24/7, these processes are a poor use of on-premises resources and are better left to the log tool vendor.

How often should an organization analyze log data? Logs don't usually warrant continuous analysis -- unless your organization has unlimited compute and storage resources, or unlimited money. The right cadence is a balance between time and resources. Wait too long, and you won't detect an issue trend until it's too late. Annual or quarterly reports provide too little data, but weekly log analysis might be a drain to execute. Monthly or biweekly reports are often the sweet spot between cost and responsiveness.

Log data monitoring isn't the same as monitoring IT equipment or user experience. Admins must collect data points, but daily collection won't yield better log aggregation and analysis than more spaced-out collections. Data must be relevant but not so excessive that it becomes too expensive and resource-intensive to actually use.

Logs are valuable indicators of application behavioral patterns that cause outages and performance issues. They are road signs along your organization's IT infrastructure path that can prevent traveling in circles.

