July 2, 2020 By Scott Fitzpatrick

Improving Application Quality through Log Analysis

Throughout the history of software development, one statement has remained true: no application is perfect. Because of this, development organizations must use every resource at their disposal to limit the impact that application problems have on the end-user.

Server log files represent an important resource that should be consulted when troubleshooting any application issue. When utilized properly, these log files can prove invaluable – providing insight that can lead to the prompt and permanent resolution of the problem at hand. Below, I will walk through the various log files and events that can be analyzed to improve application quality. I will also detail how a log management platform can simplify log analysis. This, in turn, enables development teams to identify the root cause of a diverse range of application issues and promotes a culture of continuous improvement whereby the quality of the application rises over time.

Web Server Logging

The server log files (and their locations) available for troubleshooting web application problems depend on the HTTP server on which the application runs. That said, industry standards drive both the log format and the information being logged.

Many web servers, including Nginx and Apache (two of the more popular HTTP servers available), produce both access and error logs that store information that can prove useful in issue identification and resolution.

  • Access Logs - With each request made to a particular web server, a log event is recorded in the access log with information regarding that request. The information recorded (and even the format in which it is recorded) is often very similar, or even identical, regardless of the HTTP server being leveraged. Each log event will typically contain properties such as the IP address of the client making the request, the time at which the request was made, the request type, the resource being requested, and the HTTP status code returned, among others. With minor changes to the logging configuration, further properties such as response time can be recorded as well (see the sample entries and configuration after this list).
  • Error Logs - Another common server log file is the error log. This is where the server records information regarding any errors encountered while processing requests. Depending upon the configuration being leveraged, this log file will record an indicator of issue severity, the date and time of the request, a message providing some detail as to why the failure occurred, the IP address of the client making the request, and some additional properties. This log file is particularly valuable when attempting to determine the root cause of the error at hand.
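To make this concrete, here is what such entries often look like. Both lines below are hypothetical examples rather than output from a real server; the access log entry follows the widely used combined log format, and the error log entry mimics Apache's classic error log style:

```
203.0.113.42 - - [02/Jul/2020:13:55:36 -0400] "GET /reports/summary HTTP/1.1" 200 5124 "https://example.com/dashboard" "Mozilla/5.0"

[Thu Jul 02 13:55:40 2020] [error] [client 203.0.113.42] File does not exist: /var/www/html/old-report
```

As for the minor configuration change mentioned above, Nginx (for example) exposes a built-in $request_time variable that can be appended to a custom log format:

```
# nginx.conf - the combined format, extended with the response time in seconds
log_format timed '$remote_addr - $remote_user [$time_local] "$request" '
                 '$status $body_bytes_sent "$http_referer" '
                 '"$http_user_agent" $request_time';
access_log /var/log/nginx/access.log timed;
```

Apache offers an equivalent via the %D placeholder in its LogFormat directive.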

Using Log Files to Gain Insight Into Application Issues

The quality of an application is, in large part, measured by its ability to perform the functions it was designed to perform – and to do so in a reasonably efficient manner. Measuring and maintaining a high level of application quality therefore requires a commitment by the DevOps team to identify and resolve application issues as they are introduced into the codebase, and to identify opportunities where the application can be improved (think performance). This is where log analysis can help.

Consider the following scenarios that depict how effective log analysis can help identify opportunities for a development team to bolster application quality.

Identifying Sources of Application Slowness

Request latency can be very detrimental to application quality. And the consequences of latency issues can be far-reaching. For instance, application slowness can quickly ruin the user experience, frustrating end-users and (in some cases) driving them to a competitor's product.

Through the analysis of server access logs, such application slowness can be detected, providing development organizations with the ability to identify opportunities for improving application performance. Imagine for a moment that requests to load a specific resource are taking five times as long as the average request elsewhere in the application. This may not be enough to trigger an influx of support tickets, as the application is still doing what it’s supposed to do. But it may be enough to drive end-users to other products that provide the same functionality in a more efficient manner.

Analysis of request times within an application’s access logs can ensure that the development team is made aware that this issue exists. And once made aware, they can work to discover the root cause of the slowness – whether it be a long-running SQL query or inefficient UI design – and provide a permanent fix.
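What might that analysis look like in practice? The short Python sketch below is a minimal illustration, assuming access log lines that end with the response time in seconds (as in the $request_time configuration shown earlier); the log path and regular expression are assumptions that would need adjusting for any other format. It computes the average response time per requested path and flags any path averaging five times slower than the overall mean:

```python
import re
from collections import defaultdict

# Matches the request line and the trailing response time produced by the
# "timed" format shown earlier: ... "GET /path HTTP/1.1" 200 5124 ... 0.532
LINE_RE = re.compile(r'"(?:GET|POST|PUT|DELETE|HEAD) (\S+) [^"]*".* (\d+\.\d+)$')

totals = defaultdict(lambda: [0.0, 0])  # path -> [total_seconds, request_count]

with open("/var/log/nginx/access.log") as log:  # hypothetical log location
    for line in log:
        match = LINE_RE.search(line.rstrip())
        if not match:
            continue  # skip lines that don't fit the expected format
        path, seconds = match.group(1), float(match.group(2))
        totals[path][0] += seconds
        totals[path][1] += 1

if totals:
    overall_avg = (sum(t for t, _ in totals.values())
                   / sum(n for _, n in totals.values()))
    # Flag any endpoint averaging five times slower than the overall mean
    for path, (total, count) in sorted(totals.items()):
        avg = total / count
        if avg > 5 * overall_avg:
            print(f"SLOW: {path} averages {avg:.3f}s over {count} requests "
                  f"(overall mean: {overall_avg:.3f}s)")
```

In a real deployment this aggregation would typically live in a log management platform rather than an ad-hoc script, but the principle is the same.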

Detecting Errors Within a Web Application

The only threat to application quality more obvious than latency is an outright error – one that prevents the application from performing the function for which it was designed. As with latency issues, log analysis can help development teams find these problems quickly. In fact, with the use of log management software (more on this later), log analysis may identify these problems before end-users even have the opportunity to report them.

As we know, both error logs and access logs can indicate quality issues within an application. For instance, persistent, recurring access log entries with a 404 HTTP status code may indicate resources that used to exist but are no longer available. As a result, there may now be outdated links within the application that require removal.
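As a rough sketch of this kind of check (again, the log path and combined-format regular expression are assumptions rather than a one-size-fits-all recipe), a few lines of Python can tally which paths are repeatedly returning 404s:

```python
import re
from collections import Counter

# Pull the requested path and the status code out of combined-format entries
LINE_RE = re.compile(r'"\S+ (\S+) [^"]*" (\d{3}) ')

missing = Counter()
with open("/var/log/nginx/access.log") as log:  # hypothetical log location
    for line in log:
        match = LINE_RE.search(line)
        if match and match.group(2) == "404":
            missing[match.group(1)] += 1

# Paths that 404 over and over are candidates for broken or outdated links
for path, count in missing.most_common(10):
    print(f"{count:6d}  {path}")
```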

Additionally, error log events are recorded with what is known as a “log level” to indicate the severity of the event being recorded. Repetitive reporting of events with a critical level of severity is a good indicator of a problem that needs to be addressed immediately by the development staff.

If a development organization neglects to keep tabs on its server logs, it will likely miss key indicators of issues within its application. At the same time, few organizations can afford to task personnel with manually scanning and searching log files in hopes of identifying potential threats to application quality – that would be wildly inefficient. Instead, an organization should leverage log management software to stay on top of application quality in an efficient manner.

Simplifying the Process With a Log Management Platform

Simply put, tooling for log management and analysis (such as that from Sumo Logic) greatly simplifies the process of using log files to identify and resolve the problems within a system that are detrimental to application quality.

Sumo Logic’s platform enables DevOps teams to analyze their logs with the use of filtering and visualizations that provide context to the data. This functionality helps organizations spot trends that indicate application performance issues and quickly pinpoint the source of errors within an application.
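For instance, a search along the following lines could surface the recurring 404s discussed earlier without any ad-hoc scripting. (The source category and parse expression here are illustrative assumptions – they would vary with how a given deployment ingests its access logs.)

```
_sourceCategory=prod/nginx/access
| parse "\"* * *\" * " as method, path, protocol, status_code
| where status_code = "404"
| count by path
| sort by _count
```

A dashboard panel built on a query like this turns the 404 check from a one-off investigation into something the team sees continuously.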

When a development team can reduce the time it takes to discover an issue within an application, they also reduce the time it takes to perform root cause analysis and provide a permanent resolution – keeping their fingers on the pulse of their application's quality at all times.

Scott Fitzpatrick

Scott Fitzpatrick is a Fixate IO Contributor and has nearly 8 years of experience in software development. He has worked with many languages and frameworks, including Java, ColdFusion, HTML/CSS, JavaScript and SQL.
