Application Log Management Best Practices for Real-Time Analytics

Ranjith Kumar
4 min read · Dec 7, 2017


Why is Logging Significant?

Application logging is something that is overlooked most of the time. Logs come in handy not just for debugging, but for analysing customer and application behaviour and for detecting security breaches.

Logentries offers an easy, real-time approach to monitoring and accessing valuable application usage data, with the option to drill into specific events or actions to uncover performance issues and usage trends such as popular features, UI activity, and user engagement.

What to Log?

A log entry should answer the following questions:
1. Who
2. When
3. Where
4. What
5. Result

We may log events like:

  • Application start, stop, and restart
  • Security-related events such as unauthorized URL access attempts and user logins
  • Executed SQL statements
  • User HTTP requests
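
As a quick illustration, here is a minimal sketch of one such event in Laravel (the event name and context keys are illustrative, not from the original stack), answering who, where, what, and result in a single entry; "when" comes from the logger's timestamp:

Log::info('Order submitted', [   // what
    'user_id' => $user->id,      // who
    'source'  => 'web/checkout', // where
    'result'  => 'success',      // result
]);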

Best Practices:

  • Keep log entries small and the logging frequency low. If an application begins to log too much, its performance can degrade severely.
  • Collect events from everything, everywhere. The more data you capture, the more visibility you have. For example, collect these when you can:
      • Application logs
      • Database logs
      • Network logs
      • Configuration files
      • Performance data (iostat, vmstat, ps, etc.)
      • Anything that has a time component
  • Use separate log files per level; for example, for Log::info we can use a file called laravel_info.log, and for Log::error a file called laravel_error.log, as sketched below.
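
A minimal sketch of this per-level split, assuming a Laravel 5.x application where Monolog can still be customized via configureMonologUsing in bootstrap/app.php:

use Monolog\Handler\FilterHandler;
use Monolog\Handler\StreamHandler;
use Monolog\Logger;

$app->configureMonologUsing(function (Logger $monolog) {
    // Send INFO records (and only INFO) to laravel_info.log.
    $monolog->pushHandler(new FilterHandler(
        new StreamHandler(storage_path('logs/laravel_info.log')),
        Logger::INFO, Logger::INFO
    ));
    // Send ERROR and above to laravel_error.log.
    $monolog->pushHandler(new FilterHandler(
        new StreamHandler(storage_path('logs/laravel_error.log')),
        Logger::ERROR, Logger::EMERGENCY
    ));
});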

  • Messages should be logged at the appropriate level and with appropriate context:
Log::emergency($error);
Log::alert($error);
Log::critical($error);
Log::error($error);
Log::warning($error);
Log::notice($error);
Log::info($error);
Log::debug($error);
Example: Log::info('User failed to login.', ['id' => $user->id]);

Let us take a look at the logs below and identify areas for improvement. The objective is to convey all vital information with as little noise as possible.

  • Usage of Unique IDs: “xxxxx- uat_server [2017-06-06 12:25:35] lumen.INFO: Status to change => {"yyyyy_id":"021"}” Always log a unique ID, either a transaction ID or a user ID, for any context. For example, logging the user ID along with an event such as a status change is more informative. Supplementing the above log with a user ID will help in analysing user behaviour, something like “xxxxx- uat_server [2017-06-06 12:25:35] lumen.INFO: zzzz_id:564732 Status to change => {"yyyyy_id":"021"}”
  • Meaningful Context: xxxxx uat_server [2017-06-06 12:25:35] lumen.INFO: Status changing => {"yyyyy_id":706} Keep it short and simple. “Status changing” is confusing; “Status changed”, logged after the process completes, is more understandable.
  • Source of the Event: Include the source of the log event, such as the application name, functionality (DIY, CIS), platform (Web, Mobile), and environment (hostname). The log above doesn’t reveal where the status change is happening.
  • Minimal Multi-line Events: Multi-line events generate a lot of segments, which can affect indexing and search speed, as well as disk compression. Consider breaking multi-line events into separate events.
  • Log to local files: If you log to a local file, it provides a local buffer and you aren't blocked if the network goes down.
  • Log rotation policies: Logs can take up a lot of space. Maybe compliance regulations require you to keep years of archival storage, but you don't want to fill up your file system on your production machines. So, set up good rotation strategies and decide whether to destroy or back up your logs.
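
As a starting point for rotation, here is a minimal logrotate sketch, assuming a Linux host and a Laravel-style log directory; the path and retention values are illustrative:

/var/www/app/storage/logs/*.log {
    daily
    # Keep two weeks of rotated logs on disk.
    rotate 14
    compress
    delaycompress
    missingok
    notifempty
    # Truncate in place so the application keeps writing without a restart.
    copytruncate
}
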
Technology Stack:
Out of various log management solutions like Splunk, ELK, Loggly, Logentries, Logz.io, Logmatic, FluentD, and Sentry, we have chosen a customized stack, taking the best out of the different components.

1. Separate log files generated by each application are staged locally on the respective instances.

2. FluentD is configured to collect the log files (delta) at specified time intervals and push the logs to S3 as .txt files.

3. Logstash collects each S3 file as soon as it is archived and parses the data for Elasticsearch indexing.

4. Elasticsearch indexes the parsed documents against the defined mappings.

5. Kibana visualizes the application monitoring dashboards, including user behaviour analysis.
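
A minimal sketch of step 2, assuming FluentD v1 with the fluent-plugin-s3 output plugin installed (the bucket name, paths, and tag below are illustrative):

<source>
  @type tail
  path /var/www/app/storage/logs/laravel_info.log
  pos_file /var/log/td-agent/laravel_info.pos
  tag app.laravel.info
  <parse>
    @type none
  </parse>
</source>

<match app.laravel.*>
  @type s3
  # Illustrative bucket and region.
  s3_bucket my-app-logs
  s3_region us-east-1
  path logs/
  # Upload plain .txt objects instead of the default gzip.
  store_as text
  <buffer time>
    @type file
    path /var/log/td-agent/s3-buffer
    # Flush a chunk to S3 every 5 minutes.
    timekey 300
  </buffer>
</match>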

Why FluentD with ELK!?

FluentD is capable of collecting logs from different sources and pushing them to different targets, which gives us the luxury of leveraging the logs to the maximum for analysis without impacting the transactional database and applications.

How to Leverage Log Data Without Impacting Application Performance?

  • Log directly from the user’s browser to track user activity and to perform Real User Monitoring (RUM).
  • Monitor and analyze user activity and interactions with real-time data visualizations. For example, get a breakdown of feature usage by user.
  • Set up real-time alerting and notifications based on usage levels, response times, and other key SLAs.
  • Easily track marketing campaign activity with log-based pixel tracking.

