For a long time, flexibility wasn’t something anyone considered important when it came to logs. Sysadmins and support staff sat in a data center, or even in our own offices, and took the hard route to see their logs: remoting into servers and watching log files with tools like grep, sed, and tail.
This was easy enough when an application was on a single server, perhaps on a VM with a couple of others on a physical server in a closet or a nearby colocation facility.
But these days, we are deploying applications and containers with application code and data across multiple availability zones, sometimes across continents, in order to ensure our users’ needs are being met. This means new and more flexible logging solutions are needed.
Beyond the immediate need of dealing with modern cloud or on-prem infrastructure, more and more people within our organizations need access to the information in the logs. CTOs are looking for signals of infrastructure stability and sustainability. Coders want to know that memory leaks are being handled and that code isn’t running rampant and stealing server resources. And of course there are DevOps teams working hard to ensure the whole thing doesn’t fall over.
Let’s be clear, logging is not a new problem in tech. Logs have existed since people were punching cards and putting them into machines running on vacuum tubes.
More than ever, a log management solution that meets the needs of all these levels is necessary: something high-level enough for management, but capable of delivering fine-grained information for the people keeping the show running.
In the past, very few solutions could meet the flexibility needs of modern tech architecture. Many focus only on the cloud or only on on-prem, making anything with a hybrid setup impossible. Many come from a single organization, meaning there is one controller of updates, features, bug fixes, and all other changes. Still others offer the same service with different extras, but flexibility remains an issue.
Another major issue is pricing. Some solutions’ costs can balloon quickly as logs grow or when there is a spike in the amount of information being brought into the logging solution. Humio mitigates that with chunk pricing and site-license pricing for larger, on-prem installations. We try to take the guesswork out of the cost.
Beyond pricing, the flexibility to move with your needs has to come with stability. There is no need to sacrifice stability or security for cost effectiveness; that is a trade-off no one should have to make. Because Humio was built with logging as its only focus, these are areas where it can deliver over other options.
At Humio, we want to make it possible for anyone from any part of an organization to be able to see exactly what part of the logs directly relates to them. We want to make log management flexible enough to fit in a start-up with just a few folks working to make the next big thing, or a large corporation with applications, software, and hardware spread across the world and serving millions each day.
Humio’s query language is simple. Inspired by the Unix command-line pipeline ‘grep | sort | uniq’, it’s easy to get started with. There are no complex JSON structures to fight, making it easy to iterate and try new combinations. And since results come back fast, Humio invites you to experiment: start with simple filtering queries, and over time learn to build more complex analytics.
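For readers unfamiliar with that pipeline, here is a minimal shell sketch of the grep | sort | uniq workflow the query language is modeled on. The sample log file and its contents are made up for illustration:

```shell
# Create a tiny sample log file (hypothetical data, for illustration only).
printf 'ERROR timeout\nINFO ok\nERROR timeout\nERROR disk full\n' > app.log

# Filter to error lines, then count distinct messages, most frequent first.
grep "ERROR" app.log | sort | uniq -c | sort -rn
```

Humio queries follow the same filter-then-aggregate shape, chaining each step with pipes.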
Humio has proper real-time queries. You can set up live streaming queries that are constantly up to date, even for large date/time ranges. To feed a dashboard, for example, most other tools need to re-run their queries all the time. In Humio, a query can be set up as a ‘live query’, which essentially means it takes no time to compute: it’s always precomputed.
On the backend side, things are also quite different. While this may not seem important as a user, it makes a big difference if you’re operating your own log aggregation service.
Storage size is much smaller. When you load data into Humio, it is compressed 5–10x; loading data into Elastic can often inflate your data by the same factor. Ingest is not a CPU hog either: Humio does not spend much CPU on ingesting data. This is all part of being a flexible log management solution built for that very purpose.
We believe our flexibility starts with the ability to deploy Humio anywhere. Whether you are exclusively using cloud architecture, running 100% on-premises or in a colocation facility, or using a combination of both, we work to give you access to the logs on your platform.
Our philosophy is any system, any framework, any format. We support open-source projects such as Logstash and Beats to simplify sending data to Humio. From there, the real work begins: analyzing the logs and taking the actions necessary to maintain an application and deliver what users need.
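As a concrete sketch, shipping logs with Filebeat (one of the Beats) typically takes only a few lines of configuration. The snippet below is a hypothetical example: the log path, hostname, and token are placeholders, and the exact ingest endpoint to use comes from Humio’s documentation.

```yaml
filebeat.inputs:
  - type: log
    paths:
      - /var/log/myapp/*.log   # placeholder: path to your application's logs

output.elasticsearch:
  # Humio exposes an Elasticsearch-compatible bulk ingest endpoint, so
  # Filebeat's standard Elasticsearch output can ship data to it.
  hosts: ["https://my-humio-host:443/api/v1/ingest/elastic-bulk"]  # placeholder host
  password: "YOUR-INGEST-TOKEN"  # placeholder: your Humio ingest token
```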
Using modern techniques, Humio integrates, ingests, and delivers a thorough, flexible log management and analysis tool. The best way to get a sense of what we mean by ‘flexible logging’ is to try Humio in your own environment. Because we appreciate that committing to a log management solution is a big decision, we’ve made Humio available as a free trial, which you can run on-prem or in the cloud, whichever you prefer. You can get started with a single click here.
When you’re ready, our engineers are standing by to get you started. In a future post, we will look at how to add Humio to an existing infrastructure setup using Kubernetes. Stay tuned!