Seeing the Whole Elephant: A Lesson in Adopting API Analytics
You may be familiar with the ancient parable of the five blind people who each grab a different part of an elephant’s body. Each person is convinced of what the elephant is like based on the part they’ve grabbed. To the person holding a leg, the elephant is like a tree. To the person with a giant floppy ear in hand, it is like an enormous leaf. And so on.
Though it is regularly referenced in business articles and presentations, the parable remains resonant: at most companies, focusing on isolated details rather than the big picture is a real concern.
But consider an inversion of this theme: five people who can all see, all staring at an elephant. “Yeah, we see it,” they’d say. “It’s an elephant. A giant elephant. Now what?”
This inversion represents a different kind of risk: seeing a “big picture” event — like a newly deployed piece of technology — and not being sure what to do with it.
Indeed, in my work on Google Cloud’s Apigee team, this is a situation I’ve seen with some large enterprises embracing application programming interface (API) management and particularly API analytics.
The people within these enterprises can see the analytics “elephant”: the dashboards, the tools. But they aren’t sure what to do with it. It’s easy to focus on certain metrics and trends to the exclusion of others, risking becoming like the blind people grabbing the elephant. It’s also easy to see topline trends and ignore the more meaningful signals underneath, or simply to tune out the analytics functions of the management software entirely.
The challenge is that even when a business understands why API analytics are important (APIs capture the transactions among back-end systems, developers, and end users), that awareness alone doesn’t lead to improved outcomes.
Suppose a business adopts an API analytics solution. That solution may not accomplish anything simply by virtue of being rolled out. Tracking for tracking’s sake won’t produce helpful insights. While unexpected patterns may be discovered serendipitously from time to time, analytics should generally be geared toward answering known questions that are urgent to the business.
In these situations, when a company is focused not merely on adopting API analytics but on benefiting from them, we typically advise looking at three categories:
- information relating to the incoming calls from users
- information relating to the API platform itself
- information relating to the systems that sit behind the API platform, i.e., the back-end systems
Within each of these areas are questions that may be urgent to the business; the short sketches after each list below illustrate one rough way such questions might be explored once call data is in hand.
Questions relating to incoming calls from users include:
- What is the overall traffic for a specific set of APIs? Is it what was expected, or higher or lower?
- Are the users coming from the expected geographic regions?
- Is there anything about the time of day of these calls that might reveal something about how users engage with the APIs and underlying digital assets?
- If financial or purchasing transactions occur through the API calls, are the amounts what was expected?
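As a rough illustration of how questions like these might be explored, here is a minimal Python sketch. It assumes call data has already been exported to a CSV file named api_calls.csv with hypothetical columns timestamp, country, and amount; the file name and columns are illustrative, not tied to Apigee or any particular analytics product.

```python
# Minimal sketch: summarize incoming API calls by region, hour of day,
# and transaction volume from a hypothetical CSV export.
import csv
from collections import Counter
from datetime import datetime

calls_by_region = Counter()
calls_by_hour = Counter()
total_amount = 0.0

with open("api_calls.csv", newline="") as f:
    for row in csv.DictReader(f):
        calls_by_region[row["country"]] += 1
        hour = datetime.fromisoformat(row["timestamp"]).hour
        calls_by_hour[hour] += 1
        total_amount += float(row.get("amount") or 0)  # blank when no purchase

print("Top regions:", calls_by_region.most_common(5))
print("Busiest hours (0-23):", calls_by_hour.most_common(5))
print(f"Total transaction volume: {total_amount:,.2f}")
```

Comparing summaries like these against what was forecast (expected regions, expected traffic levels, expected purchase volume) is what turns raw counts into answers.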
Questions relating to the API platform itself include:
- Are there errors or failed attempts that may be increasing customer frustration?
- Is there a particular API not getting used? Why is that? Is the API exposing data or functionality that is less valuable to developers than expected? Is it because of bad or complicated design?
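A similarly small sketch could surface error rates and unused APIs, again assuming the same hypothetical export, this time with api_proxy and status_code columns, plus a hand-maintained list of the APIs the team has published.

```python
# Minimal sketch: error rate per API and published-but-unused APIs,
# based on hypothetical column names.
import csv
from collections import defaultdict

published_apis = {"orders", "inventory", "payments", "loyalty"}  # illustrative

requests = defaultdict(int)
errors = defaultdict(int)

with open("api_calls.csv", newline="") as f:
    for row in csv.DictReader(f):
        api = row["api_proxy"]
        requests[api] += 1
        if int(row["status_code"]) >= 400:
            errors[api] += 1

for api in sorted(requests, key=requests.get, reverse=True):
    print(f"{api}: {requests[api]} calls, {errors[api] / requests[api]:.1%} errors")

unused = published_apis - set(requests)
if unused:
    print("Published but unused:", ", ".join(sorted(unused)))
```

A rising error rate points toward frustrated users; an API with no traffic at all points back to the design, audience, and value questions above.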
Questions relating to the back-end systems include:
- How long are the back-end systems taking to respond? Are latency problems spiking periodically or creeping up gradually over time? How does average latency compare to any service level agreements (SLAs) offered to users of the API?
- What is the overall response time? Is it consistent with what users expect from modern apps?
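For the back end, the same hypothetical export could drive a quick latency check against an SLA. The 500 ms target and the backend_latency_ms column below are both made up for illustration, not drawn from any real contract.

```python
# Minimal sketch: back-end latency percentiles vs. an illustrative SLA,
# plus a per-day average to spot latency creeping up over time.
import csv
import statistics
from collections import defaultdict
from datetime import datetime

SLA_P95_MS = 500  # illustrative target

latencies = []
daily = defaultdict(list)

with open("api_calls.csv", newline="") as f:
    for row in csv.DictReader(f):
        ms = float(row["backend_latency_ms"])
        latencies.append(ms)
        daily[datetime.fromisoformat(row["timestamp"]).date()].append(ms)

cuts = statistics.quantiles(latencies, n=100)
p50, p95, p99 = cuts[49], cuts[94], cuts[98]
print(f"Overall latency: p50={p50:.0f}ms  p95={p95:.0f}ms  p99={p99:.0f}ms")
if p95 > SLA_P95_MS:
    print(f"Warning: p95 exceeds the {SLA_P95_MS}ms target")

for day in sorted(daily):  # gradual creep shows up in this trend
    print(day, f"avg={statistics.mean(daily[day]):.0f}ms")
```

Percentiles matter here because a healthy average can hide periodic spikes that individual users still feel.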
Topics like these are almost always urgent, so focusing an analytics effort on them helps the analytics investment gain traction. These predictably and reliably important topics allow an API analytics solution to deliver immediate impact, with other, less obvious questions and insights developing more organically over time. They help reveal what matters for the business and for generating revenue, rather than just collecting analytics data and hoping, without vision or agenda, that it will somehow be useful.
Devising questions is only a start. The next step is to drill into these questions and examine the technical, marketing, sales, and usability dimensions.
For example, if most of the API calls are coming in late at night, one might assume the customers are at home, or at least not at work. Does that affect how the APIs are marketed? Might this information influence the design of the UI, making it softer and less business-like?
Or perhaps a particular API is not getting used. Is the API too complicated (i.e., a technical dimension), or has it not been shown to the right audience (i.e., marketing)? Is it priced too high (i.e., sales)? Is something in the app design causing a flow to be abandoned (i.e., usability or UX)?
Answering one or two of these questions, let alone all of them, could significantly impact a business’s ability to engage with developers and participate in the digital economy. Once a question is answered and changes are made, an enterprise should go through another analytics cycle and either make more changes or move to the next question. Rinse and repeat.
In other words, by starting with the right questions relevant to the business, an enterprise might just be able to address that elephant in the room when it comes to API analytics — and indeed, the deployment of many new technology tools in general.
[Looking to learn more about API monitoring and analytics? Get your copy of our recent eBook, Inside the API Product Mindset: Optimizing API Programs with Monitoring and Analytics.]