The Future of APIs

Zdenek “Z” Nemec
Published in Good API · Dec 19, 2016


I don’t think we talk about the future of APIs nearly enough. I don’t remember a discussion about what the API future holds. Not a single one. But if we start to think about it, we must sense that APIs as we know them aren’t the endgame. They won’t stay with us forever. So let’s take a look at what might happen with APIs in the future.

The Turk

Our story starts in the year 1770 in the Kingdom of Hungary within the Habsburg Empire. There, Wolfgang von Kempelen is building a machine capable of playing the game of chess. His plan is to compete against the best chess players of the time.

With his machine finished, he impresses the court of Maria Theresa of Austria. Kempelen and his chess-playing automaton quickly become famous, defeating most of their opponents during demonstrations around Europe. The audiences include statesmen such as Napoleon Bonaparte and Benjamin Franklin.

The machine, seemingly operated by a torso and head dressed in Turkish robes, was in fact a mechanical illusion. It was controlled by a human operator hidden inside. The Turk was a very elaborate parlor trick, a hoax designed to make others think they were competing against a real machine. The secret was fully revealed only in 1850.

Ever since, we have used the term “Mechanical Turk” for a system that appears to be autonomous but in fact needs human aid to operate.

Aliens

The year is 1963, and American psychologist and computer scientist J.C.R. Licklider is writing the Memorandum for Members and Affiliates of the Intergalactic Computer Network.

Licklider is one of the most important figures in the history of computer science. He is a true visionary. He foresees modern graphical interfaces and contributes to the creation of ARPANET and the Internet.

In his memo, Licklider is asking:

“How do you get communications started among totally uncorrelated sapient beings?”

Imagine there is an immense network spanning multiple galaxies and connecting intelligent species that never met before. The question is: How would they communicate when they meet?

The answer is rather simple: They have to figure it out.

Similar to the 2016 movie Arrival, these beings have to probe each other, observing and noting down the reactions to establish a vocabulary. Later on, they can use this dictionary to have a meaningful conversation.

Turk’s End

We are now moving thirty-three years forward to the year 1996. IBM’s computer Deep Blue wins its first chess game against the world champion Garry Kasparov. But in the end, Kasparov defeats the machine in the six-game match, losing only game one.

IBM upgrades Deep Blue. One year later, the computer wins the six-game rematch 3½–2½, becoming the first machine to defeat a reigning world champion.

It took us some 227 years from von Kempelen’s original Turk to construct a real machine able to beat the best human players.

Second Rise of Turks

Only three years after Deep Blue, in the year 2000, Roy Fielding is publishing his dissertation on Architectural Styles and the Design of Network-based Software Architectures. This work will later become best known for introducing the REST architectural style. It gives us the blueprint for the emerging HTTP-based Web APIs.

The same year, Salesforce is releasing the first version of their Web API to automate the sales workflow. eBay joins Salesforce later, and the rest of the prominent Internet companies follow.


But there is something odd about these Web APIs. They seem to be machine-to-machine communication. But in reality, they aren’t.

We like to think there is one machine exposing an interface, and then another machine using it, but this isn’t entirely right.

In reality, a service publishes an interface; then a human writes a lot of documentation and shares it.

Another human has to come across the documentation and read it. With the knowledge gained, the person can program another machine to use the interface.

Essentially, we have an intermediary layer in machine-to-machine communication that involves us, humans. Hence, what seems from the outside to be purely a matter of machines is, in fact, a new Mechanical Turk.

Golden Age of Turks

Similar to the success of the World Wide Web, businesses soon realized the value of having a wholesale presence on the Internet. With strong business drivers behind APIs, we are witnessing a boom of Web APIs.


But with the exponential API adoption and soaring API economy, new problems arrive. And in most cases, these problems have to do with the humans hidden inside the API Turks.

The problem with API Turks

Every API that is around long enough has to deal with the following challenges:

- Synchronicity
- Versioning
- Scaling
- Discovery

Synchronicity

With the API Turk, we are building and sharing Licklider’s vocabulary ahead of time. That is, we write and share the API documentation before the two machines meet. And even if we ignore the fact that humans can misinterpret the documentation, there is clearly a problem if someone changes the API and we are using outdated documentation.

Keeping the API documentation in sync with the implementation is very hard. But maintaining the client up to date with API changes is even harder.

Versioning

The problems with synchronicity lead us to the problems with versioning. Since most of the API Turks do not follow Fielding’s REST principles, API clients are usually tightly coupled to the interfaces they use. This tight coupling creates a very fragile system: a change in the API can easily break its clients. Furthermore, human intervention is needed to adapt an existing client to an API change. But relying on people to react to API changes is costly, slow, and in most cases not feasible because the clients are already deployed.
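To make this fragility concrete, here is a minimal sketch; the response shape and field names are made up for illustration, not taken from any real API:

```python
def get_temperature(payload: dict) -> float:
    # The client is hard-wired to one exact response structure.
    return payload["current"]["temp_c"]

# Works against the response documented at integration time:
print(get_temperature({"current": {"temp_c": 21.5}}))

# A seemingly harmless rename on the provider's side breaks the deployed
# client until a human re-reads the documentation and ships a fix:
try:
    print(get_temperature({"current": {"temperature_celsius": 21.5}}))
except KeyError as missing_field:
    print(f"Client broke on a renamed field: {missing_field}")
```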

Because of these problems, we are afraid to make changes. We aren’t evolving our APIs. Instead, we build new APIs on top of existing APIs, polluting the API codebase. We are increasing costs and technical debt, and having endless discussions about how to solve versioning problems.

Scaling

Since there are humans hidden in API Turks, we need to hire more people to build more API Turks. And also to make more errors. We are humans; we make errors.

No matter how many people we hire, we can move only so fast when it comes to writing or reading documentation or adapting an existing program to API changes. Hiring more people hardly scales for building many APIs, and it doesn’t scale at all for improving the reaction speed.

Finally, with often-vague vocabularies, there is plenty of room for error and missed expectations. Where one person expects a post’s title, another person might see an article’s heading.

“Wait, they aren’t the same?!?”

Discovery

Lastly, there is a problem with API discovery. How do we learn there is a service we might want to use? Perhaps there is already a service that would enable us to build the next big thing, or simply save us some precious time.

API providers don’t know how to market themselves. It doesn’t matter if there is a better geolocation service than the Google Places API; we will never learn about it.

Word of mouth and Google search are lousy tools for API discovery. And as with every human-operated tool, this isn’t going to scale.

The Solution so Far

Over the last decade, we have been developing processes and tools to address these problems. Together with bringing more people in, we have created the API industry: the economy that produces and sustains our new Mechanical Turks.

API workflows, API style guides, API documentation best practices, and other company processes are deployed to ensure synchronicity, prevent breaking changes, and mitigate human errors. We build more tools to bolster these processes and to support our API products.

We have started to generate documentation and code to achieve synchronicity. We are developing complex test suites and hiring more developers to sustain all of this. It isn’t uncommon for a company to have a dedicated developer team just for API documentation. Let me rephrase that: we are employing developers to build documentation for other developers, so that they can understand a machine’s interface and program another system to make use of it.

As a friend of mine says:

“Programmers tend to solve programming problems by more programming.”

As far as discovery goes, those API providers fortunate enough to have the money spend it on marketing and PR activities. Others hope to get some spotlight by enlisting in API catalogs or by being noticed and remembered on Hacker News.

Human role in M2M Communication

So why are people needed in API Turks? What is the human’s role in machine-to-machine communication?

People play a critical role in API discovery and understanding. Once we find a service, we have to understand WHAT exactly we can do with it and HOW to do it.

For example:

(API Discovery & WHAT): “Is there a service that gives me a weather forecast for Paris?”

(HOW): “How can I use this service to retrieve the weather forecast for Paris?”

Once we have the answers, we can program the API client. The client will then work autonomously until the API (or our needs) changes. Of course, this assumes the API documentation is always in sync with the implementation.

Autonomous APIs

If the human role is expensive, slow and error-prone, how can we eliminate it? What would it take to create fully autonomous APIs?

First, we need a way to develop and share domain vocabularies. The next step is to start sharing understanding at runtime. Then an API discovery system can register a new API together with its vocabulary.

The workflow in an autonomous system without human intervention could work as follows:

A machine exposes its interface together with a profile describing the interface and its vocabulary. This service registers itself with an API discovery service.
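As a sketch of what such self-registration might look like (the registry URL, the profile format, and the vocabulary terms below are all hypothetical):

```python
import json

# Hypothetical registration payload: a machine-readable profile of the
# interface, expressed in a shared vocabulary instead of prose documentation.
registration = {
    "service": "https://weather.example.org/api",
    "profile": {
        "vocabulary": "https://vocab.example.org/weather",
        "affordances": ["WeatherForecast", "temperature", "location"],
    },
}

# On startup, the service would announce itself to the discovery service,
# e.g. requests.post("https://discovery.example.org/register", json=registration)
print(json.dumps(registration, indent=2))
```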

Later, another program queries the API discovery service using terms from a vocabulary. If found, the API discovery service returns the matching service back to the program.

The program (now an API client) has already been trained to work with the requested vocabulary. It can now use the API to perform the actions needed.

The client is programmed declaratively for a particular task, not tightly coupled to a distinct service interface at a particular point in time.

To illustrate, here is a sketch of a program that displays the temperature in Paris. The discovery service, vocabulary terms, and helper functions are illustrative assumptions, not an existing library:
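```python
from dataclasses import dataclass


@dataclass
class Service:
    name: str
    endpoint: str


def discover(terms: list[str]) -> Service:
    """Ask an API discovery service for a service that speaks these terms."""
    # In an autonomous setup this would be a network call to the registry;
    # here it returns a canned answer to keep the sketch self-contained.
    return Service(name="acme-weather", endpoint="https://weather.example.org/api")


def read_property(service: Service, location: str, term: str) -> float:
    """Read the value the service publishes under a given vocabulary term."""
    # A real client would follow the service's profile (e.g. ALPS or JSON-LD)
    # to locate the affordance for `term`; the value here is stubbed.
    return 8.5


# The program is written against the vocabulary, not against one service's
# concrete URL structure or response shape:
weather_api = discover(terms=["WeatherForecast", "temperature"])
temperature = read_property(weather_api, location="Paris", term="temperature")
print(f"Temperature in Paris: {temperature} °C")
```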

Such an approach not only allows for clients resilient to API changes, but it also brings reusability across multiple APIs.

For example, you would no longer develop a weather application for a particular service. Instead, you would build a generic client that knows how to display a weather forecast. This application could use various services such as AccuWeather, Weather Underground, or any country-specific weather forecast provider, as long as the service uses (a portion of) the same vocabulary.

To wrap it up, the building blocks of Autonomous APIs are:

1. Vocabulary Registry
2. Understanding shared at runtime
3. API discovery service
4. Programming for vocabulary, not data structure

The Arrival

So where are we with all of this at the start of 2017? The good news is we have the building blocks, and they are getting traction.

We are starting to share understanding at runtime. HATEOAS controls are shared using one of the hypermedia formats. Adoption of the JSON-LD format is increasing across the API industry, and search providers like Google, Microsoft, Yahoo, and Yandex are backing the Schema.org vocabulary.
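As a small illustration, a JSON-LD @context lets a provider bind its local field names to shared Schema.org terms, which dissolves the “title versus heading” confusion from earlier. The document below is made up; the mechanism is standard JSON-LD:

```python
# Made-up JSON-LD document; the mechanism (an @context that maps local
# field names to shared Schema.org terms) is standard JSON-LD.
article = {
    "@context": {
        # Whatever the provider calls the field locally, the context
        # declares that it means Schema.org's "headline".
        "title": "https://schema.org/headline",
    },
    "@type": "https://schema.org/BlogPosting",
    "title": "The Future of APIs",
}

# A vocabulary-aware client resolves keys through the context instead of
# hard-coding the provider's spelling of the field name:
shared_term = article["@context"]["title"]
print(f"{shared_term} -> {article['title']}")
```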

Formats such as ALPS are giving us the power to carry semantic information on both data and affordances. At the same time, a GraphQL schema can be introspected at runtime to discover the surface of a GraphQL API.
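For instance, the standard introspection query can be sent to a GraphQL endpoint at runtime to list the types it exposes; the endpoint URL below is hypothetical, and the snippet is only a sketch:

```python
import requests  # third-party HTTP client

# The standard GraphQL introspection query (abridged to types and entry point).
INTROSPECTION_QUERY = """
{
  __schema {
    queryType { name }
    types { name kind }
  }
}
"""

response = requests.post(
    "https://api.example.org/graphql",  # hypothetical GraphQL endpoint
    json={"query": INTROSPECTION_QUERY},
)

# The result describes the API's surface, discovered at runtime rather than
# read from hand-written documentation.
for gql_type in response.json()["data"]["__schema"]["types"]:
    print(gql_type["kind"], gql_type["name"])
```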

Finally, dedicated API catalogs are starting to appear, with HitchHQ and Rapid API leading the movement.

Conclusion

In my version of the future of APIs, we will eliminate the human element from API documentation, discovery, and consumption. We will start programming API clients declaratively, using vocabularies and information shared at runtime.

Doing so will reward us with reduced costs, fewer errors, and reduced time to market. With autonomous APIs, we will finally be able to evolve APIs, reuse clients, and scale APIs infinitely.
