Designing APIs for AI: The Need for New Standards
APIs weren’t built for AI. This simple fact is creating friction as companies rush to integrate AI into their products. While humans can interpret vague documentation or infer missing details, Large Language Models (LLMs) often fail when faced with traditional API specifications.
The tech industry has noticed. New protocols are emerging specifically designed for AI-API interaction. Anthropic’s Model Context Protocol (MCP) standardizes how applications expose their tools and data to AI models. These are early steps toward making APIs truly AI-native.
Understanding Current API Standards
Current API standards assume human intelligence. Developers reading OpenAPI documentation bring years of implicit knowledge with them. They know what rate limits mean in practice. They understand how to handle a 429 error. They can read between the lines of API documentation.
LLMs can’t do any of this. When documentation says “use this endpoint sparingly,” human developers automatically think about caching and batching requests. LLMs miss these nuances entirely. They can’t infer unstated best practices or understand implicit constraints.
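To make that concrete, here is the kind of unstated behavior a human developer writes on autopilot when an API returns HTTP 429: honor the Retry-After header, back off exponentially otherwise, and give up after a few attempts. A minimal sketch (the endpoint URL is whatever you happen to be calling; nothing here is specific to one API):

```typescript
// The unwritten rule behind "HTTP 429 Too Many Requests": honor Retry-After,
// back off exponentially when it's absent, and stop after a few attempts.
async function fetchWithBackoff(url: string, maxRetries = 5): Promise<Response> {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    const res = await fetch(url);
    if (res.status !== 429) return res;

    // Retry-After may give a delay in seconds; otherwise back off exponentially.
    const retryAfter = Number(res.headers.get("Retry-After"));
    const delayMs = Number.isFinite(retryAfter) && retryAfter > 0
      ? retryAfter * 1000
      : 2 ** attempt * 1000;
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
  throw new Error(`Still rate limited after ${maxRetries} attempts: ${url}`);
}
```

None of this appears in a typical OpenAPI file. It lives in the developer’s head, which is exactly where an LLM can’t reach.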
Take GraphQL. Its query language makes perfect sense to humans. Its error messages assume human debugging patterns. Its entire introspection system was built for developers to explore and learn. None of this works well for AI consumption.
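To see why, consider the standard introspection query a developer might run in GraphiQL (the endpoint URL below is a placeholder). It returns the schema’s shape, which is exactly what a human needs for exploration, and almost nothing an AI needs to call the API responsibly: no costs, no rate limits, no hint of which fields are expensive to resolve.

```typescript
// A standard GraphQL introspection query: ideal for a developer browsing a
// schema, but silent on cost, pagination norms, and rate limits.
const introspectionQuery = `
  {
    __schema {
      queryType { name }
      types { name kind description }
    }
  }
`;

// Hypothetical endpoint; any GraphQL server answers this query the same way.
const response = await fetch("https://api.example.com/graphql", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ query: introspectionQuery }),
});
console.log(await response.json());
```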
New Protocols for AI Integration
The industry is responding. Anthropic’s MCP gives services a standardized way to tell AI models what they can and can’t do with an API. Think of it as a capability-negotiation protocol between AI systems and the services they’re trying to use.
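A simplified sketch of the shape involved: MCP runs over JSON-RPC 2.0, and a server answers a tools/list request by describing each tool with a name, a description, and a JSON Schema for its inputs. The envelope below follows the published spec; the searchOrders tool itself is invented for illustration.

```typescript
// Simplified sketch of an MCP server advertising a tool over JSON-RPC 2.0.
// The response structure follows MCP's tools/list; the tool is hypothetical.
const toolsListResponse = {
  jsonrpc: "2.0",
  id: 1,
  result: {
    tools: [
      {
        name: "searchOrders",
        description: "Search customer orders by status and date range.",
        inputSchema: {
          type: "object",
          properties: {
            status: { type: "string", enum: ["open", "shipped", "cancelled"] },
            since: { type: "string", format: "date" },
          },
          required: ["status"],
        },
      },
    ],
  },
};
```

Given a description like this, a model no longer has to guess argument names or types from prose documentation; the contract is explicit and machine-checkable.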
Both Wildcard and Composio are tackling the fundamental challenge of making APIs inherently understandable by AI systems. Wildcard’s Agents.json reimagines how APIs should communicate their capabilities and boundaries to AI consumers. Similar to how robots.txt guides web crawlers, Agents.json aims to provide AI systems with clear, structured information about how to interact with APIs effectively.
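The Agents.json schema is still evolving, so treat the following as an illustration of the robots.txt-for-agents idea rather than the spec itself: every field name below is hypothetical. The point is that constraints humans learn from prose become explicit, structured data.

```typescript
// Purely illustrative: what a machine-readable "API guide for agents" could
// declare. These field names are invented, not the actual Agents.json spec.
const agentGuide = {
  api: "https://api.example.com",
  auth: { type: "apiKey", header: "X-Api-Key" },
  constraints: {
    rateLimit: { requests: 100, per: "minute" },
    retryOn429: { strategy: "exponential-backoff", maxRetries: 5 },
  },
  actions: [
    {
      name: "createInvoice",
      endpoint: "POST /invoices",
      precondition: "Customer must exist; call GET /customers first.",
      sideEffects: "Sends an email to the customer.",
    },
  ],
};
```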
Composio is working along similar lines, developing specifications that could reshape how APIs are described and consumed by AI systems. These aren’t just layers on top of existing standards — they’re new approaches to making APIs AI-native from the ground up.
Changes in the API Ecosystem
Companies building APIs face new questions. Should they maintain separate specifications for human and AI consumers? How can they serve both audiences effectively? The challenge goes beyond technical specifications.
APIs encode business rules through rate limits, pricing tiers, and usage restrictions. These rules need translation into formats AI systems can understand and respect. AI needs to know not just how to make an API call, but when it’s cost-effective to do so.
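There is no standard for this yet. As a sketch of what machine-readable cost awareness could look like (the catalog, prices, and endpoints below are all invented), an agent would need roughly this before every call:

```typescript
// Hypothetical machine-readable cost metadata, and the kind of check an AI
// agent would need to run before spending money on an endpoint.
interface EndpointCost {
  endpoint: string;
  pricePerCallUsd: number;
  rateLimitPerMinute: number;
}

const catalog: EndpointCost[] = [
  { endpoint: "POST /search", pricePerCallUsd: 0.002, rateLimitPerMinute: 60 },
  { endpoint: "POST /enrich", pricePerCallUsd: 0.05, rateLimitPerMinute: 10 },
];

// Only call an endpoint when its price fits the task's remaining budget.
function isCostEffective(endpoint: string, budgetUsd: number): boolean {
  const entry = catalog.find((e) => e.endpoint === endpoint);
  return entry !== undefined && entry.pricePerCallUsd <= budgetUsd;
}
```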
We’re moving toward APIs that must be bilingual — speaking clearly to both humans and AI systems. Some companies are already building dual specifications. Others are waiting to see how standards evolve.
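One plausible path to bilingual specs is OpenAPI’s existing extension mechanism: vendor fields prefixed with “x-” are legal in any OpenAPI document. The specific x-ai-hints fields below are invented, purely to show the shape such annotations could take.

```typescript
// One spec, two audiences: a standard OpenAPI operation for human readers,
// plus hypothetical "x-ai-hints" extension fields for AI consumers.
// OpenAPI's "x-" extension mechanism is real; these particular fields are not.
const listOrdersOperation = {
  summary: "List recent orders",
  operationId: "listOrders",
  responses: { "200": { description: "A page of orders" } },
  "x-ai-hints": {
    costTier: "low",
    idempotent: true,
    maxCallsPerTask: 3,               // invented budget hint for agents
    preferOver: "GET /orders/export", // cheaper bulk alternative exists
  },
};
```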
The Path Forward
The rise of AI is reshaping how we think about APIs. OpenAPI and GraphQL revolutionized human-API interaction. Now MCP and Agents.json are taking first steps toward standardizing AI-API communication.
The industry needs new standards bridging human and AI API consumers. Today’s protocols might evolve or be replaced. But their core mission — making APIs truly AI-native — will only grow more important as AI systems become more prevalent.
Companies and developers should watch these emerging standards closely. Tomorrow’s APIs will need to serve both human and AI consumers effectively. That future is being built now.