An intuition pump: APIs as cognitive tools — Part 1
meaning and concepts in programmatic interfaces
For a proper background to enjoy this post, take a look at these links:
Intuition pump: an overview of how philosophical tales and thought experiments are powerful tools for productive and generative thinking.
intentional stance (briefly, the semantic layer with which we explain causal relations; the collection of meanings and their relations that results from cognitive processes and experiences).
Two Black-Boxes experiment (Dennett, 2010), a thought experiment about two connected machines embedding the same contextualised knowledge of the world, and how they can be reverse-engineered to yield collections of meanings (e.g. abstractions that enrich somebody’s intentional stance).
Hydra Ecosystem, a set of tools to design and deploy semantically linked Web APIs, in early development by myself and other contributors in this organisation. In Part 2 we will try to explain how a client-server system works in terms of the concepts built from the Two Black-Boxes Experiment.
If you are quite confident in your understanding of what a Web API is, you can skip to paragraph II.
I. APIs (Application Programming Interfaces) are powerful layers between programs that allow “wiring” of different modules (possibly created by different contributors) with simple programmatic function calls. Software integration is eased by just calling functions and forgetting about the underlying implementation; APIs are like membranes with well-documented receptors that accept specific proteins and produce expected results. They are bridges over implementation intricacies: a program can ask for an operation and expect the right value in return, without knowing anything about the complexity required to compute it. Every layer and module of a program is held together by a multiplicity of APIs; at their simplest, these map a name to an operation. Call “ADD 1 3” to receive “4”: ADD is the name, followed by the inputs; what ADD is supposed to do is usually well documented, so the outcome is an expected output ready to be immediately used by the caller.
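The simplest description above, an API as a map from names to operations, can be sketched in a few lines of Python. The names `ADD` and `MUL` and the `call` helper are hypothetical, invented only for illustration:

```python
# A minimal sketch of an API as a mapping from names to operations.
# The caller only knows the name and the inputs; the implementation
# behind each name stays hidden.

def add(a, b):
    return a + b

def mul(a, b):
    return a * b

API = {"ADD": add, "MUL": mul}

def call(name, *args):
    """Look up the named operation and run it, hiding the implementation."""
    return API[name](*args)

print(call("ADD", 1, 3))  # the caller asks for "ADD 1 3" and gets 4 back
```

The caller never touches `add` directly; it only knows the well-documented name, which is exactly the bridge over implementation intricacies described above.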
Extending this concept to Web APIs using a client-server paradigm implies having two different specialised systems that ask and answer computational questions; the only difference is that the name to be called is a URL (not “ADD” but “example.com/add”), and there are vastly more complicated protocols for the exchange of inputs and outputs. Most current usage of Web APIs sounds more like, for example, “RETURN DATUM x FOR USER y”: the REST paradigm, in particular, is about providing data rather than performing operations; for operations like “ADD x y” other paradigms apply, such as RPC.
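To make the shift from “ADD” to “example.com/add” concrete, here is a minimal sketch that dispatches URLs to operations locally. The URLs, routes and returned data are hypothetical, and a real Web API would of course involve HTTP, serialisation and error handling on top:

```python
from urllib.parse import urlparse, parse_qs

# The "name" is now a URL path instead of a bare word. One route is
# RPC-style (perform an operation), the other REST-style (return a datum).
ROUTES = {
    "/add": lambda params: int(params["x"][0]) + int(params["y"][0]),
    "/user": lambda params: {"id": params["id"][0], "name": "Alice"},  # placeholder datum
}

def web_call(url):
    """Dispatch a URL to the operation (or resource) it names."""
    parts = urlparse(url)
    return ROUTES[parts.path](parse_qs(parts.query))

print(web_call("http://example.com/add?x=1&y=3"))  # RPC-style: perform "ADD 1 3" -> 4
print(web_call("http://example.com/user?id=42"))   # REST-style: "RETURN DATUM FOR USER 42"
```

The mapping from name to operation is unchanged; only the naming scheme (URLs) and the transport around it grow more elaborate.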
II. A system that uses Web APIs is probably a geographically and semantically distributed artifact; it is the task of the developer who designs programs to take advantage of a call to a faraway service. Hopefully these URIs and their inputs and outputs are well documented. This is, obviously, the foundation for Anything as a Service (I will avoid using an acronym here) as we know it. The developer, according to his/her intentional stance, provides the context for creating a sequence of operations that becomes an effective process; in the larger picture, the process is part of a product and/or an endeavour.
Leveraging Semantic Web technologies (see Linked Data), developers try to add a layer of abstraction to their Web APIs to allow more powerful applications. In particular, W3C Hydra allows semantic annotation of Web APIs using Linked Data standards. What does this mean, and how can we describe the level-up that these technologies allow? How is the role of software and data exchange improved by the addition of context (via meaning/semantics) to processes? How, if possible, can APIs be programmatic intuition pumps for software designers/developers in the scope of a particular domain?
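To give a feel for what such an annotation looks like, here is a schematic fragment, written as a Python dict standing in for a JSON-LD document. The class `vocab:User` and the operation details are hypothetical; the real terms and their exact semantics are defined by the Hydra Core Vocabulary:

```python
# A hand-written sketch in the spirit of a Hydra/JSON-LD API description.
# It tells a generic client, in machine-readable form, what the API's
# resources mean and which operations they support.
api_doc = {
    "@context": "http://www.w3.org/ns/hydra/context.jsonld",
    "@type": "ApiDocumentation",
    "supportedClass": [{
        "@id": "vocab:User",
        "supportedOperation": [{
            "@type": "Operation",
            "method": "GET",
            "returns": "vocab:User",
            "description": "Retrieve a User resource",
        }],
    }],
}

# A generic client can read the annotation instead of hard-coding the call:
ops = api_doc["supportedClass"][0]["supportedOperation"]
print([op["method"] for op in ops])  # -> ['GET']
```

The point is that the client discovers what it can do from the annotation itself, rather than from out-of-band human documentation; the context (meaning) travels with the API.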
III. We can try to build a framework to answer the questions above by drawing a parallel between the client-server system we created at Hydra Ecosystem and the ideal system described by the Two Black-Boxes Experiment (TBBE); this gives us a two-way perspective on what intuition pumps are, and also on what Semantic Web APIs can achieve.
Roughly (please take a glimpse at the linked paper), TBBE delivers a narration from the point of view of some rational investigators analyzing a system made of two boxes (A and B) with completely hidden mechanisms, connected by a cable. The first box has two buttons (alpha and beta); the second box has three bulbs (Red, Green and Amber). Here we can already find an affinity with APIs: both boxes are themselves (sub-)systems accessible via interfaces. The investigators can use the buttons (interface) on box A, box A can use the cable to send a signal to box B, and box B returns a value via the bulbs (their interfaces are highly non-uniform, a very bad, non-standard situation; exactly the opposite of well-designed software).
Investigators and boxes are all parts of a client-server system in which the machinery of each part is completely inscrutable to the others (a very badly documented set of APIs). Moving on with the investigation, it becomes evident that the system has a very basic behaviour: pressing alpha turns the Red light on, pressing beta turns the Green light on, and the Amber light seems never to be triggered. This is, though, a very limited set of experiments, and the lack of documentation does not allow the investigators to know how the single parts function.
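The investigators' outside view can be sketched as two opaque callables joined by a “cable”; the trivial internals below are invented placeholders, since in the story the real mechanisms are hidden:

```python
# The system as the investigators first see it: interfaces only.
# Box A exposes two buttons; box B exposes three bulbs.

def box_a(button):
    # Hidden encoding: the investigators only see that a signal goes out.
    signal = {"alpha": 1, "beta": 0}[button]
    return signal  # sent down the cable

def box_b(signal):
    # Hidden decoding: anything unexpected would light Amber.
    return {1: "Red", 0: "Green"}.get(signal, "Amber")

for button in ("alpha", "beta"):
    print(button, "->", box_b(box_a(button)))  # alpha -> Red, beta -> Green
```

From the outside, this is all the syntactic engine shows: a button-to-bulb correlation with no hint of why it holds.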
The investigators move on to disassembling each part. Briefly (all the details are in the paper), they find out that each box is a highly evolved expert system; each was created by engineers with quite different backgrounds, but both rely on the same assessments of true/false statements about some contextualised knowledge. Eventually the full system’s real behaviour emerges: box A was stating a truth (button alpha) or a falsehood (button beta), and box B was confirming a truth (bulb Red), a falsehood (bulb Green), or a malformed statement (bulb Amber). The correlation between buttons and bulbs was not the result of a direct wiring of circuits but, instead, of “agreement” about statements grounded in the common picture of the “world” that the two engineers share. The TBBE is full of nuances about how the investigation is carried out, and it is very intriguing in establishing boundaries about what is known and what this knowledge implies. I leave further discoveries to the reader.
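What the disassembly reveals can be sketched the same way: box A emits a statement about a shared world, and box B evaluates it against that same world. The “world” facts below are invented placeholders for the engineers' common knowledge:

```python
# The inside view: the button-bulb correlation is mediated by shared
# contextualised knowledge, not by a direct wire.
WORLD = {"grass is green": True, "snow is black": False}

def box_a(button):
    # alpha emits a truth, beta emits a falsehood (encoded as a sentence)
    return "grass is green" if button == "alpha" else "snow is black"

def box_b(statement):
    if statement not in WORLD:
        return "Amber"  # malformed / unknown statement
    return "Red" if WORLD[statement] else "Green"  # confirm truth / falsehood

print(box_b(box_a("alpha")))             # Red: a truth was stated and confirmed
print(box_b(box_a("beta")))              # Green: a falsehood was stated and detected
print(box_b("colourless ideas sleep"))   # Amber: not a well-formed claim here
```

The observable behaviour is identical to the outside view, but the “agreement” now runs through a shared `WORLD`, which is exactly where the meanings live.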
My take on this great thought experiment: TBBE is, generically, about analytically tackling the behaviour (syntactic engine) of a system and trying to establish an intentional stance in the mind of an investigator. A syntactic engine is the manifestation of patterns (simply, the observed behaviour) of a system (i.e. a non-random sequence of 0s and 1s, a sequence of characters, etc.); the investigators do not accept the first superficial observation of patterns, they decide to find out the real flow of information between the boxes by opening them, and they discover very high complexity in data and encoding. Initially their map of meanings (stance) is very limited, and the syntactic engine they witness is so simple that it doesn’t really deserve explanation (one button, one bulb); but as the investigation goes on and data and patterns become highly explanatory of a complex behaviour, their stance evolves to mirror what they observe: a two-component expert system and all the meanings that the components embed (the shared “world” of the engineers who, far away from each other, created the two black boxes).
Every cognitive process tries to make sense of something that initially looks like gibberish (highly probable, noise) by analysing recurrences of patterns (highly improbable) and assigning them relations within maps of meanings (Predictive Coding tries to explain in more detail this concurrent bottom-up and top-down process of adjusted predictions between “syntactic engines” produced by sensorimotor data and high-level representations). What we initially observe is a syntactic engine made of data streams; there is no “cognition happening” in there, and our initial stance is confused. As the analytical processing goes on, the stance improves and we can better “make sense” of the data.
As we can imagine, using a clean and well-documented API is a great way of entering a specific domain. If you are developing software for any purpose, or starting from a specialised background, APIs are gateways to cross-domain awareness; they are great tools for organising knowledge and, by consequence, for learning principles and making users more familiar with specific domains. Are APIs programmatic intuition generators? Maybe not, but I like the idea of looking at APIs as a valuable entry point to many scientific domains.
On the cognitive side, this experiment can be read as a good hint about the strict relationship between meaning and context. Intelligence (information) emerges from data and patterns (from a syntactic engine); assigning meanings to patterns is the domain of a stance, and these meanings, once “discovered”, are strictly bound to a context (the network of relations in which meanings are defined). The intentional stance “stores” meanings and the relations among those meanings (context); the generation of contextualised meanings is a big part of our cognitive activities. In the domain of software engineering, the best way to deliver meanings is to work with clean and well-documented APIs.
Intuition pumps are well-established thinking tools in philosophy. Is it possible to use API functions as building blocks to construct intuition pumps for specific domains? E.g., what level of understanding of mathematics does reading a math library’s API provide? Does a software engineer’s mind represent intentionality in terms of APIs? Is this useful for acquiring actual knowledge of a domain?
These questions are important because understanding how we experience knowledge and how we learn is vital; that is the field of Cognitive Science.