AD4M & Flux: Beyond Applications

Nicolas Luck
Coasys
Apr 3, 2023


After some years of informal partnership with the Flux (and former Junto) team, we recently agreed to formally merge, and spawn an organisation that both stewards the development of AD4M as an open-source framework and builds Flux and other apps on AD4M as well as services for the projected AD4M ecosystem.

This article combines a historic overview and current state of the project with an introduction to the core ideas of AD4M.

TL;DR: AD4M [pronounced: Adam] is a new distributed application framework and social network engine, extending Holochain’s agent-centric philosophy and architecture into a spanning layer in order to transcend and include other (distributed and centralised) data storage, sharing and integrity solutions. Its core innovation is the introduction of a layer of subjective, semantic Perspectives as the main interface for interacting with shared data, which allows for seamless interoperability between apps. Ultimately, AD4M provides a way to replace the whole concept of applications with Social Contexts, Neighbourhoods, Social DNA and Social Organisms as the scope for semantic integrity and coherent communication.

So it’s about time for an update.

Almost five years ago, in “Holochain — Reinventing Applications”, I talked about an agent-centric browser based on a social API that would allow for much greater interoperability and evolvability of a distributed social network — all built on Holochain — basically extending its agent-centric architecture beyond the scope of monolithic apps and rethinking how users could interact with a social-network-like ecosystem of Holochain DNAs/DHTs.

Shortly after publishing that article in March 2018, we conducted a pretty successful funding campaign for Holo and were very busy building Holochain. There were some reactions to this article, but it wasn’t until I left the core team for a sabbatical in 2020 that I had the space to really engage in this direction.

Among those who reacted were Eric Yang and Josh Parkin, who were building Junto at the time. Eric and Josh had visited me in Germany in 2019, and then again in 2020 when I had more time and space. They were excited to see some of these ideas, especially dynamic and user-defined expression languages (i.e. pluggable DNAs/DHTs), implemented in Junto. Personally, we connected very well, and I found myself on weekly calls with Josh during my free time in 2020, hacking away and helping him architect the Holochain implementation of Junto, based on the direction I had laid out in ‘Reinventing Applications’.

There was an interesting constraint the Junto project was facing that would end up unlocking one of AD4M’s most compelling and powerful architectural principles.

The Junto team had already built a centralised alpha version of the app and started using it with a few thousand alpha users — and they wanted to migrate over to Holochain slowly, using both the centralised back-end and the Holochain hApp at the same time.

As sketched out in my previous article, I had already thought about an “inter-app URL-schema” to point to objects in different hApps, and about storing these links in what I called a Social Context — i.e. a kind of meta-hApp / DHT. With Junto wanting to keep using their back-end alongside Holochain, I realised two things:
1. Once we have this meta-DHT that stores URLs pointing into other hApps, we can treat the centralised back-end in the same way we treat these hApps and have the URLs encode what back-end they are pointing to — centralised API or a hApp. So that way we can mix data from different storage back-ends within the same Social Context.
2. Doing this while keeping most of the application logic within the storage back-ends would produce a lot of redundancy and a maintenance hell as the app development progresses. The logical next step seemed to be shifting some (or most?) of the application logic over to these meta-DHTs / Social Contexts. Basically, what that would imply is using associations between the objects (or object references, i.e. URIs) to encode application states. While this seemed foreign at first sight, I realised that it is fundamentally not too different from the semantic web approach — combining objects referenced by URI in subject, predicate, object triples to form a meaningful statement.
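To make the second point concrete, here is a toy sketch (all names and URIs invented for illustration — this is not AD4M code) of application state encoded as subject-predicate-object links between URIs, where the URI scheme tells us which back-end an object lives in:

```typescript
// A link is a semantic-web-style triple over object references (URIs).
type Link = { source: string; predicate: string; target: string };

// The Social Context / meta-DHT is, at its core, just a collection of such links.
const socialContext: Link[] = [];

function addLink(source: string, predicate: string, target: string): void {
  socialContext.push({ source, predicate, target });
}

// A comment held in a centralised API, attached to a post stored in a hApp —
// both referenced inside the same Social Context. Application state lives in
// the associations, not in any single storage back-end.
addLink(
  "junto-happ://Qm1234abcd",             // hypothetical hApp-backed post
  "has-comment",
  "https://api.example.com/comments/42"  // hypothetical centralised resource
);
```

The point is that the triple store is agnostic about where its URIs resolve to, so centralised and distributed back-ends can coexist behind the same associations.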

Realising this, it felt like something clicked into place, and I could see that what I had called Social Context so far was just a specialised case of a deeper concept: a constellation of subjective associations between objects (i.e. objectively experienced, structured entities), which I started to equate with and call Perspective. There are not only social contexts, but subjective contexts (framing objective information) in the general case. The relations between the parts inform the nature of the whole constellation — the Gestalt.

For humans to really understand each other, it is more important, I believe, to communicate this contextual information (“What does this expression mean in this context, versus another context?”) instead of assuming an objective meaning of a singular expression.

Apps, the way we build and use them so far, usually only store objective data and convey this contextual meaning through their UI, which then needs to be the same for all users.

Social Contexts — digital spaces in which users communicate complex meaning — can be modelled as shared, or at least overlapping, Perspectives — i.e. shared associations. AD4M’s term for these shared Perspectives is Neighbourhoods. But let’s take one step at a time…

A New Hope

This insight into Perspectives happened around August 2020. Since it felt like something I had been dwelling on for quite some time finally falling into place, I decided to actually go for it and build an MVP of such a system. A decision that resulted in a never-ending sabbatical…

Not too long after this, several weeks into me building the first core components, the Junto team decided to start building a side app to Junto, one that put communities into focus. Josh, being close to my work, decided to build it on AD4M (which was called ACAI back then) from the start. For that purpose we created the Perspect3vism GitHub organisation and split my initial work into separate components and repositories, so that he could use the same core and back-end components with a different UI.

That Junto side project was renamed to Flux and became the main project. What started as a collaboration between Josh and me grew over the course of more than two years, and eventually merged into a team effort between Flux and Perspect3vism.

I am very happy that with the beginning of this year, 2023, we finally formalised this co-creation and are now actually going forward as one team, working together on Flux, AD4M and future apps and components within the projected AD4M ecosystem!

So that might be a good moment to talk about what we built so far, and where we’re heading…

Agents, Languages and Perspectives

Full explanation of AD4M — from Languages through Perspectives to Social Organisms

AD4M is an abbreviation for “Agent-centric Distributed Application Meta-ontology”. The core of this meta-ontology consists of 3 classes: Agents, Languages and Perspectives with the following connotations:

Agents are users, represented by their self-sovereign identity through cryptographic keys, referenced by DID URIs.

Languages are meant as the tools agents use to exchange (objective) expressions. Languages both define the expression syntax and are the components that store (and distribute) the actual data. Instead of Languages and Expressions we could say “Storage” and “Object”, or “Network” and “Entry” — but these words would not convey that everything in there needs to be authored and signed by an agent.

Expressions are basically data objects that were spoken into being by an Agent, using a chosen Language.
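As an illustration of that authorship requirement, an Expression can be pictured as data plus provenance. The shape below is an invented sketch, not the actual AD4M type definitions:

```typescript
// Hypothetical sketch: an Expression is a data object "spoken into being"
// by an Agent — it always carries an author (DID) and a signature.
interface Expression {
  author: string;                              // DID URI of the authoring agent
  timestamp: string;                           // when it was created
  data: unknown;                               // Language-specific payload
  proof: { signature: string; key: string };   // cryptographic proof of authorship
}

const example: Expression = {
  author: "did:key:zExampleAgent",             // hypothetical DID
  timestamp: new Date().toISOString(),
  data: { text: "Hello, Neighbourhood!" },
  proof: { signature: "<sig>", key: "<pubkey>" }, // placeholders, not real crypto
};
```

Whatever the Language, every Expression retrievable through it comes wrapped with this agent attribution — that is what distinguishes it from a plain "entry" in a generic storage layer.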

It’s not by accident that these concepts, Agents and Languages, map perfectly to what Holochain already is (with the exception of DIDs as the agent references*). Every Holochain DHT can be treated as an AD4M Language with a bit of wrapping JS code — and so can other storage layers.

(*but we have a plan for creating cryptographically secure mappings between DIDs and Holochain agent keys…)

To make a logical space with multiple Languages practically usable, we need a way to address Expressions across different Languages. AD4M sticks to the common URI standard with one trick: using a Language’s source-code hash as the URI scheme. This results in…
1. Every expression being uniquely and globally addressable through:
<language address>://<language specific expression address>
2. Backwards compatibility with the web through HTTPS resources being expressions of a special HTTPS Language.
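The addressing convention above can be sketched as a small parser — a minimal illustration of the `<language address>://<expression address>` split, with hypothetical hashes:

```typescript
// Split an AD4M-style expression URI into its Language part (the scheme)
// and the Language-specific expression address.
function parseExpressionURI(uri: string): { language: string; expression: string } {
  const idx = uri.indexOf("://");
  if (idx < 0) throw new Error(`Not a valid expression URI: ${uri}`);
  return { language: uri.slice(0, idx), expression: uri.slice(idx + 3) };
}

// A web page is just an expression of the special HTTPS Language:
const web = parseExpressionURI("https://example.com/page");

// An expression in a content-addressed Language (hashes invented):
const img = parseExpressionURI("QmLangHash://QmExprHash");
```

Because the scheme is the Language's hash, resolving any URI begins with resolving the Language itself — which is what makes the whole space self-describing.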

Perspectives — the final and crucial addition to this meta-ontology — are the spaces in which these URIs are stored, associated and contextualised. Perspectives are really just graphs in which the nodes are Expression references (i.e. Expression URIs). Basic Perspectives are always local, private and fully controlled by the Agent/user.

AD4M provides an app/UI interface in which Perspectives are the main building block and access point.

AD4M as spanning layer

Subjective vs. Objective Data

AD4M is a spanning-layer that is located in-between application user interfaces and various storage back-ends — like Holochain, blockchains, IPFS and of course central databases and APIs.

Another way to summarise the three core concepts, Agents, Languages and Perspectives, would be to start with the user in the centre and then model subjective and objective data around them, attributed to two different domains:

The Languages store objective data: every agent resolving an expression URI will get the same objective information. In that sense, Expressions are objects.

Perspectives represent subjective data: agents are free to add new associations (i.e. links) to any of their Perspectives and introduce new semantic structures, and even newly developed Languages, that way. Perspectives are local and private in their basic form.

Apps (i.e. User Interfaces) approach this stack from the Perspective — the subjective — side. As an app developer, you will mostly interact with Perspectives, adding and removing links. You can treat AD4M Perspectives as your built-in graph database. Just remember that, in the general case, your code is not the only code accessing this graph. The malleability of Perspectives — just being graphs over Expressions — allows different UIs to interact with just the parts they are interested in.
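A minimal sketch of what "Perspective as graph database" means in practice — the class and method names here are illustrative stand-ins, not the actual AD4M client API:

```typescript
type Link = { source: string; predicate: string; target: string };

// Toy Perspective: a local, mutable graph of links between Expression URIs.
class Perspective {
  private links: Link[] = [];

  add(link: Link): void {
    this.links.push(link);
  }

  remove(link: Link): void {
    this.links = this.links.filter(
      l => !(l.source === link.source &&
             l.predicate === link.predicate &&
             l.target === link.target)
    );
  }

  // A UI queries only the sub-graph it is interested in; other apps'
  // links in the same Perspective are simply ignored by this query.
  outgoing(source: string): Link[] {
    return this.links.filter(l => l.source === source);
  }
}

const p = new Perspective();
p.add({ source: "note-lang://Qm1", predicate: "tagged-with", target: "tag://ideas" });
p.add({ source: "note-lang://Qm2", predicate: "tagged-with", target: "tag://ideas" });
```

Two different UIs reading `p` would each filter for their own predicates, which is how multiple apps can share one graph without stepping on each other.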

Having these two distinct domains (subjective Perspectives and objective Language Expressions) allows you to roam within the spectrum between subjective and objective data. For example, when implementing a calendar item, there are a few properties that are better made part of the objective Expression, and thus implemented in a respective calendar-event Language: exact time and place, and maybe the title.

But for interoperability reasons, it makes sense to model other associated data as subjective links in Perspectives — for instance: an agent’s planned attendance, text comments, tags of other agents… We could also add these to the calendar event Expression and make this information objective (and thus public to everybody who can install the Language), but what if two distinct groups want to share comments about a public event within their private group…?
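The calendar example can be sketched as follows — the field names, predicates and URIs are all invented for illustration, not a real calendar-event Language:

```typescript
// Objective side: properties baked into the Expression itself, identical
// for every agent who resolves it through the calendar-event Language.
const calendarEventExpression = {
  title: "AD4M community call",
  start: "2023-04-10T17:00:00Z",
  place: "https://meet.example.com/ad4m",  // hypothetical location URL
};

// Subjective side: annotations held as links in a (possibly shared)
// Perspective. Two distinct groups can keep different, private link sets
// about the very same public event.
type Link = { source: string; predicate: string; target: string };

const groupPerspectiveLinks: Link[] = [
  { source: "calendar-lang://QmEvent1",          // hypothetical event URI
    predicate: "has-comment",
    target: "note-lang://QmComment1" },
  { source: "calendar-lang://QmEvent1",
    predicate: "attending",
    target: "did:key:zAlice" },                  // hypothetical agent DID
];
```

Which properties belong on which side is a design decision per Language — the spectrum, not the split, is the point.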

Neighbourhoods & Social Organisms

If Perspectives are local, private and unstructured, what use can they be for multi-user, social and communication apps?

Remember the term Social Context? We have formalised this concept under the name Neighbourhood: Neighbourhoods are essentially shared and synchronised Perspectives.

Neighbourhoods (and also Social Organisms) are part of the AD4M ontology, but they are not basic entities.

A Neighbourhood is really a composition of the three basic building blocks:
1. multiple Agents,
2. each having a local Perspective that is their window into the Neighbourhood,
3. and a particular Language for sharing the links each agent adds to and removes from their local Perspective. This LinkLanguage defines Perspective links as Expressions and thus enables Agents to talk about their Perspective changes.

When creating a Neighbourhood, a specific LinkLanguage has to be selected. We envision several different LinkLanguages emerging in the future, built on different technology stacks. We already have a well-working, Git-like CRDT implementation on Holochain, which Josh and I built, and which can serve as a generic backbone for Neighbourhoods right now.

AD4M automates this LinkExpression sharing in a way that is hidden from the user or app developer. All that is needed is to either share a Perspective as a Neighbourhood (specifying the LinkLanguage) or join a previously shared Neighbourhood which will create a local proxy Perspective and install the LinkLanguage. Editing a Perspective that is entangled in a Neighbourhood will have the AD4M runtime automatically exercise the LinkLanguage to synchronise with the other Agents.
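The automatic synchronisation described above can be illustrated with a toy simulation — this is not the AD4M runtime or a real LinkLanguage, just a sketch of the flow, with invented names:

```typescript
type Link = { source: string; predicate: string; target: string };

// Toy stand-in for a LinkLanguage: broadcasts link changes to all members.
class Neighbourhood {
  private members: LocalPerspective[] = [];

  join(p: LocalPerspective): void {
    this.members.push(p);
    p.neighbourhood = this;
  }

  broadcast(from: LocalPerspective, link: Link): void {
    for (const m of this.members) if (m !== from) m.receive(link);
  }
}

// Each agent's local proxy Perspective into the Neighbourhood.
class LocalPerspective {
  links: Link[] = [];
  neighbourhood?: Neighbourhood;

  add(link: Link): void {              // what an app/UI calls
    this.links.push(link);
    this.neighbourhood?.broadcast(this, link);  // runtime syncs automatically
  }

  receive(link: Link): void {          // incoming change from another agent
    this.links.push(link);
  }
}

const nh = new Neighbourhood();
const alice = new LocalPerspective();
const bob = new LocalPerspective();
nh.join(alice);
nh.join(bob);

// Alice edits her local Perspective; Bob's converges without any app code.
alice.add({ source: "msg://Qm1", predicate: "in-channel", target: "channel://general" });
```

The real implementation of course has to handle conflicts and offline agents — hence the Git-like CRDT LinkLanguage mentioned above — but the app-facing shape is this simple: edit locally, sync happens underneath.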

So Neighbourhoods are really inter-subjectively shared semantic graphs that apps can use as generic group collaboration spaces.

For users, Neighbourhoods exist outside the scope of a specific app. That means the Communities a user sets up in Flux can be seen and used by other apps as well. There is no need for users to ask the whole community to also sign up in this new project management app — just connect another UI to your local AD4M agent and access the already existing Neighbourhood.

Just to mention this briefly here: Social DNA is code that runs on top of a Perspective’s/Neighbourhood’s links, and Social Organisms are Neighbourhoods with fixed Social DNA. In the context of a Social Organism, Social DNA is used to define what a coherent Expression of the whole is, such that — from the outside — a Social Organism can be treated like a composed super-agent, or an agent-centric DAO. Social DNA and Social Organisms deserve their own future article.

Why the complexity?

You might think we could have arrived at these Neighbourhoods in a simpler way. Why the detour and philosophical talk about Agents, Languages and Perspectives?

Breaking it down to these fundamental concepts renders the whole system evolvable. We just need a way to enable agents to talk about the core aspects of the system itself. For that purpose, there is a set of bootstrap Languages that make up the backbone of the agent-centric web AD4M implies:
1. An Agent-Language, mapping DIDs (agent addresses) to AgentExpressions
2. A Language-Language, sharing Languages (i.e. their executable code) and making it possible to resolve Language hashes to their code, and
3. A Perspective/Neighbourhood-Language, making it possible to share a Perspective to a Neighbourhood and reference that Neighbourhood through a global URI (like: neighbourhood://QmbNH3q35Yz[…]LCdLvDnqU).

With this self-referential trick, the following becomes possible:

Josh is interested in Holepunch’s Hypercore. So he writes an AD4M Language for image files that uses Hypercore as the storage back-end. When he’s done, he publishes that Language to the AD4M Language-Language, which yields an address (lang://Qm5A…). In our Flux Neighbourhood, he starts using that new Language, creating Expressions and linking these images into our channel. Once my node discovers these new URIs (Qm5A://<image>) through the synchronisation of our Perspectives, it will try to resolve that new Language through a look-up in the Language-Language and find the code, plus some meta information (it’s a Language wrapped as an Expression, so it has an author and a signature). Since Josh is my friend and in my list of trusted agents, the AD4M implementation will proceed to install that new Language automatically and resolve those Expression URIs to retrieve the images from Hypercore.
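The resolution flow in this story can be condensed into a sketch — a simplified model of the look-up and trust check, with all names and hashes invented (the real implementation verifies cryptographic signatures rather than a bare author field):

```typescript
// A Language published to the Language-Language: executable code wrapped
// as an Expression, so it carries an author and (in reality) a signature.
interface LanguageExpression {
  author: string;  // DID of the publishing agent
  code: string;    // the Language's executable code
}

// Toy Language-Language: resolves a Language hash to its Expression.
const languageLanguage = new Map<string, LanguageExpression>([
  ["Qm5A", { author: "did:key:zJosh", code: "/* hypercore image language */" }],
]);

const trustedAgents = new Set<string>(["did:key:zJosh"]);
const installed = new Set<string>();

// On encountering an unknown URI scheme: look up the Language, check the
// author against the trusted-agent list, and install it automatically.
function resolveLanguage(hash: string): boolean {
  if (installed.has(hash)) return true;
  const lang = languageLanguage.get(hash);
  if (!lang) return false;                      // unknown Language
  if (!trustedAgents.has(lang.author)) return false;  // untrusted author
  installed.add(hash);                          // install the Language code
  return true;
}
```

After `resolveLanguage("Qm5A")` succeeds, URIs like `Qm5A://<image>` become resolvable — the node has effectively learned a new data type at runtime, gated only by the trusted-agent list.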

So, conceptually speaking, we only require nodes to understand the core ontology of Agents, Languages and Perspectives (in their fleshed-out form) to create an interoperable and evolvable, distributed social space. We can boot Neighbourhoods implemented on any storage technology and any other specialised Expression Languages from there.

With that property, AD4M pragmatically spawns a browsable, adaptable, distributed, agent-centric web — along with being a framework for apps built on top of it.

Feel free to join the AD4M Discord and try the AD4M app development tutorial to learn more about building next-gen communication infrastructure with AD4M.


Co-Founder and Chief Architect at Coasys, inventor of AD4M. Former Holochain core developer. Working on tools that enable and promote social organisms.