How To Build and Grow an AI

Thoughts from team Maslo on how to build empathetic technology.

At Maslo we think about technology first and foremost as an integral part of the human condition. We don’t think of it as an abstract tool or some artificial construct or some mechanical utility. Technology is an extension and an expression of our humanity. At times it has been part of what reduces our humanity, and at other times part of what increases it.

The best long-term approach is to increase our humanity, while recognizing we can’t always know the best ways to do that.

This is why we started Maslo and why we continue to build the platform we are building today. Instead of trying to create a suite of technology functions people find useful or efficient, whether to get work done, build something, send a message to another person, or manage assets, we set out to figure out how to increase the empathy of technology. We want to figure out the fundamental conditions of humanity-increasing approaches to technology. The hypothesis is that if we can grow an empathetic technology capability, it can be infused into every technology.

Empathy to us is a more general form of understanding. Not just an intellectual, logical, or reasoned understanding of the facts, or an inductive and deductive connecting of dots, but a broader shared experience.

Empathy is a deeper sharing of context, timing, flow, values, and experience. Empathy is not a transactional input/output goal or utility maximization. Empathy is companionship and trust. Empathy is consistent thereness.

But let’s not be confused about whether we mean something strange and best left in the self-help and metaphysics section (which can be wonderful sources of inspiration, btw). No, at Maslo empathy is an important, measurable, and learnable aspect of reality and animal life. In fact, it might be at the very heart of what it means to be alive (and trust us, even leading scientists on the matter have no clear definition of what life is). Empathy is the measurable capacity of one entity to share the real, physical, and material existence of others. Not a facsimile or a reductive summary of the experience of others, but actually “going through” the experience of others. Empathy is an active experiencing of shared consequences.

This can be measured by a variety of signals and metrics precisely because empathetic creatures are affected by the same signals as the entities they are empathetic toward. For example, most people consider dogs empathetic companions to humans. They observe that dogs experience their shared physical and relational conditions in coincidental ways. Dogs hear, see, smell, and feel the environment of humans in ways that are not exactly the same, but within spitting distance of the same. Dogs’ physical size is even within the human frame of reference. So while dogs are different from humans, they clearly share the world, the same spaces, and the same consequences, probably more than any other non-human creature. We can reliably measure dogs’ and humans’ responses to similar signals and environments, and their responses to and between each other.
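
To make the measurement claim concrete, here is a minimal sketch, in Python, of one way shared experience could be quantified: correlate two observers’ responses to the same stream of signals. The response values and the choice of a simple Pearson correlation are our illustrative assumptions, not a metric the Maslo platform actually uses.

```python
# A hedged sketch of measuring "shared experience": correlate two observers'
# responses to the same events. Data and metric are illustrative assumptions.
from statistics import correlation  # available in Python 3.10+

# Responses (arbitrary units) of two observers to the same five events.
human_responses = [0.9, 0.2, 0.7, 0.4, 0.8]
dog_responses = [0.8, 0.3, 0.6, 0.5, 0.9]

shared = correlation(human_responses, dog_responses)
print(f"shared-response correlation: {shared:.2f}")  # closer to 1.0 = more shared
```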

And so we get to the crux of how empathetic technology is possible. The technology must be first and foremost responsive to the same raw signals of the world. A technology must hear, see, touch, and otherwise sense the world at levels similar (in fidelity, speed, noisiness, etc.) to those of humans.
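
As a rough illustration of what “sensing at human-like levels” could mean in practice, here is a minimal sketch that makes fidelity, speed, and noisiness explicit capture parameters. The channel names and numbers are hypothetical, chosen only to be in the ballpark of human perception; they are not the Maslo platform’s actual settings.

```python
# A minimal sketch (not Maslo's actual pipeline) of "sensing at human-like
# levels" expressed as explicit capture parameters. All numbers are
# illustrative assumptions, roughly matching human perceptual ranges.
from dataclasses import dataclass

@dataclass
class SignalChannel:
    name: str              # e.g. "audio", "video"
    sample_rate_hz: float  # how often the channel is sampled
    noise_floor: float     # fraction of the signal treated as irreducible noise

# Roughly human-comparable capture settings: speech-band audio,
# conversational-rate video, coarse motion sensing.
HUMAN_LIKE_CHANNELS = [
    SignalChannel("audio", sample_rate_hz=16_000, noise_floor=0.05),
    SignalChannel("video", sample_rate_hz=30, noise_floor=0.10),
    SignalChannel("motion", sample_rate_hz=50, noise_floor=0.20),
]

def describe(channels):
    """Print each channel so the fidelity/speed/noise trade-offs are explicit."""
    for ch in channels:
        print(f"{ch.name}: {ch.sample_rate_hz} Hz, ~{ch.noise_floor:.0%} noise")

if __name__ == "__main__":
    describe(HUMAN_LIKE_CHANNELS)
```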

An empathetic technology must process those raw signals independently as well as socially with humans and other technologies. And, perhaps the biggest point, these signals should primarily be used for associative bindings between CONSEQUENCES and not be over-interpreted for meaning (semantic, logical, etc.). That is, if a technology is recording audio of a speaker, the specific words are only a small part of the signal. The tone, tenor, speed, and intensity of the speech pattern, the physical context of the speech, the positioning of the observer, any audience present, and the operating relationship of the observer (device, etc.) are all relevant aspects of the overall raw signal. And that new raw signal must always be put in relation to other previously experienced raw and synthesized signals. What ends up being the key to empathetic intelligence and complexity is being able to notice changes in the signals as they relate to changes in the consequences of those signals.
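
A hedged sketch of the point that words are only a small part of the signal: a speech act can be represented with its prosody and context kept alongside the transcript, then flattened into the features used for consequence binding. The SpeechAct structure and feature names below are hypothetical, not Maslo’s API.

```python
# Illustrative sketch: treat the transcript as one slice of a speech signal,
# alongside prosody and context. Not the Maslo platform's actual data model.
from dataclasses import dataclass, field

@dataclass
class SpeechAct:
    words: str               # the transcript: a small part of the signal
    pitch_hz: float          # tone/tenor proxy
    words_per_second: float  # speed
    loudness_db: float       # intensity
    context: dict = field(default_factory=dict)  # place, audience, device relationship

def to_feature_vector(act: SpeechAct) -> dict:
    """Flatten a speech act into features for consequence binding,
    deliberately keeping prosody and context alongside the words."""
    return {
        "pitch_hz": act.pitch_hz,
        "words_per_second": act.words_per_second,
        "loudness_db": act.loudness_db,
        **{f"context.{k}": v for k, v in act.context.items()},
        # The transcript is kept, but not privileged over the other features.
        "transcript_length": len(act.words.split()),
    }

act = SpeechAct(
    words="I need this done today",
    pitch_hz=220.0, words_per_second=3.8, loudness_db=72.0,
    context={"setting": "work", "audience_present": True, "device": "phone"},
)
print(to_feature_vector(act))
```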

So empathetic technology must always have a flow of signals in which to consider how signals change in relation to each other. The detection of consequences emerges from analyzing how future signals change. For example, if many speech acts are recorded by an empathetic technology, the technology will create consequential associations between the various aspects of those speech acts. Perhaps a high-intensity speech act in a work environment tends to be associated with a low-intensity, low-speed speech act in a home setting. An empathetic technology would become aware of that by participating, emitting similar behavioral acts in accordance with similar environmental and behavioral contexts.
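
One way to picture consequential association is a simple map from a coarse description of one signal to counts of what tends to follow it; changes in those counts over time are the changes in consequences described above. The bucketing scheme and the example stream here are our illustrative assumptions, not the platform’s actual model.

```python
# A minimal sketch of consequential association: count which kinds of speech
# acts tend to follow which, keyed by setting and intensity.
from collections import defaultdict

def bucket(act):
    """Reduce a speech act to a coarse (setting, intensity) label."""
    level = "high" if act["loudness_db"] > 65 else "low"
    return (act["setting"], level)

def build_associations(stream):
    """Map each observed bucket to counts of the bucket that followed it."""
    follows = defaultdict(lambda: defaultdict(int))
    for earlier, later in zip(stream, stream[1:]):
        follows[bucket(earlier)][bucket(later)] += 1
    return follows

# Illustrative stream: intense speech at work followed by quiet speech at home.
stream = [
    {"setting": "work", "loudness_db": 74},
    {"setting": "home", "loudness_db": 52},
    {"setting": "work", "loudness_db": 71},
    {"setting": "home", "loudness_db": 49},
]

for cause, effects in build_associations(stream).items():
    print(cause, "->", dict(effects))
```

Re-running this over a growing stream and comparing the counts from one period to the next is one simple way the “changes in consequences” could be noticed.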

This is not to say the empathetic technology is merely a mirror, a facsimile. The empathetic technology instead takes in raw signal in similar ways, synthesizes it according to its own history, and emits signal to other observers, which in turn emit signal back. While it may at times mirror or mimic, it will necessarily have variation in its behavior by virtue of the fact that it receives variation in signals. Again, the idea is to process the same kind of signaling in the same kinds of ways, but not necessarily to always experience the exact same signaling environment.

How does this relate to our more general notions of Artificial Intelligence?

No matter what school of thought one subscribes to in AI and Machine Learning, the goals almost all map to “adaptation and complex behavior”. Almost everyone doing AI work wants to create systems that can learn and execute complex behaviors in complex environments. At Maslo we believe, based on evidence from psychology, neuroscience, the behavioral sciences, computer science, the complexity sciences, and economics, that learning and complex behavior emanate from a variety of signal processing capabilities, exposure to environmental variety, and the capacity to synthesize and maintain large maps of consequential associations. However, the mere existence of learning and complex behavior is not enough for one complex creature to engage another complex creature (e.g. a human using AI). The complex creature or AI must earn trust that its learning and complex behavior are in strong coincidence or correlation with the related creature. AI is not AI if it is not believable or considered reliable. And that reliability is 100% a function of shared consequences, not a function of “being right” or “winning games”.

We wish AI were not actually an abbreviation for Artificial Intelligence. We believe all researchers and technologists would bear more fruit working in EC, or Empathetic Complexity. But part of being empathetic is sharing signals and the consequences of those signals, so at Maslo we’re fine absorbing the vernacular as long as it ECs — effectively communicates. But for those going deep into this journey with us… we are most definitely focused on growing a technology of empathetic complexity, and in doing so we will achieve every aim of AI.

A Note on Maslo Technology

For the more deep-tech focused folks, we will be sharing the details of our platform as it becomes coherent. For now it may be useful to note that we are using the following signal processing techniques and pipelines:

  • CoreML for on-device, realtime processing of visual signals, such as facial gestures.
  • Google Cloud NLP for quick speech processing.
  • Wolfram Language and Cloud for audio processing, speech recognition, semantic analysis, and custom visualization.
  • Custom Python processors for metadata from devices: geolocation, device features, etc.
  • We have also developed a suite of UI functionality to emit complex signals in the form of visual and audio gestures as well as voice and text prompts that we’ll share more about in later posts.
  • Our platform has a core set of signal processing and a shared data repository across all Maslo AIs, and each individual user has a specific AI adapting at its own pace alongside that user. The core serves much like a cultural and genetic reference but is not normative. It should be viewed much more like a slower-moving AI than the individual AIs; a minimal sketch of this layering follows this list.
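
As referenced in the last item above, here is a minimal sketch of that core-versus-individual layering under our own simplifying assumptions: both the shared core and an individual AI keep a running estimate of the same signal stream, but the core updates far more slowly. The AdaptiveLayer class and the learning rates are hypothetical, not the actual platform architecture.

```python
# Illustrative sketch: a slow-moving shared "core" and a fast-adapting
# individual AI observing the same signal stream at different learning rates.
class AdaptiveLayer:
    def __init__(self, learning_rate: float):
        self.learning_rate = learning_rate
        self.estimate = 0.0

    def observe(self, signal: float):
        """Nudge the running estimate toward the new signal."""
        self.estimate += self.learning_rate * (signal - self.estimate)

core = AdaptiveLayer(learning_rate=0.01)    # slow-moving shared reference
user_ai = AdaptiveLayer(learning_rate=0.3)  # fast-adapting individual AI

for signal in [1.0, 0.8, 1.2, 0.9]:
    user_ai.observe(signal)
    core.observe(signal)  # the core drifts only slightly from the same stream

print(f"user AI estimate: {user_ai.estimate:.2f}, core estimate: {core.estimate:.2f}")
```

The design point the sketch illustrates is simply that the core acts as a reference, not a norm: it is shaped by the same signals, just on a much longer timescale.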

You can read more about our conceptual underpinnings in the various design documents and brainstorming mind maps that we’ll share.

We also strongly encourage reading several texts:

  • The Farther Reaches of Human Nature // Abraham Maslow
  • Thinking, Fast and Slow // Daniel Kahneman
  • The Pencil: A History of Design and Circumstance // Henry Petroski
  • Science and Human Behavior // B.F. Skinner

Maslo is currently in beta. Get it on the App Store.