The Maslo platform is an ever-growing network of signal processors that collect, synthesize, and emit signals at different memory scales.
It is roughly modeled on current notions of biological memory and on a variety of recent machine learning methods. On the machine side, I have been using "probably approximately correct" (PAC) learning methods for a while, with various ensembles of supervised and unsupervised learners. The human-science side comes from my long-time interactions and partnerships with behavioral scientists, decision scientists, and social scientists. I tend to fall into the behaviorist camp rather than mental or psychoanalytic approaches, mostly for pragmatic engineering reasons. The platform isn't used to solve deterministic problems or predict outcomes; it is about interpreting signals, noticing behavioral changes and anomalies, and offering up signals (which can be grouped into prescriptions and strategies) to alter behavioral patterns.
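As a loose illustration only (not Maslo's actual pipeline), the "noticing behavioral changes and anomalies" idea can be sketched as a rolling z-score check over a behavioral time series. The function name, window size, and threshold here are all hypothetical choices for the sketch:

```python
import statistics

def flag_anomalies(signal, window=7, threshold=2.5):
    """Flag points that deviate strongly from their trailing window.

    A point is anomalous when it sits more than `threshold` standard
    deviations from the mean of the preceding `window` observations.
    """
    anomalies = []
    for i in range(window, len(signal)):
        recent = signal[i - window:i]
        mean = statistics.fmean(recent)
        stdev = statistics.stdev(recent)
        if stdev > 0 and abs(signal[i] - mean) / stdev > threshold:
            anomalies.append(i)
    return anomalies

# A stable hypothetical daily check-in signal with one sudden shift.
daily_checkins = [5, 6, 5, 7, 6, 5, 6, 6, 5, 20, 6, 5]
print(flag_anomalies(daily_checkins))  # → [9]: only the spike is flagged
```

A real system would of course work over many correlated signals and longer memory scales, but the shape of the problem is the same: detect the deviation, then decide what signal to emit in response.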
The app is just one small interface we deployed to test ideas with actual humans and machines, and to give us an understandable artifact for investor, partner, and company conversations. As for the current app, it is a wonderfully simple thing that lets us easily run multivariate tests on interface ideas and messaging. And with our small but engaged user base, we've been able to have detailed conversations with users and learn a great deal.
Read and Play
Maslo AI architecture overview: https://goo.gl/68Mab7
Signal processing demos
- Here are a few public prototypes that demonstrate a very small set of our signal processing capabilities:
- Journal analysis: https://www.wolframcloud.com/objects/1440afad-4346-4d56-9ef7-093b58b7c2f6
- Face landmarks, mesh and polygon representation: https://www.wolframcloud.com/objects/2b12e7a5-5f5e-46f5-9702-f1482d3c5ad6
This is a presentation describing how we think about the language of the Maslo visual system (the persona). In defining a visual interface for AI, we look at trade-offs among variety, identity, emotion, and symbolism of the interface. In our opinion, flat or 2D visuals give us the most flexibility, but any of the visuals referenced in the video can be created. Happy to talk more about this for those who are curious, as the video doesn't have a voiceover.
Useful blogs and concepts
- Models of human computer relationships: https://medium.com/maslo/models-of-human-computer-relationships-a99a9b887b48
- How to build and grow an AI: https://medium.com/maslo/how-to-build-and-grow-an-ai-1c0906a844b8
- The shape of empathy: https://medium.com/maslo/the-shape-of-empathy-92520dcf3829
- There’s a ton more here: https://medium.com/maslo
We have ideas for many other interfaces and ambient computing scenarios. We'll tackle these as our current AI growth gives us reason to. While we can detect cats in images, recognize a song from sounds, play Go like AlphaGo, paint digital portraits, and do everything everyone else can with all the GANs, CNNs, NLP models, k-NNs, etc., that is just simplistic deterministic problem solving that is highly fragile. In other words, it's not that useful for what humans are going to want from autonomous car drivers, AI financial advisors, AI nurses, AI teacher augmentation, and everything else that is actually involved in being a human and living a better and better human life.