Published in Your Virtual Self

Collaborative painting by the friends of Maslo

The Algorithms Won’t Save Us

A human approach to growing computational empathy

I’m a bit puzzled. I lean back in my chair and go over the past hour in my head. We just concluded a meeting with a large organization that had an interest in growing empathy in technology. It all started with an email I sent weeks earlier.

I really dig what you’re working on. We share your view that technology needs to adopt more empathy and evolve from the transaction-based models we have today because, as we all know, they’re not healthy or even remotely human. Much of our work at Maslo takes inspiration from psychology, like Abraham Maslow’s theories and research on human motivation, and Jean Piaget’s work on cognitive development in children. Along with the notion that all of this should be fun: learning is both fun and play.

On the technical side, one of the techniques we are using is called signal processing. In our opinion, much of what people define as ‘AI’ today focuses too much on computational linguistics. Language is valuable, but it is really only one-tenth of what we use to communicate. There’s also the notion of AI overpromising… you can see this in the recent press around self-driving cars. That is to say, one can’t shortcut learning. Learning is a process in itself, and the role of AI is not to solve our problems but to enhance our own creativity and awareness.
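As an aside, here is a rough, hypothetical sketch of what treating communication as a signal rather than as words can look like in practice. The synthetic “voice”, the numbers, and the two features (frame energy and zero-crossing rate) are invented for illustration; this is not Maslo’s actual pipeline, just a minimal example of pulling non-verbal cues out of a waveform.

```python
import numpy as np

def frame_features(waveform, sample_rate=16_000, frame_ms=25):
    """Split audio into short frames and compute two non-verbal cues per frame."""
    frame_len = int(sample_rate * frame_ms / 1000)           # samples per frame
    n_frames = len(waveform) // frame_len
    frames = waveform[: n_frames * frame_len].reshape(n_frames, frame_len)

    # Energy: how intense each moment is, regardless of the words being said.
    energy = np.mean(frames ** 2, axis=1)
    # Zero-crossing rate: a crude proxy for pitch and voicing.
    zcr = np.mean(np.abs(np.diff(np.sign(frames), axis=1)) > 0, axis=1)
    return energy, zcr

# A synthetic one-second tone stands in for a real recording.
t = np.linspace(0, 1, 16_000, endpoint=False)
fake_voice = 0.5 * np.sin(2 * np.pi * 180 * t) * np.hanning(16_000)
energy, zcr = frame_features(fake_voice)
print(f"{len(energy)} frames | peak energy {energy.max():.3f} | mean zero-crossing rate {zcr.mean():.2f}")
```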

Their reaction to the Maslo approach indicated we might have a common goal: “This looks super interesting and mega promising.” But at this point, several conversations later, I begin to see that their approach may be disingenuous toward the overall goal of making technology that experiences the human condition and helps us change in response.

I lean forward and look toward my desk. My phone begins buzzing wildly and the screen shows a familiar face and name: “Russell Foltz-Smith”. He was with me on the call, and no doubt he came to the same conclusion I did and wants to recap the last hour.

A smallish theory of human-like AI

It is clear to us that there is a very large and wrong assumption in the world about what is needed to create ‘empathetic technology’. Empathy is not an emotion but an awareness of shared consequences, outcomes, and perspective. To be a companion literally means to share in the experience of another. Dogs do this very well. Empathy, from a learning perspective, is not some human-only trait but an evolutionary response toward developing complexity. Complexity is valuable because it counters one of the basic laws of any thermodynamic system: entropy. To combat entropy (decay), you add more to the system. So when it comes to developing software that is computationally efficient at sensing the world, we arrived at a set of simple theories.
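One way to read the entropy point, before getting to those theories, is information-theoretic: the more independent signals a system takes in, the less uncertainty remains about the state of the world it is trying to share in. The toy sketch below is purely illustrative (a made-up three-bit “world” and noisy sensors, not anything we ship); it just shows that adding to the system drives the remaining uncertainty down.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_bits = 200_000, 3                 # the "world" has 3 hidden bits (8 situations)
state = rng.integers(2, size=(n_samples, n_bits))
state_ids = state @ (2 ** np.arange(n_bits))   # encode each situation as 0..7

def entropy(symbols):
    """Empirical Shannon entropy in bits."""
    _, counts = np.unique(symbols, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def conditional_entropy(targets, observations):
    """Remaining uncertainty about the targets after seeing the observations."""
    h = 0.0
    for o in np.unique(observations):
        mask = observations == o
        h += mask.mean() * entropy(targets[mask])
    return h

# Each sensor reads one bit of the state, but flips it 20% of the time.
readings = state ^ (rng.random((n_samples, n_bits)) < 0.2)

for k in range(n_bits + 1):
    obs = readings[:, :k] @ (2 ** np.arange(k))   # combine the first k sensors
    print(f"{k} sensors -> {conditional_entropy(state_ids, obs):.2f} bits of uncertainty left")
```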

The first of these is the expectation that non-human things can be made to do human things without going through human-like training. We think this way because non-humans share some of our behaviors. A computer does count as we do. A dog does notice sounds and people much as we do. So we misattribute how this behavior comes about: we assume it is a built-in property and ability of the non-human thing, and we come to think we have those built-in abilities too. We forget just how much training we have to do even for basic things like speaking sentences, counting past two, remembering things outside our immediate surroundings, and so on. To sum this up more simply: you cannot shortcut the learning process.

Feeding the algorithms with test data is the wrong approach

If you took a videotape of things happening out the window, it would be of no interest to physical scientists. They are interested in what happens under the exotic conditions of highly contrived experiments, maybe something not even happening in nature. — Chomsky // An interview on minimalism

The vast majority of technology companies today fail to acknowledge that test data or fake data leads to bad assumptions and bias in modeling, which defeats the entire purpose of the modeling exercise in the first place. Let me use an analogy.

Say we were to develop an algorithm for a self-driving car on a closed course. It’s a success… the car can drive the course at a safe speed and complete it… then what? Let’s put the car on the road to drive! But wait a minute… there are now very complex and dynamic humans in the picture. What does the algorithm do when a kid is crossing the street? What does it do when the driver keeps tapping the brakes due to a lack of trust?

The question now becomes: how does that first algorithm help the car show acknowledgment of the pedestrian on the road? How does that algorithm help the person in the driver’s seat develop a sense of trust? Because we changed the picture, the initial algorithm we developed is useless.

The key point, and let me make this very clear: it’s not about the algorithms; those are easy. The value is in the way the algorithms interact with real humans in the real world to effect behavior change in a practical application. Creating a shortcut does not get you there faster; it avoids the initial problem entirely.

I’m in my apartment and the buzzing continues. I answer the phone. “YO! These people have no idea what they’re doing… we could help them, but they have to roll back a lot of their thinking,” Russ exclaims. “Part of the problem is that their theories are ingrained in their need for self-preservation… when you spend your entire life studying computational linguistics, it won’t be so easy to walk back those ideas.”

Learning and knowledge are simply increasing the fidelity of that which a system mirrors. A more faceted mirror, if you will.
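To make the mirror metaphor slightly more literal, here is one last toy sketch (invented data, not a Maslo model): a system that keeps only a few principal directions of its experience reflects that experience poorly, and every added facet raises the fidelity of the reflection.

```python
import numpy as np

rng = np.random.default_rng(42)
# A correlated, 20-dimensional stream of "experience" stands in for the world.
world = rng.normal(size=(500, 20)) @ rng.normal(size=(20, 20))
world -= world.mean(axis=0)

# The principal directions of the data act as the mirror's facets.
_, _, facets_all = np.linalg.svd(world, full_matrices=False)

for n_facets in (1, 5, 10, 20):
    facets = facets_all[:n_facets]
    mirrored = world @ facets.T @ facets          # project onto the facets, reflect back
    error = np.mean((world - mirrored) ** 2)
    print(f"{n_facets:2d} facets -> mean reconstruction error {error:.3f}")
```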
