05 reflect/critique

My own understanding of the technicalities and inner workings of AI is not particularly robust, so I’m working from mostly surface-level knowledge and assumptions.

The future I’ve suggested doesn’t feel particularly far away, technologically speaking; with enough data you can train a machine to recognize and replicate certain speech patterns. Huge strides are being made in natural language processing, though it may be some time before it becomes more or less foolproof.
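To make that claim a little more concrete, here is a minimal sketch of the idea at its crudest: a Markov-chain text generator that learns which words tend to follow which in a body of someone’s writing, and then echoes that phrasing back. The `chat_history.txt` file is a hypothetical stand-in for a person’s collected messages; real NLP systems are far more sophisticated, but the basic principle of learning patterns from data is the same.

```python
import random
from collections import defaultdict

def build_model(text, order=2):
    """Map each sequence of `order` words to the words that follow it."""
    words = text.split()
    model = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        model[key].append(words[i + order])
    return model

def generate(model, length=30):
    """Walk the model to produce text that mimics the source's phrasing."""
    key = random.choice(list(model.keys()))
    output = list(key)
    for _ in range(length):
        candidates = model.get(tuple(output[-len(key):]))
        if not candidates:
            break
        output.append(random.choice(candidates))
    return " ".join(output)

# Hypothetical corpus: a person's tweets, chat logs, emails, and so on.
corpus = open("chat_history.txt").read()
print(generate(build_model(corpus)))
```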

There are these technical limitations that will either allow this “Life After Death” future to materialize or prevent it from doing so, but there are also ethical and moral arguments to wrestle with, a space that designers may find more interesting. As a medium, bots aren’t particularly out of reach; it isn’t difficult to set up a simple one for Twitter or Slack (a rough sketch follows below). Still, they can feel like complex, inaccessible things, which is why it’s so much easier to work with them through a speculative or futuring lens.
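To give a sense of how low that barrier is, here is a minimal sketch of a Slack “bot” that does nothing but post a message through an incoming webhook. The URL is a placeholder; Slack issues a real one when you enable the Incoming Webhooks integration for a workspace.

```python
import requests

# Placeholder webhook URL; Slack generates a real one when you add an
# "Incoming Webhooks" integration to a channel.
WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"

def post_message(text):
    """Send a plain-text message into the channel via the webhook."""
    response = requests.post(WEBHOOK_URL, json={"text": text})
    response.raise_for_status()

post_message("Hello from a very simple bot.")
```

A Twitter bot follows the same shape: a script that authenticates against the platform’s API and posts on a schedule or in response to triggers. The barrier is less the code than deciding what the bot should say.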

It feels a bit useless to work against a future that contains AIs. We’re already living in it, and it would be foolish to write off conversational chatbots as just a trend. They are trendy, and we’re sitting at the precipice of the Peak of Inflated Expectations, ready to fall into the Trough of Disillusionment at any moment.

One of the biggest challenges is figuring out how to design these interactions. Conversing with AI is a rather awkward ritual, and often requires the human to work around the peculiarities of talking to a robot. Should it be seamless, or should users be okay with the fact that they are interacting with a machine? What forms will AI take: physical like Alexa, or purely digital like Siri? Many already interact with AIs on a daily basis, but would people become uncomfortable once it takes on a less abstract or less cylindrical form? What about a human face? What is artificial intelligence’s identity? Does it need a particular age or gender? Does it even need to be human? Does it even need an identity? Should AI be able to stray from its prescribed script?

I had been primarily thinking about AI as the replacement of an individual after they passed away, not as something that would be active while its “original” version is still alive. Could they become a sort of super assistant, attending all the meetings you don’t want to attend and standing in line at the DMV when you need your license renewed? Who influences whose development? Could they make decisions for us? Would botnet attacks become even more malicious or larger in scale?

I can’t say firmly whether this is a future I do or don’t want to be in. More than anything I’ve accepted AI as a fact of life; it’s here, and there’s no real point in trying to deny that. There truly is a lot of potential for good as well as a slew of ethical and moral dilemmas, and it’s interesting to see how different designers are approaching it as a medium.