Should you really make an AI for it?

Jordan Aiko Deja
Published in AIPilipinas
11 min read · Mar 25, 2018

That was the title of the talk I gave during the second AI Night, sponsored by Indigo Research, Launchgarage, and AI Design PH. After all these shameless photos of myself (which you can skip), you’ll get to the discussion: the key points I raised during the talk on what to consider before and while designing AI products.

Here’s the pub they made for me.

Did you notice how we all took advantage of “Indigo” matching Pantone’s color of the year for 2018?

I cannot overemphasize how much that color was everywhere that night. Even the tokens, those Parker pens (thank you, btw!), were indigo. Luckily, the ink was black.

I mean, even the venue was in the same color tones.

I like the setup though. Their projector was connected via Chromecast, and we just had to display wirelessly. Pz!
The rest of the venue had the same tones: industrial with a pinch of purple. See those white boxes in the middle right? Those are fridges with bottles of beer. And yes, anyone could just grab them. This co-working space is a good place for beer and work though.

But moving on, I want to talk about the talk itself. You see, there’s this FB group called “AI Design”. Initially I thought it was a group for people who wanted to talk about Design and AI, more on the computationally creative side. It was not. It was about the Design considerations when building AI products. Notice how I used a capital-D Design instead of the usual one. It is a group that focuses on and emphasizes the role of UX in the development of AI.

Here’s one of the only two photos taken of me during the talk. I brought around seven friends, and either they were paying close attention to my talk, or their phones were charging, or, if they weren’t charging, their phones had broken cameras, haha!

You see, with the AI hype and the data science buzzwords, along with IoT, blockchain, and many others, you can’t help but notice how everyone is adding “AI” or “ML” or “Data Scientist” to their LinkedIn titles. But that’s not a point I want to contest. I’m here to write about how so many people are riding the hype that they’re somehow losing the true purpose and meaning behind it.

AI Design Night 2 was entitled “Future Design” because the intended focus was on how to maintain a sustainable industry, where AI engineers design AI that is data-driven and built for the long term.

Like the team at Google that discusses UX + AI and Human-Centered Machine Learning, I think the goal of the AI Design group in the Philippines is to shape an industry that creates AI solutions to real, solvable problems, and not to trivial, expensive, and almost nonexistent ones.

Back to my talk. Like most of my talks, I began it with a quote from my favorite Turing Awardee…

This guy. He’ll always be one of my idols. “The best way to predict the future is to invent it.” Awesome. Sauce.

Don’t worry, I’m not really saying I wanna invent the future. I just wanna share the three takeaways from my talk, which were somehow inspired by the words of Alan Kay himself. Maybe these words can somehow influence the future of AI.

What are these three takeaways? Well, for the first time, I decided to share these takeaways, which have been inside my head for the longest time. I’ve been doing a lot of reading and research, and I’ve noticed a great discrepancy between the developments in research/academe and the demands of industry, and maybe this is why AI, no matter how promising, has yet to prove itself an ultimate game-changer. In fact, how it might become a game-changer might affect the entire future ahead. But I don’t want to speak in conclusions, since we’re all just enjoying whatever benefits this AI hype has been giving us.

AI kills jobs.

You’ve heard it. There were even quizzes on Facebook that let you know within six minutes if your job will be replaced by an AI in the near future. While I’ve done only very limited work in AI, I think this shouldn’t be the direction AI should go. Specifically in the Philippine setting, there is now this dilemma: businesses want to save a lot of money because it’s far cheaper to invest in AI products than in night-shift premiums, health benefits, and other allowances for employees (read: robots > human employees?). And yet, when these businesses invite AI experts so they can learn and model their processes, with the hope of developing AI products that save them millions, who do these experts meet with to learn those processes and models? That’s right, the actual agents and employees who are doomed to be replaced once they hand over their practices to these AI experts. It’s true: one way or another, AI developers will learn their processes, and what these employees are doing is simply a form of delaying the inevitable. Not to mention, the PH already has a team of pioneers and experts who have done a lot of great work in Natural Language Processing, so developing AI agents will be a known task for them. From the UX perspective, we have yet to know how naturally these machines can interact with humans when they are actually deployed into production. But with this thought, I propose my first takeaway on AI and its design:

Takeaway 1: On Costs.

Building AI products may reduce costs. But why do we need to cut costs? There’s a lot of money in BPO. We’re not in the Great Depression anymore. Are the savings worth the bad customer experience?

So perhaps, from this first key takeaway, and as controversial as it may sound, we can propose an objective to change how people see AI. This way, we can design new ways for human agents to interact with their artificially intelligent counterparts that do not actually threaten their work.

AI Design objective 1: Build revenue-generating AI products that are sustainable and do not kill jobs.

I got this inspiration from another academic idol, Prof. Shen of the NUS HCI Lab. Like Alan Kay, he said some words that forever struck me and somehow made me realize what my future plans are, career-wise. Allow me to share them with you as well.

Computers are getting smarter and smarter. Computers can (sometimes) beat humans. However, humans + computers can beat the computer alone. In HCI, it is important to study how to form the best human + computer teams (Zhao, 2017).

I think this is the reason why workers feel threatened by AI: because they were oriented to believe that they should be, that their degrees are not future-proof, and that their work can be easily replaced. It’s all about knowledge management. But what if we reframe things in ways that do not threaten them but rather augment and improve the quality of their work? What if we give them opportunities so that they could actually look forward to going to work and become better employees? Better businessmen and women?

Instead of asking questions like “can you build a computer that will draw for me?”, we could ask questions like “can we develop a tool that will allow me to decide on my own art and let the computer do the repetitive strokes?”. There have been decades’ worth of effort on designing artificially creative computers, but maybe this is the reason why none of them have been totally disruptive. If you were an artist, you would feel threatened that a computer can paint like Van Gogh. But as one of my academic mentors once asked me, when we were discussing computer-aided music composition: computers already have that skill (to compose music, you name it, it can follow one’s patterns and styles), but somehow, in the end, there might just be no X factor. That extra factor that gives you the wow effect or those chills that tingle the finest hairs on your body. Maybe, instead of robbing artists and musicians of their creative expression and the experiences they share with their viewers and listeners, we could actually empower them and make them more efficient? Maybe, instead of threatening regular workers with losing their jobs, they can be told that we can build products that help them arrive at decisions earlier and be more efficient in their line of work generally?

Going back to the questions earlier, you somehow realize that you need to add a user-centric approach to designing an AI. In the end, do we really want to be lazy, or do we really just want to be efficient in our work, so we could get that approval and that appreciation we’ve been working hard for?

Takeaway 2: On Needs.

We are in an era where we can build anything. The bigger question is: what can we build that can help us? That can make us happy? Are these aligned with our needs?

Another recent fad here is that, since AI and data science became buzzwords, almost every event, hackathon, and meetup has been built around those two words. Startup competitions and similar hackathons have always been on the lookout for AI topics, prolly because they’re too busy organizing such events to come up with their own ideas. How can you build AI to convince people to always wash their hands? How might we develop IoT so that I could play Fruit Ninja and chop veggies at the same time? And at the end of the day, the industry just gets saturated with too many AI enthusiasts who have no concrete, disruptive idea on how to change the way of things. There just seems to be too much noise. And with that, I present another objective following the second takeaway:

AI Design objective 2: Build an AI that addresses a real human need. [Also consider] If a human can’t perform the task, then neither can an AI.

In another visiting professor talk, I heard our undergrad students ask about the possibility of developing Jarvis-like systems and their notable interfaces. The visiting professor, Tony, works on immersive spaces and collaboration. One of the students asked him if it was already possible to develop interfaces like the ones we see in the Iron Man movies.

Photo of Tony Tang during his visiting lecture at De La Salle University last February 2018. He presented ongoing work on collaborative spaces and immersive environments, and along with those, the most recent questions he and his team are trying to answer.

“We could build Jarvis[-like systems] anytime. The even better question is: is it the right thing to do?”

The word right here does not refer to the ethics portion (yet… we will arrive there in a while) but more to: should you really be designing that? And this is where my title begins to make sense. Based on recent observations, I’ve encountered a lot of AI ideas that are cool but useless, and there is cool AI that is not useless because it addresses actual human needs.

Hologram screens are a thing. With Kinect and other similar sensors, we can already develop Star Wars lightsaber-themed rhythm games. Trust me, we’ve all seen them. And we all think they’re cool, very very cool. But they’re really not that commercially viable, right? They’re not hitting the stores or crashing supermarkets. I think the hoverboard did a better job. Maybe because they don’t really address a human need? Perhaps we could look at a different angle: a smartwatch that absorbs electromagnetic waves from objects we touch and provides a guide on how we could use them better, or on how it can augment our daily routines?

If I’m not mistaken, Gierad Laput works on utilizing existing energies toward building ubiquitous and seamless interactions. By following this approach, we actually get to develop something cool and useful. And, from the perspective of the folks on the Google Design team, we avoid developing something expensive and data-heavy to solve an almost trivial problem.

Machine learning won’t figure out what problems to solve. If you aren’t aligned with a human need, you’re just going to build a very powerful system to address a very small — or perhaps nonexistent — problem (Lovejoy, 2018).

The last key takeaway deals with a little bit of morality and some logical dilemmas/paradoxes.

Takeaway 3: On Ethics.

Does the AI value human life above all? Is the human good taken into consideration? Does it make a logical decision or a rational decision?

Ever since I was a child, I have seen every AI movie, and most of them were made to scare people away from technology. They do not always end so well: Spielberg’s A.I., Eagle Eye, Ex Machina, Transcendence, I, Robot, the Terminator series, Her, you name all of them. The AI is always the bad guy or the abused one. I mean, I think they’re all trying to send us a message here. It’s not that AI will be bad or will be out of control (and these can still be possibilities), but these movies are actually sending an even more important message: that we should always consider building AI products that promote the human good.

AI Design objective 3: Do not simply build an AI product. Build an AI product with a valid human purpose.

There have been recent reports on some disturbing AI ideas; some can be funny, some can be disrespectful or offensive. It always boils down to the concept of AI and the ethics behind it.

The latest developments in deep learning already tell us that the capabilities and growth of artificial intelligence are no longer science fiction. We can already do it. We can already build them. And going back to the earlier question of “should we be building it?”, we need to ask ourselves as well: how should we be training an AI product? Sure, we can train a self-driving car to be the most peaceful driver that obeys all traffic rules, an energy-efficient one at that. But when it comes to choosing between saving the life of its passengers or of the people around it (say, in a moment of crash), how should we train the self-driving car? Will it be loyal to its master/passenger, or will it choose the greater good/lesser evil? And just as the folks at Google Design have said, machine learning and AI cannot tell you what problems to solve. Their training and intelligence depend on the capabilities given by their human creators. And we’re back to that Trolley Problem thought experiment all over again.

Photo gacked from KnowYourMeme.com, cos really, you should see all the memes under this category.
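To make that dilemma concrete, here’s a minimal sketch in Python of what hard-coding such a policy could look like. Everything in it is hypothetical: the function name, the parameters, and both rules are mine, not anything from a real self-driving stack. The point is that whichever branch ships, a human wrote that value judgment down, long before any training data enters the picture.

```python
# Hypothetical crash policy, for illustration only. No real
# autonomous-driving system exposes anything like this; the names
# and rules are invented to make the trolley-style dilemma concrete.
def crash_policy(passengers_at_risk: int, pedestrians_at_risk: int) -> str:
    """Decide whom to protect in an unavoidable collision."""
    # The "greater good / lesser evil" rule: minimize total harm.
    if pedestrians_at_risk > passengers_at_risk:
        return "protect the pedestrians"
    # The "loyal to its master" rule: protect the passengers.
    return "protect the passengers"

print(crash_policy(passengers_at_risk=1, pedestrians_at_risk=5))
```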

Sure, we can equip the latest recurrent neural networks to train that model. But like any machine learning problem, we are at the mercy of our data, and hopefully, lives will not be left solely at the mercy of the data used to train these intelligent agents.
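To illustrate that last point, here’s a toy sketch, again in Python and again entirely made up (it’s not an RNN, just a stand-in “model” that memorizes the majority outcome), showing how a learner that only sees skewed data can only echo that skew:

```python
# A toy "model" that memorizes the most frequent outcome in its
# training data. Invented for this post; the point is that its
# "decision" is nothing more than an echo of the data it was fed.
from collections import Counter

def train(outcomes):
    """Return the most frequent outcome seen during training."""
    return Counter(outcomes).most_common(1)[0][0]

# If 9 out of 10 recorded crashes ended with the car swerving,
# the learned behavior is "swerve". No ethics involved, just data.
model = train(["swerve"] * 9 + ["brake"])
print(model)  # -> swerve
```

A real deep network is vastly more sophisticated, but the dependency is the same: bias in, bias out.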

And with those key takeaways, some of them really disturbing and controversial, it’s still important to ask ourselves: when we reach that point or position, or are given the challenge to come up with an AI solution, we should be asking ourselves first…

Should you really make an AI for it?
