AI Is Part of Your Entire Day — And You May Not Realize It

Yasmin Alameddine
Published in Bias In AI
8 min read · Jan 17, 2020

Photo of Alysha Law. All photographs and screenshots courtesy of Alysha Law.

Artificial intelligence has become a large part of our day-to-day routine. AI tells us the weather so we know what to wear in the morning, finds the t-shirt we were eyeing in-store through a chatbot, and predicts our email responses before we type them out.

To see how ubiquitous AI is in our daily life and how AI works to help us, I followed 25-year-old Alysha Law, who lives in Toronto, Canada, for a day.

December 26, 10:27 a.m.

“Hey Siri, can you put on ‘Dream of You’ by Camila Cabello?” Law calls to her iPhone X, which is teetering on her bathroom sink as she washes her face.

“Now playing ‘Dream of You’ by Camila Cabello,” Siri responds.

“I don’t make playlists, so I ask Siri to play the song I’m in the mood for every morning,” explains Law, nodding along to the song.

Siri uses natural-language processing, a subset of AI mentioned in this primer, to carry out Law’s request. To recognize audio and convert statements into action, Apple engineers trained Siri to convert hours and hours of voices, spanning different gender identities, ages, accents, and tones, into text, according to Apple’s Machine Learning Journal.

But deciding what action to take can be a challenge. People may use incorrect grammar or different phrasings to request the same action. In this case, Law says “put on ‘Dream of You,’” but she could have said “play ‘Dream of You.’” Apple engineers anticipated this, so they trained Siri to recognize many different phrases and group them so they trigger the same action.
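The grouping step can be sketched in a few lines. This is a toy illustration, not Apple’s implementation: the phrase lists and intent names here are invented, and a real assistant uses a trained statistical model rather than keyword lookup.

```python
# Hypothetical table mapping trigger phrases to one shared intent.
INTENT_PHRASES = {
    "play_song": ["play", "put on", "start playing"],
    "set_alarm": ["set my alarm", "wake me up"],
}

def classify_intent(utterance: str) -> str:
    """Return the first intent whose trigger phrase appears in the utterance."""
    text = utterance.lower()
    for intent, phrases in INTENT_PHRASES.items():
        if any(phrase in text for phrase in phrases):
            return intent
    return "unknown"
```

With this table, “put on ‘Dream of You’” and “play ‘Dream of You’” both resolve to the same `play_song` intent, which is the essence of what the grouping achieves.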

“I really like the convenience,” says Law about her affinity for Siri. “I like that I can multitask easily.”

10:37 a.m.

“Hey Siri, what’s the outside temperature?” asks Law, rummaging through her sweaters in her closet.

“It’s currently cloudy and 1 degree centigrade in Toronto. Expect rain starting in the afternoon. Temperatures are heading up from 1 degree to 5 degrees tonight.”

Law nods, and picks out a thick cable knit sweater and scarf.

Siri once again uses natural language processing. This time, Siri recognizes Law’s command and relies on the Weather app to read Toronto’s weather forecast out loud.

10:48 a.m.

Law gets in her black Mercedes ML350, puts on her seat belt and holds her iPhone up to her face to unlock it.

Law is using Apple’s Face ID, which replaced the Touch ID system starting with the iPhone X. Face ID uses sensors and cameras to create a 3D map of a face, projecting 30,000 invisible dots to capture the user’s unique facial structure, and then stores this model, according to Apple’s support pages.

Law clicks the Google Maps app and types in “Cafe Boulud,” the restaurant where she is meeting her friend, Simran Ghoman, for brunch. Google Maps suggests three routes. She selects the fastest route, 21 minutes.

Here Google Maps is using AI to predict route times. When a user allows Google Maps to use their location, the app anonymously gathers the car’s speed and position. Google Maps crowdsources these data points from other Maps users on the road to approximate traffic and road conditions and predict the best routes, according to Google’s blog.
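The core of that prediction can be sketched simply. This is an assumed toy model, not Google’s actual method: it just averages crowdsourced speed reports per road segment and sums the segment travel times, whereas the real system also folds in historical patterns and live incident data.

```python
def estimate_route_minutes(segments):
    """Estimate travel time for a route.

    segments: list of (length_km, reported_speeds) pairs, where
    reported_speeds is a list of anonymized driver speeds in km/h.
    """
    total_hours = 0.0
    for length_km, reported_speeds in segments:
        # Crowdsourced average speed on this stretch of road.
        avg_speed = sum(reported_speeds) / len(reported_speeds)
        total_hours += length_km / avg_speed
    return total_hours * 60  # convert to minutes
```

Comparing this estimate across candidate routes is what lets the app rank them and surface the fastest one, like the 21-minute route Law picked.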

“I definitely worry when I use Face ID or give my location,” Law says about privacy concerns while using AI. “Giving personal information makes me a little paranoid for my security.”

Law notes that at first she resisted AI devices: “I was scared and avoided them for a long time,” but their convenience and their ubiquity in her friends’ lives made for a fair trade-off. “I really like the convenience. I see my friends use them and nothing bad has happened to them yet,” Law says with a laugh.

12:47 p.m.

In between bites of her goat cheese and spinach omelette, Law opens her Gmail app and sees she has an unread email from her cousin. Law clicks reply and starts typing out a response, “Merry-.” Before she can finish, Gmail’s suggested completion appears: “Christmas to you too!” She accepts the suggestion, completes her sentence, and sends the email.

Google engineers trained the Gmail feature, Smart Compose, on billions of common phrases and sentences. Smart Compose uses contextual clues from the words in the original email and compares them against the billions of phrases it has been trained on.
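A bare-bones version of that idea looks like this. It is a sketch, not Google’s model: the phrase table and its counts are invented, and the real Smart Compose uses a neural language model rather than a frequency lookup, but the principle — complete the typed prefix with the most common matching phrase — is the same.

```python
# Hypothetical corpus of phrases with how often each appeared in training mail.
COMMON_PHRASES = {
    "merry christmas to you too!": 9000,
    "merry go round": 120,
    "thanks for reaching out": 5000,
}

def suggest_completion(typed: str) -> str:
    """Suggest the untyped remainder of the most frequent matching phrase."""
    prefix = typed.lower()
    candidates = [(count, phrase) for phrase, count in COMMON_PHRASES.items()
                  if phrase.startswith(prefix)]
    if not candidates:
        return ""
    _, best = max(candidates)  # highest-frequency phrase wins
    return best[len(prefix):]  # only the part the user has not typed yet
```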

Law says she uses Smart Compose “all the time,” because the recommendations save her time and help her if she is having writer’s block.

1:57 p.m.

After the brunch, Law sits in her car and puts her phone in her cup holder. Backing up out of her parking spot, Law says, “Hey Siri, take me home.”

Again, Siri uses natural language processing to fulfill the action. In this case, Siri translates “take me” as a Google Maps location command. The word “home” is associated with Law’s home address, which she already stored in Google Maps.

2:15 p.m.

Law settles into her gray couch in front of her television. She opens up Disney+, the new streaming service, and scrolls through the “Recommended For You” section.

“Frozen, Frozen 2, High School Musical, High School Musical 2…” Law reads the options out loud, “wow they really nailed me.” Finally, Law decides on a nostalgic television show from her childhood, “That’s So Raven.”

Although information on Disney+’s recommendation system is not yet public, Netflix, Amazon Prime Video, and other streaming services use similar algorithms.

Streaming services gather data to feed into their machine-learning algorithms. One way is to collect users’ behavior data: what content they watch, how long they watch it, what time of day they watch, and what they watch before and after. Another is to gather data from the shows and movies themselves: who the target audience is, what genre it is, which actors play the characters, or who directed it. Streaming services also gather data by asking users for feedback; Netflix has a simple thumbs up or down beside each title. Some streaming services have their own in-house user-research teams to fill in any gaps in the data: Netflix hires in-house and freelance employees to watch every TV show and movie and “tag” them with notes, according to Wired’s Netflix deep dive.

The streaming services then aggregate these data points to inform what TV shows and movies to recommend to users. The algorithm gets better at predicting content as more and more users use the service, according to Wired’s Netflix deep dive.
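The aggregation step can be sketched as a simple content-based scorer. This is a toy model under assumed data, not any service’s real algorithm: each unwatched title is scored by how many of its tags overlap with the tags of titles the user already watched, which is roughly why watching Frozen surfaces Frozen 2.

```python
def recommend(watched, catalog, top_n=2):
    """Rank unwatched titles by tag overlap with the user's viewing history.

    watched: list of title names the user has seen.
    catalog: dict mapping title -> set of descriptive tags.
    """
    # Pool all tags from the user's watched titles.
    liked_tags = set().union(*(catalog[title] for title in watched))
    # Score every unwatched title by shared tags.
    scores = {title: len(tags & liked_tags)
              for title, tags in catalog.items() if title not in watched}
    return sorted(scores, key=scores.get, reverse=True)[:top_n]
```

This also hints at Law’s shared-account complaint: a partner’s viewing pours different tags into the same pool, skewing what gets recommended.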

Law says the recommendation algorithm is how she finds “60 percent of new shows.” However, these algorithms can go awry: “My boyfriend and I live together and share the same account, so if he watches even a few hours of TV shows it starts recommending me his taste of shows.”

2:26 p.m.

As That’s So Raven plays, Law types “Aritzia.com” in her iPhone X’s browser to check out sales at the popular Canadian women’s clothing store.

“Up to 50% off?” Law exclaims, reading the big red banner on the site. She clicks the chat function at the bottom right of the screen.

“I’m wondering if you have the Rhythm dress. I saw it in your store but can’t find it online,” Law types into the chat bubble, and then puts down her phone.

Law is interacting with a chatbot. Chatbots use natural language processing to simulate seemingly organic human conversation. The bots are trained on thousands of phrases and words to process and respond correctly to the way a person types. Over time, chatbots learn from each interaction to improve their response accuracy and shorten response time, according to Chatbots Magazine’s “A Beginner’s Guide to Chatbots.”

Two minutes later Law checks her phone, “How have they not answered me yet?”

“We will be with you shortly,” the bot responds.

“Usually they are fast. I use this chatbot because when I call the store, they never answer,” Law says.

“Hi Alysha! Thanks for reaching out to Aritzia Concierge. My name is Rebecca and I would be happy to help,” the bot answers. “What size and color are you after?”

Law responds with the size and color. After five minutes of no response, Law shakes her head and closes out of the window. “Forget it. This is taking too long.”

Aritzia declined to comment on the response time.

6:47 p.m.

Law sits at Gusto 101, an Italian restaurant in downtown Toronto, with her high school friends. They start to catch up, but with a prompt from the waiter, the conversation turns to what to order.

“The mushroom pasta is a go-to,” Law assures her high-school friend Maddy Macdougal.

To convince her, Law pulls up Google Images on her iPhone and types “Gusto 101 p-.” Before Law can finish typing the word “pasta,” Google has suggested four possible searches, including the intended “Gusto 101 mushroom pasta.” Macdougal scrolls through the pictures of the pasta, and orders it for herself.

Here, Google’s autocomplete feature is at work. Similar to Gmail’s Smart Compose, autocomplete is a form of AI that suggests full queries after a user has typed a word or partial phrase. Law typed “gusto 101 p-,” and Google used context clues, previous Google searches, and location data to predict what she was attempting to search for.
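A minimal sketch of the ranking idea, under assumed data rather than Google’s actual system: past queries that start with the typed prefix are ranked by how often they were searched, and the top few become the suggestions.

```python
# Hypothetical log of past queries with how many times each was searched.
PAST_SEARCHES = {
    "gusto 101 pasta": 800,
    "gusto 101 patio": 300,
    "gusto 101 menu": 1500,
}

def autocomplete(typed: str, limit=4):
    """Suggest the most popular past searches that begin with the typed text."""
    prefix = typed.lower()
    matches = [query for query in PAST_SEARCHES if query.startswith(prefix)]
    return sorted(matches, key=PAST_SEARCHES.get, reverse=True)[:limit]
```

The real system goes further — as the article notes, location and the searcher’s own history can promote a query like “Gusto 101 mushroom pasta” even when the typed prefix alone would not surface it.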

11:27 p.m.

After a night out, Law is back home getting ready for bed. She is in her bathroom, washing her face, when she calls to her iPhone, which is on her bed in the other room. “Hey Siri, set my alarm for 8 a.m.”

Siri confirms the command: “Alarm set for 8 a.m.”
