Summary of Research for Smirk

Google has defined the internet since its inception 17 years ago. The experience of searching through pages of impersonal results no longer meets users' expectations, a gap made even more apparent by the shift to smaller screens. We are on the cusp of a major shift in how we interact with content online, and by extension the offline world. Smirk aims to be at the forefront of that movement.

Smirk started with an exploration into the future of search. We determined that the future of search has three main components:

1) Search will become proactive. The system will understand the user's context and surface results before the user even has to search.

2) Search will be personalized. Data on the user will allow search engines to provide relevant results to each individual user.

3) Search will be streamlined. Answers to search queries are currently scattered across apps and the web; there will be a movement toward one central portal to the web.

This last point is particularly important, as we realized we have to build a piece of that system. Google used to be the entry point for the web, and was once equated with being “the internet.” But this is changing, and the tech giants are moving fast to make their platforms the middleman between the user and the rest of the internet.

Since apps became mainstream, the number of search queries on Google has been declining. For the first time, Google searches on mobile outnumber searches on desktop, yet the average user only uses the Google search engine twice a day. Search is Google's core business, so they are fighting to hold their ground. Their newest product, Google Now, is a step toward what the future of search looks like: the platform is notification based, surfacing relevant information from your phone and the web at the right time.

Google recently replaced the head of its search department with the head of Artificial Intelligence. This is significant because it signals that the future of content discovery relies on AI. Recent advances in this technology will allow us to build systems that seem as though they came out of science fiction novels. Machine learning, a branch of AI, gives us the ability to understand users in an unprecedented way, and will be the basis for giving contextually relevant, personalized recommendations. Google Now has already started to do this: if it's the end of the day, Google Now will send me a card telling me how long it will take to get home from wherever I am.

Data is at the core of these systems. While AI might feel like magic, it is really just algorithms processing huge amounts of data and making inferences from it. This means that the companies with the most data are at an advantage. For example, Google has a history of everything you have ever searched for, along with a full history of your location. Facebook knows who you are as a person, and is able to put that into context with your social group and demographic. Even Uber knows where you've been and how you move. This data is useful to companies because they can use it to drive advertising revenue. It also gives them insight into what products and content will be successful in the future. For example, Netflix has developed algorithms based on users' watching histories that can predict how much a movie will make before it is even made.

As artificial intelligence becomes a core component of day-to-day technology, data becomes more valuable than currency. The way we design products has changed. Alexandra Totsi, a Professor at Parsons The New School for Design, says "Products are like magic tricks. They wave their hands and say 'look over here!' while they take your data." Products are no longer being designed for the consumer; the consumer has become the product. Features are being built into products that appear to offer the user a better experience, but in reality are used to gain more data. For example, the "like" button on Facebook seems like it was created to give users an easier way to interact with their friends' posts. In reality, this button gave the engineers developing the newsfeed algorithms more information about what content each individual user is interested in. The same theory can be applied to many products we interact with every day. We are now at a point where entire companies are formed knowing their value is in the data they collect from users.

Smirk is a good example of this. Product decisions were made to ensure we were collecting high-quality data from our users that could then be fed into emerging technologies. For example, when deciding what the initial focus of Smirk should be, we settled on restaurant and bar recommendations. While this might seem boring, location-based data is very valuable. Companies have accurately mapped out users' habits online, but behavior in the offline world is still only partially understood. The tech giants have invested in acquiring this data. In 2014 Apple acquired Spotsetter (2), a social search engine for places. In 2013 Google acquired Waze for a reported $1.1B, with the intent to use the traffic app's data to give a social boost to Google Maps (1).

Notice that both Waze and Spotsetter intertwine social data with location. Understanding which demographics are interacting with which places is extremely valuable for advertising. To illustrate this point, let's contrast existing digital marketing with the location-based marketing of the future. If a successful blog can prove that girls between the ages of 13–16 visit the site at 3:30 pm every day, the blog can use that data to sell ad space. Perhaps a smoothie chain puts up an ad, bringing some of the girls to its website. Now imagine we had deeper information about the girls' behavior. With social and location data, we would know that one of the girls is named Chloe and that she passes the smoothie shop every day on her way home from school. This becomes more appealing to advertisers because they can offer even more targeted ads. They could send a notification through an app to Chloe's phone as she walks by the smoothie shop, saying "come in for an after class special." Now instead of sending Chloe to the store's website, we're bringing her into the actual store. This use of data could be taken a step further by integrating social data into the actual ad, such as noting that one of Chloe's friends likes that smoothie spot. By adding the peer stamp of approval, a sense of trust is created between the store and the potential customer.

It is worth investing in location-based social data because it will become a core component of future technologies. Google is investing in Google Glass, which will change how we interact with the physical world, but the product's success will depend on location-based data insights. Imagine if Chloe were walking down the street and a notification popped up in her glasses. For this not to be annoying, the system would need to know that Chloe would like this smoothie place, an insight drawn from social and location data. The success of wearable devices depends on this data, making it especially valuable to the companies developing these products.

So where does Smirk fit into all of this? In the last 10 years we saw a focus on tracking online data, and we are anticipating that with the emergence of wearables we will see that focus shift to physical places and interactions. We hope to be at the forefront of that movement. Smirk aims to understand social behavior in the context of physical location. By tracking location and interactions on the platform, we can start to gather data about where people are going. As we grow we will be able to give recommendations to the user based on how many of their friends went to a certain spot — giving us credibility that platforms like Yelp don’t have. We will develop our own database of where young people like to hang out.

This will give us advertising power. As we can see from the explosive use of ad blockers, digital advertising is broken. Ads have become white noise on the internet, and people are attempting to block them out, both literally and figuratively. Think of the last time you watched a TV show. Can you name any of the commercials? This calls for a change in the way we promote products, and the key is offering contextually relevant offers. A Forbes executive at the Mobile Marketing Association's Mobile Location Leadership Forum predicted that location-based ads will eventually make up 40 percent of ad spending.

We aim to stand at the intersection of the user, the offline world, and the scattered information online. We started with an SMS-based system, hoping to replicate the user's behavior of texting a friend for a recommendation. Through user testing we discovered the process took too long, as it took multiple steps to give the user all the necessary information about a place. At a minimum they would want to see the menu, phone number, pictures, and location, which took multiple text messages. It was easier for the user to just Google it. There were also issues with how we understood subjective language. For example, if a user asks "what's a cool place to eat tonight?" it's hard for Smirk to answer, because everyone has a different definition of "cool."

To solve this issue we created a customized keyboard. The keyboard allows the user to select a "guru" and a "vibe." The "guru" was a celebrity figure who would give you recommendations based on their tastes. Our argument was that celebrities are representative of different social groups in our culture, and thus we could mimic a sense of personalization. The "vibe" feature was designed to address people's different moods. For example, if the user selects "chill" as her vibe, Smirk knows what type of mood she is in. We also integrated a "cards" feature, sending an image with information about the recommendation, which allowed us to get information to the user in a succinct way. The main issue with this iteration was that the onboarding process was too complicated, resulting in a low conversion rate.

We adapted this iteration into an app. Developing a native platform made the onboarding process easier and gave us more flexibility with user interactions. We dropped the "guru" feature, worried it would come off as a gimmick or that we would get sued for using celebrities without their permission. Instead we focused on improving how the user makes requests. The "vibe" feature evolved into letting the user make requests using gifs pulled from Giphy. For example, a user could select a gif of Audrey Hepburn to indicate that they want something classy.

In an app the cards could now be interactive, providing us with another way to learn about the user's likes and dislikes. In order to teach the algorithms, we had to get specific feedback about our recommendations. We experimented with a number of interactions for the cards, settling on swiping left for "not now," swiping right for "been there," and double tapping to "add to favorites." We could then use this information to provide better recommendations in the future.
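To make this concrete, here is a minimal sketch of how those card gestures could be turned into numeric feedback for a recommender. The gesture names come from the paragraph above; the scores, class names, and weighting are hypothetical assumptions for illustration, not Smirk's production values.

```python
from dataclasses import dataclass
from datetime import datetime

# Assumed feedback weights: stronger gestures imply stronger preference signals.
GESTURE_SCORES = {
    "swipe_left": -1.0,   # "not now": weak negative signal
    "swipe_right": 0.5,   # "been there": familiarity, mild positive signal
    "double_tap": 1.0,    # "add to favorites": strong positive signal
}

@dataclass
class FeedbackEvent:
    user_id: str
    place_id: str
    gesture: str
    timestamp: datetime

def score(event: FeedbackEvent) -> float:
    """Convert a card gesture into a numeric preference score."""
    return GESTURE_SCORES.get(event.gesture, 0.0)

# Example: one "add to favorites" event feeding the recommendation model
event = FeedbackEvent("user_42", "place_7", "double_tap", datetime.now())
print(score(event))  # 1.0
```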

The journal article "Adaptive Mobile News Personalization Using Social Networks" describes a study on how to improve recommendation systems. Through fieldwork, the research team discovered that personalizing automatically using an algorithm is more successful than allowing the customer to self-customize. We already see this in products like Spotify, which forgoes traditional personalization methods, such as having users answer questions during onboarding; studying a user's behavior is more accurate.

Relying on behavior, however, runs into the technical issue of "cold start": the system cannot start sorting people without enough initial data. A possible solution is using Bayesian network algorithms. The journal article "A Proactive Personalised Mobile Recommendation System Using Analytic Hierarchy Process and Bayesian Network" states "… a Bayesian network algorithm is applied to solve the cold-start problem in recommendation systems."

Traditionally there are two methods used to calculate users' interests: the content-based method (CBM), which relies on the user's own history, and the collaborative filtering method (CFM), which relies on group history. On their own, both methods have major drawbacks, but used together with a Bayesian network they can overcome the cold-start issue. A new user on the platform can be categorized based on group profile data, which solves the data-sparsity problem because the new user receives relatively accurate recommendations right away. As the user begins to interact with the product, the system starts using CBM, making recommendations from the user's own history, which produces recommendations better suited to that particular user. Using a Bayesian network has other benefits, such as enabling real-time predictions, because Bayesian networks require less memory and compute faster than other techniques.
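As a rough illustration of the hybrid idea, the sketch below blends a group-based (CFM) score with a personal-history (CBM) score, shifting weight toward the personal score as interactions accumulate. This is a simplified weighted blend, not the full Bayesian network described in the paper, and the function name and ramp value are assumptions.

```python
def blend_scores(cfm_score: float, cbm_score: float,
                 n_interactions: int, ramp: int = 20) -> float:
    """A brand-new user relies entirely on the group profile (CFM);
    after roughly `ramp` interactions the personal profile (CBM) dominates."""
    w_personal = min(n_interactions / ramp, 1.0)
    return (1 - w_personal) * cfm_score + w_personal * cbm_score

# Example: a brand-new user vs. an established user scoring the same place
print(blend_scores(cfm_score=0.8, cbm_score=0.2, n_interactions=0))   # 0.8
print(blend_scores(cfm_score=0.8, cbm_score=0.2, n_interactions=30))  # 0.2
```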

The social profiles we need to build are made of two parts: the "global profile" and the "local profile." The global profile is built from all of the user's publicly available information, such as data we can collect from APIs. We are building archetypes, i.e., representations of different stereotypes into which we can cluster people. To determine which archetypes a new user is made up of, we use the user's social data; this is the CFM, i.e., categorizing users based on groups. The local profile forms as we get to know the user on a deeper level. It is built from their interactions with the platform, so learning from the user's own history makes this the CBM. As the user continues to use the platform, we adjust the percentage of each archetype the user is made up of. We would use a Bayesian network so that the personalization system continually adapts to the user's needs, making it an adaptive system.
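Below is an illustrative sketch of the global/local profile idea: the global profile acts as a prior over archetype weights, and each interaction on the platform nudges those weights. This is a simple count-based update rather than a full Bayesian network, and the archetype names and prior strength are assumptions made for the example, not Smirk's actual model.

```python
from collections import defaultdict

# Global profile: prior archetype weights inferred from public social data (CFM).
# Archetype names are invented for the example.
global_profile = {"foodie": 0.5, "night_owl": 0.3, "health_nut": 0.2}

# Local profile: evidence accumulated from on-platform interactions (CBM).
interaction_counts = defaultdict(float)

def update_profile(archetype_of_place: str, strength: float = 1.0) -> dict:
    """Treat the global profile as a prior (pseudo-counts), add observed
    interactions as evidence, and renormalize the archetype weights."""
    interaction_counts[archetype_of_place] += strength
    prior_strength = 10.0  # assumed confidence in the global profile
    combined = {
        archetype: prior_strength * weight + interaction_counts[archetype]
        for archetype, weight in global_profile.items()
    }
    total = sum(combined.values())
    return {archetype: value / total for archetype, value in combined.items()}

# Example: the user favorites several late-night spots
for _ in range(5):
    profile = update_profile("night_owl")
print(profile)  # the "night_owl" weight rises above its prior of 0.3
```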

We ran into two major problems during the development of this project. First, social networks keep their APIs extremely limited, making it difficult to build the baseline user profile. We had some success scraping users' YouTube history to get a better understanding of their archetypes, but new users were often reluctant to sign up with Google. We tested telling users that we were going to use their watch history to serve them better recommendations, but this made them even more wary of giving us access to their data.

The second major problem arose when we discovered that the average user downloads zero new apps a month. Most people spend 84 percent of their time in just five apps. New studies on user behavior show that people are moving away from interacting with individual experiences (i.e., a platform that exists solely as an app) and are instead favoring systems (e.g., Facebook, where apps are just an extension of a bigger network). We realized that in order to get adopted we needed to become part of one of these systems. Facebook recently released its chatbot platform, and we decided to move Smirk onto Facebook Messenger.

There are pros and cons to this decision. It will be easier to bring more people onto the platform, but we are limited in user interactions, and by extension in how well we can understand our user. While we have to offer a level of personalization to give users the best recommendations possible and keep them on the platform, we don't need to know everything about them. Our value is not in the social data we collect about our users, as we will never be able to understand them on the same level as social networking giants like Facebook, or even Pinterest. We specifically need to know what the user is looking for, when they're looking for it, and what they like. We can monitor when and what requests are made, which gives us insight into intent relative to time. Interactions such as describing the vibe of a place with gifs give us insight into what type of experience the user is looking for, and serve to mimic a level of personalization that is otherwise impossible to get through the Facebook platform. We will create a database of this information and one day use it to generate advertising revenue.

The more important downside to Facebook is that we can no longer get location-based data. This means we cannot build a database from our users' location history. For the initial product, we need to find alternatives for building this database, such as sourcing from social influencers and other recommendation sites such as Zagat. We hope to use Facebook to build a user base, and eventually move onto another platform where we can track location.

These limitations are probably part of Facebook's plan. They state in their policy that they are able to "analyze your app, website, content, and data for any purpose, including commercial." Facebook is developing M, the ultimate virtual assistant. The data collected by third-party chatbots will feed into M's database, and M will become the go-to destination for mobile discovery. David Marcus, vice president of messaging products at Facebook, says he "hopes to make up for that by creating a virtual assistant so powerful, it's the first stop for anyone looking to do or buy anything." Facebook is only one of the players in this field: Apple is developing Siri, Google is developing Google Now, and Slack recently invested $80 million in bot development.

Chatbots are the future of apps. Tech giants have been putting a lot of resources toward this new way of communicating with the web. They are using third-party bots as a way to become the user's portal to the entire internet. In the case of Facebook, they are creating an environment where a user can forgo searching for something on Google and just ask a bot to answer their question directly. Thousands of bots are being developed right now, but those will eventually be aggregated into a few main bots.

AI-powered personal assistants will become the foundation of many future technologies. Uber might seem like just a ride service, but we can speculate that they are not oblivious to the fact that they are collecting location-based social data about their users. They know who their users are because you sign up through Facebook. Uber always knows your location, partly because you order rides, but also because the app requires your location data to be turned on in order to work. They may claim to only use your data for "business purposes," such as telling their drivers which places are more likely to be busy on a Saturday night, but in the future there is a lot of potential for how this data could expand their company. Imagine getting into an Uber and being able to say "Take me to the next bar," and it would know, based on a wealth of data, exactly where you would want to go.

Or perhaps you get into a Google driverless car and it takes you on all your errands, and because it has access to your calendar, email, and Google Maps you don't even have to tell it where to go. Or maybe you come home at night, put on your Oculus VR headset, and M has prepared a list of videos to watch, all of which are relevant and interesting to you. You could be walking down the street wearing Google Glass, and Google would know that since it's around 3 pm you probably need a coffee, and it could direct you to a spot it knows you would like that isn't too far away, so you could still make it to your destination on time. While this sounds like a convenient future, there could also be negative implications of this level of AI becoming mainstream.

When we think of how artificial intelligence will affect our future, we typically think of scenes from science fiction films of robots ending humanity. The threat of AI in the near future will be a lot more subtle. The bar the Uber drives you to may have paid to be recommended by M. While trying to hold your attention, the VR videos you're watching might exclude certain topics, causing you to know more about the Kardashians than about politics. Google Glass will let Google know everywhere you are, at all times, eroding your sense of privacy. Convenience may come at the cost of freedom.

During the development of Smirk I became very interested in what a data-driven future could look like. After seeing how AI products are developed from a business point of view, I feel strongly that we must raise awareness about the possible negative implications. An ethical protocol is being developed for the application of emerging AI technology, and we see industry leaders such as Elon Musk and Sam Altman calling for regulation. The general public is being left out of that conversation because of a lack of accessible information. My aim is to change how we talk about artificial intelligence by making the potential issues more approachable for a broader audience. I am currently researching how the tech world predicts AI will affect us. I am specifically interested in how bias in data impacts disadvantaged communities, and how applications of machine learning can be used to control public thought and opinion. I want to make these theories accessible to more people by putting them into different mediums, such as digital games, short stories, and interactive comic books. My research will be used to facilitate a conversation with the general public about the potential ethical implications of AI. I want to hear their solutions to proposed problems, both practical and fantastical. I hope to publish this research to create a more open dialogue. I believe if we change how we talk about AI, it will change the conversation.
