LXAI: MIT AI Conference 2018 Recap

Sebastian Anaya
Published in LatinXinAI
May 29, 2018
(Photocred: Jesse Martinez)

Last month I had the privilege of attending the Future of Industries, Industries of Future MIT AI conference hosted by the MIT Club of Northern California. For two days, more than 100 entrepreneurs and researchers shared insights on the state of Artificial Intelligence and the industry. I attended the second day of the conference and want to share some highlights and takeaways.

Highlights

In the keynote, one of the items that stood out to me came from Aparna Chennapragada, VP of Product for AR and VR at Google, who spoke about the transition toward ‘Immersive Computing,’ in which AI === UI. I believe this is true: we continue to see AI applied to a wide variety of systems with predictive intelligence in order to provide the best user experience.

Many people are familiar with Google’s line of products, and Aparna gave an overview of how AI shows up across them: better thumbnails and recommendations on YouTube, Smart Reply in Gmail, and Quick Access in Google Drive.

AI still has a long way to go…

(Photocred: Dr Heidi Forbes Öste)

As part of the “Spanning the AI Spectrum” session, Richard Rabbat, CEO of Gfycat (pronounced “jiffy-cat”), a website dedicated to hosting high-quality GIFs (animated image files), mentioned that AI still has a long way to go. In image recognition, AI still has trouble distinguishing closely related images, such as the chihuahua and the blueberry muffin pictured above.
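
To make this concrete, here is a quick sketch of how you could probe a pretrained classifier’s top guesses for a single photo and see which closely related categories split the probability mass. This is my own illustration, not code from the talk; the ResNet-50 ImageNet weights and the “chihuahua.jpg” file name are assumptions.

```python
# Sketch: inspect a pretrained classifier's top-5 guesses for one image.
# Assumes torchvision >= 0.13 with downloadable ImageNet weights and a
# local test photo named "chihuahua.jpg" (hypothetical).
import torch
from PIL import Image
from torchvision import models

weights = models.ResNet50_Weights.IMAGENET1K_V2   # pretrained ImageNet weights
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()                 # matching resize/crop/normalize

img = Image.open("chihuahua.jpg").convert("RGB")
batch = preprocess(img).unsqueeze(0)              # shape: (1, 3, H, W)

with torch.no_grad():
    probs = model(batch).softmax(dim=1)[0]

# Closely related classes often end up with similar scores here, which is
# exactly the chihuahua-vs-muffin style confusion Richard described.
top5 = probs.topk(5)
for p, idx in zip(top5.values, top5.indices):
    print(f"{weights.meta['categories'][int(idx)]}: {float(p):.3f}")
```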

“AI can tell us how the world is now, but not how it should be”

One of the challenges AI faces, which Richard touched upon, is bias. He emphasized that AI is only as good as the biases we put into it, sharing that “if not careful we can train an AI to have all of the biases we have.”

In a specific instance at Gfycat, while using AI to identify content in user videos, an analysis of K-pop videos identified the same person repeatedly despite many different individuals being present. This inability to distinguish between Korean individuals shows how errors and bias can be perpetuated through a lack of training data. There is still a lot of work to be done to improve accuracy and reduce bias in AI.

“Does AI solve your focus/problem? AI needs to be thought about in a nuanced way” — Richard Rabbat

“Solving the bias in AI problem is a great opportunity…”

As Joanne Chen of Foundation Capital mentioned, “this is a great startup opportunity: to be able to find a solution on how to solve bias. We invest in companies that have a compelling solution to a real problem — regardless of the underlying tech, AI or not.”

Some of the companies with compelling solutions that Foundation Capital has invested in are Guardian Analytics and Trufa. Guardian Analytics provides omni-channel fraud prevention to protect online, mobile, wire, and ACH transactions. Trufa employs predictive data analytics to help companies unlock liquidity. Other companies that have received investment from Foundation Capital include Netflix, Pocket, and LendingClub.

Been Kim, Research Scientist at Google Brain, discussed the origins of bias, which stem from the limitations of the data set, the algorithm, and the metrics used. An example she shared is bias in Inception v3, a model used for image classification, where Asian faces were recognized as ping pong balls. This is due to the number of training images showing Asian people as skilled ping pong (table tennis) players.
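
One way to catch this kind of data-set bias before it reaches the model is simply to count what the training data contains. Below is a rough sketch of such an audit; it is my own illustration rather than anything Been presented, and the one-folder-per-label layout and the “training_data” path are assumptions.

```python
# Sketch: audit a labeled image dataset for class imbalance before training.
# Assumes a directory layout of training_data/<label>/<image files>.
from collections import Counter
from pathlib import Path

def label_counts(data_dir: str) -> Counter:
    """Count how many files (images) each label folder contributes."""
    counts = Counter()
    for class_dir in Path(data_dir).iterdir():
        if class_dir.is_dir():
            counts[class_dir.name] = sum(1 for f in class_dir.iterdir() if f.is_file())
    return counts

if __name__ == "__main__":
    counts = label_counts("training_data")  # hypothetical dataset path
    total = sum(counts.values())
    for label, n in counts.most_common():
        print(f"{label:25s} {n:6d} ({n / total:5.1%})")
    # Heavily skewed counts are a warning sign: the model may learn the
    # co-occurrence quirks of the data (who tends to appear near a ping
    # pong table) instead of the concept you actually care about.
```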

Jaime Teevan of Microsoft shared her success with “microwork”: breaking down tasks into small pieces. Microwork can be applied to the task of writing; take this blog post, for example. If you give yourself the task of writing down a sentence while standing in line at a coffee shop or commuting to and from work via BART, an entire post can be completed on the go, on your phone. With this idea, applying microwork to AI could drive enterprise productivity and efficiency one piece at a time.

If you would like to learn more about Jaime’s take on productivity, please check out her publication, “Future of Microwork”.

Rory Driscoll, Partner at Scale VP, and Bruce Welty of Locus Robotics shared their experience and vision for using robotics to help scale warehouse work.

Rory saw three unique items that made Locus Robotics worth the investment:

  1. The Market
  2. Competition with Amazon
  3. The team’s strong background

Locus Robotics also showcased two robots in a demo that specialized in sorting different boxes. Funnily enough, a waiter passing by me remarked, “they are coming to take my job.”

Anneka Gupta, co-CEO at LiveRamp, shared that when it comes to AI, it is important that we start equipping regulators in local and federal government with the technical knowledge to create the right policies for the forthcoming innovations in technology. A great example of this is when Mark Zuckerberg had to explain various basic technology concepts to senators while being questioned about the Cambridge Analytica scandal.

(Photocred: Getty Images)

Some of the questions asked of Mark Zuckerberg covered concepts many would consider basic. If government leaders have a hard time understanding Facebook, imagine them trying to understand the technology behind Artificial Intelligence.

“If someone wants their data deleted, does the model need to be retrained?”

A question posed that stood out to me was, “If someone wants their data deleted, does the model need to be retrained?” This question makes me wonder…

  • If a model were to be retrained with that data removed, would that decrease the quality of the model? (A naive check is sketched below.)
  • Is it ethical to keep and use someone’s data in a model even though it is anonymized?
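
The most naive answer to the first question is to drop that person’s rows and retrain from scratch, then compare quality. Here is a small sketch of what that check could look like; it is my own illustration using a synthetic scikit-learn dataset, and the made-up user IDs and the user being “forgotten” are assumptions.

```python
# Sketch: measure how much model quality changes when one user's data is
# deleted and the model is retrained from scratch (synthetic data only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
user_ids = np.random.default_rng(0).integers(0, 200, size=len(y))  # fake data owners

X_tr, X_te, y_tr, y_te, uid_tr, _ = train_test_split(
    X, y, user_ids, test_size=0.2, random_state=0)

# Model trained on everything, including the user who later asks for deletion.
full_model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# "Right to be forgotten": drop user 42's rows and retrain from scratch.
keep = uid_tr != 42
retrained = LogisticRegression(max_iter=1000).fit(X_tr[keep], y_tr[keep])

print("accuracy with the data    :", full_model.score(X_te, y_te))
print("accuracy after retraining :", retrained.score(X_te, y_te))
```

In a toy setting like this, one user barely moves the needle, but with small data sets or rare classes the drop can be real, which is exactly why the question is hard.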

One possibility here is an opt-in model, which many apps employ during user registration. However, opt-in fatigue exists: faced with many opt-in options, customers and users may simply opt out of sharing their data, especially considering the new rules implemented by the GDPR for EU citizens.

There is a need to connect government and innovation in order to be proactive with policies for future solutions.

“AI is augmentation for humans, like Tony Stark’s Iron Man suit” — Danielle Krettek, Partner at Google Empathy Lab

At Google Empathy Lab, Danielle Krettek focuses on finding ways to bring humanity into Artificial Intelligence, to ensure that AI is ethical and able to empathize with its users.

Over the years, Artificial Intelligence has increasingly taken form through voice. To deal with this shift, ‘AI assistants will have to become more harmonious with people and intelligent about emotion,’ she states. AI assistants such as Google Home are able to augment human capability by saving time.

Personal Takeaways

As a Latino, I always hope to see others from my community involved in this area. At the conference, I did not see many people of color or Latinx individuals present. This may have to do with the educational pipeline issue: at MIT the Hispanic/Latino admittance rate is only 14%, and Latinos account for only 6.3% of the high-tech workforce (EEOC).

However, one Latino professional I met, Fernando Espinosa, partner at Arena Analytics in Mexico City, is also an MIT alumnus with dual master’s degrees in Systems Engineering and Business. His consulting firm brings together professionals with extensive management consulting experience and proven design and implementation of machine learning, analytics, and optimization models, helping clients use their data to compete better.

Anna Khan and Laura Gomez (Photocred: Liz Bradley)

As with the attendees, the conference did not feature many Latinx speakers. One exception was Laura Gomez, who spoke in a session called “Humans + Machines”. She is the CEO and Founder of Atipica, which is “building the world’s first Inclusive AI for the talent life cycle.” Her company helps companies foster inclusion, using internal data and proprietary algorithms to hire candidates while eliminating bias.

I was happy to have attended this event. It gave me insight into what’s to come in the AI space and proved that there are still plenty of obstacles to overcome before we can perfect it. The most pressing obstacle is bias, which stems from imperfections in the data set, the builders of the algorithm, and the strategic direction of the AI product. It is important that AI be built by a diverse group of engineers and strategists to eliminate bias.

LatinX in AI Coalition Mission:

Creating Harmony Between AI and the Latinx Community

  • Increase representation of Latinx in Artificial Intelligence
  • Improve access to education and resources in AI engineering for the Latinx community
  • Improve awareness of the long and short term effects of artificial intelligence technology on the Latinx community
  • Increase communication between AI companies, engineers, researchers and the Latinx community
  • Ensure transparency and accuracy of Latinx culture and voice in data representation

Do you identify as Latinx and work in artificial intelligence, or know someone who does?

Add to our directory: http://bit.ly/LatinXinAI-Directory-Form

Check out our open source website: http://www.latinxinai.org/

If you enjoyed reading this, you can contribute good vibes (and help more people discover this post and our community) by hitting the 👏 below — it means a lot!

Sebastian Anaya

Salesforce Analytics Champion | Consultant @ Accenture | Co-Founder at Fuerza Ventures