Are you Ready for Tomorrow? AI and ML Business Lessons from an MIT Researcher

Avary Ettienne-Samuel
The Black Wealth Club
6 min read · Feb 1, 2023

Nicholas André G Johnson is a Ph.D. candidate in Operations Research and Machine Learning at the Massachusetts Institute of Technology (MIT). Nicholas serves on the Black Wealth Club's (hereafter BWC) steering committee and as a BWC Board Member.

Innovation Cycles, Artificial Intelligence, and My Journey

At my core, I consider myself a technologist and a builder. My passion for pushing the frontier of technological innovation to improve communities led me to study and conduct machine learning research at Princeton University, the Montreal Institute of Learning Algorithms, the University of Oxford, and currently at the Massachusetts Institute of Technology. Working as a machine learning engineer at Google and the D. E. Shaw Investment Group showed me how state-of-the-art developments are being leveraged in practice. As a result, I have spent significant time reflecting on how organizations can best prepare themselves to reap the benefits of technology. There is a business imperative for adopting AI because it can dramatically increase business efficiency and output. But to carry out this adoption successfully, business leaders must understand how to best integrate AI into their organization, along with its implications for the future of work.

Our society is collectively adjusting to an innovation cycle characterized by the proliferation of AI, robotics, and clean tech. Historically, the organizations that have sustained growth and impact are those that embraced the foundational technologies of past innovation cycles, which underscores the importance of embracing artificial intelligence to remain competitive today. Moreover, the pace of innovation is accelerating, with each cycle shorter than the last, creating an urgency to integrate artificial intelligence quickly.

Artificial Intelligence, ML Service Providers, and the Future of Work

As the terms artificial intelligence, machine learning, and deep learning are often employed interchangeably, I will take a moment here to define the three concisely:

Artificial Intelligence (AI): Any computer system that can successfully perform tasks that humans deem to be sufficiently complex.

Machine Learning (ML): An AI system in which the computer is given data related to the task at hand and arrives at a solution method by performing operations on the given data.

Deep Learning (DL): An ML system that employs artificial neural networks, a commonly used and powerful model paradigm.
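
To make these definitions concrete, the minimal sketch below trains a small ML model in Python: the program is given labelled data and derives its own decision rules from that data, rather than being explicitly programmed with them. The library (scikit-learn) and its bundled Iris dataset are choices of convenience for illustration, not a recommendation.

```python
# A minimal illustration of the ML definition above: the program is given
# labelled data and derives a decision rule by performing operations on
# that data, rather than being hand-coded with rules.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(X_train, y_train)          # "learning": operations on the given data
print(model.score(X_test, y_test))   # accuracy on data the model has not seen
```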

Opportunities to best leverage machine learning typically fall in one of the following categories:

  • Automating low-complexity, high-scale tasks such as moderation and data recording. These tasks benefit from the reliability and efficiency of AI, freeing workers to focus on more challenging issues.
  • Data-driven decision-making that requires processing large amounts of data, where AI can significantly outperform human approaches. An example is Uber's use of ML to match drivers with passengers in a way that minimizes wait time, improving the user experience.
  • Extracting actionable insights from large amounts of unstructured data that would otherwise go unused or be too time-intensive to process manually. An example, sketched below, is automating text mining to consolidate product insights after conducting video user feedback interviews.
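
The sketch below illustrates that text-mining example in Python: short transcript snippets are converted into TF-IDF features and clustered so recurring themes can be surfaced for a human to review. The library (scikit-learn), the placeholder snippets, and the choice of k-means clustering are illustrative assumptions, not a prescription for how such a pipeline must be built.

```python
# Sketch: mining interview transcripts for recurring product themes.
# The snippets below are placeholders; in practice they would come from
# transcribed user-feedback interviews.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

snippets = [
    "The checkout flow is confusing and has too many steps",
    "I love the new dashboard but it loads slowly on mobile",
    "Checkout kept failing when I tried to apply a discount code",
    "Mobile loading times make the dashboard hard to use",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(snippets)            # text -> numeric features

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# Surface the most characteristic terms per theme for a human to review.
terms = vectorizer.get_feature_names_out()
for i, center in enumerate(kmeans.cluster_centers_):
    top = center.argsort()[::-1][:3]
    print(f"Theme {i}: {', '.join(terms[j] for j in top)}")
```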

Though significant ML research continues, we are very much in the deployment phase of this technology's development cycle. Hiring in-house ML talent is difficult, particularly if one seeks individuals who have made significant contributions to ML research. However, the frictions associated with deploying ML continue to decrease due to the proliferation of AI-as-a-Service, or ML service providers. Many business problems that lend themselves to ML solutions do not require developing models from scratch and can often be tackled with existing offerings from ML service providers. This technology is particularly disruptive to a company's workforce because it introduces a new set of high-value skills: familiarity with ML models and standard ML application programming interfaces (APIs), mindfulness of quality assurance as a safeguard against adversarial attacks that produce incorrect model output, and the ability to diagnose distributional shifts that can degrade a model's performance.
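
Of the skills listed above, diagnosing distributional shift is perhaps the easiest to start practicing with simple monitoring. The sketch below compares the distribution of a single input feature in production against the same feature in the training data using a two-sample Kolmogorov–Smirnov test; the synthetic data, the single-feature framing, and the significance threshold are assumptions made purely for illustration.

```python
# Sketch: a simple check for distributional shift on one model input feature,
# comparing recent production data against the training data.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)    # stand-in for training data
production_feature = rng.normal(loc=0.4, scale=1.0, size=1_000)  # stand-in for live traffic

statistic, p_value = ks_2samp(training_feature, production_feature)
if p_value < 0.01:  # illustrative threshold
    print(f"Possible distributional shift detected (KS statistic = {statistic:.3f}); "
          "review the model's inputs or consider retraining.")
else:
    print("No significant shift detected on this feature.")
```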

Understanding the current limitations of ML is important to best understand the business applications that most readily lend themselves to ML automation or ML augmentation:

  1. Data inefficiency: Most ML models require enormous amounts of problem-specific data to achieve the state-of-the-art results that are often heralded in the media and showcased in research papers. For many use cases, it is either not possible or not economically viable to curate such a large dataset. For interested readers, this drawback can in part be addressed through transfer learning, active learning, zero-shot learning, and weak supervision.
  2. Noninterpretability: The output of most ML models cannot be readily reasoned about by non-technical decision-makers or explained to a user who is not technically trained. Such ML models are commonly referred to as black-box models. For many applications, this opacity cannot be tolerated: for example, when a doctor must justify the treatment recommendation of an ML system to their patient. For interested readers, this drawback can in part be addressed through structured ML models and sparse ML models (see the sketch after this list).
  3. Lack of “traditional” fairness: There is a disconnect between the common notion of unbiasedness and the statistical notion of unbiasedness. In the statistical sense, an ML model is said to be unbiased if its output is consistent with the statistical features of the data used to train it. When the data used to train the model does not sufficiently reflect the instances in which the model will be deployed, capturing the statistical features of the dataset is not sufficient and can lead to models that are unbiased statistically but biased in the common sense of the word (for example, by using protected attributes like race or gender to make lending decisions). For interested readers, this drawback can in part be addressed through data augmentation and model regularization.
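
As one illustration of how the noninterpretability drawback can be mitigated, the sketch below fits a sparse linear model with Lasso regularization: most coefficients are driven exactly to zero, so a prediction can be explained in terms of a handful of named features. The dataset (scikit-learn's bundled diabetes data), the library, and the regularization strength are assumptions chosen purely for illustration; sparse models are one of several routes to interpretability, not the only one.

```python
# Sketch: a sparse linear model as one route to interpretability.
# Lasso regularization drives many coefficients to exactly zero, so a
# prediction can be explained in terms of a handful of named features.
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Lasso

data = load_diabetes()                # features in this dataset come pre-scaled
model = Lasso(alpha=1.0).fit(data.data, data.target)

# Only the features the model actually uses survive with nonzero weights.
for name, coef in zip(data.feature_names, model.coef_):
    if coef != 0.0:
        print(f"{name}: {coef:+.1f}")
```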

The space of ML research is constantly progressing, and given the pace of innovation, it is only a matter of time before these common pitfalls are addressed by widely accepted approaches.

The Role of Business Leaders

Business leaders can best leverage AI by proactively embracing the technology and by growing an ML-fluent workforce. Here are four key steps that can be taken:

  1. Embrace the horizontal opportunity to embed AI across business functions. Transform your business into an AI company by first identifying low-complexity, high-scale tasks, data-driven decision-making problems, and opportunities to extract insights from unstructured data, and then by identifying which of these problems can be streamlined using off-the-shelf AI-as-a-Service solutions.
  2. Understand which types of problems can be addressed with ML and which cannot. Not every problem can or should be solved using ML. Business leaders must develop an intuition as to when it is worth investing in curating a dataset from scratch to build an ML model, when an off-the-shelf model will suffice, and when it simply is not worth the investment. Developing this intuition requires a solid grasp of the limitations of ML outlined above.
  3. Index for creativity when hiring and promoting. ML should be embraced as a tool to augment what humans can achieve. ML automation frees employees from having to complete many mundane tasks. However, ML consistently fails at displaying the type of fundamental creativity that defines human ingenuity. Businesses that can both leverage ML and cultivate this creativity in their workforce will have a sizable competitive advantage.
  4. Emphasize ML literacy in employee development. As innovation cycles continue to shorten, employees must continually reskill and upskill. Though I maintain that in the future knowing how to code will be as commonplace as knowing how to write by hand, what is most needed today is a common, well-understood vocabulary with which both technical and non-technical stakeholders can discuss ML models, along with a shared understanding of ML's use cases and limitations. That understanding must be continually refined as technological progress continues.

Special thanks to Millian Gehrer, John Hallman, Noah Jones, Michael Li, Andini Makosinski, and Amar Shah for sharing valuable feedback while I prepared this article.
