The author (left) with other young alumni of St. Lawrence University. Photo credit to St. Lawrence University’s Career Center.
After two cars, three planes, and 800 miles, I finally made it back to my alma mater, St. Lawrence University. Stepping back on campus was nostalgic; St. Lawrence, and all of its faculty and staff, had a tremendous influence on me during the most formative years of my life.
For the first time since graduation, I was returning as a speaker, not a student. At the beginning of the week, I gave two guest lectures about the role of financial institutions in the technology sector and participated in a career panel for undergraduate students.
Across all of the talks and panels, the most common question I got was, “As a non-software engineer/data scientist, how did you land a job in Artificial Intelligence?” It comes down to curiosity and your desire to learn (core principles of Infinia ML’s culture).

As a Product Manager at Infinia ML, my job is to help clients integrate and deploy machine learning solutions into existing workflows and software applications. However, I’m not writing code on a daily basis; I act as a translator between business leaders, data scientists, and software engineers to help deliver tangible business results through people and technology. Being a “traffic director” means that my day-to-day job has a lot of variety. One day, I could be learning the basics of machine learning research; the next day, I’m drawing workflow diagrams for clients who want to deploy Infinia ML’s algorithms. Whether I’m working with technical folks or business owners, the key to communicating across teams is to “speak their language” and phrase problems and solutions in a way each team can understand.
By nature, machine learning is a highly technical topic. ML projects require efforts from several analytically oriented teams spanning Data Science, Engineering, and DevOps (just to name a few). Experts in these fields are hard to come by; AI stars can command “big salaries similar to those fetched by professional athletes.” Although Data Scientists and Software Engineers typically steal the show (and for good reason), there are many opportunities for less technical folks to leave their mark on the machine learning world.
#1: Making Connections Across Contexts
How much do you think a machine learning algorithm is worth? $10,000? $100,000? $1 million?
I argue $0.01. That’s right: one cent.
The explosive growth of machine learning is fueled by the open source community. The upside is that experts make their knowledge freely available. The downside is that algorithms are generally commoditized. Thus, the challenge isn’t (necessarily) deriving the world’s best algorithm; it’s finding the world’s best way to use it.
At Infinia ML, we find that our most successful projects connect technical talent with industry experts. Industry practitioners bring the subject matter expertise required to deliver lasting value; Data Scientists and Software Engineers leverage advances in technology to make visions a reality. The future of machine learning rests on the shoulders of those who find opportunities to improve lives and create value through technology. Embracing change and making novel connections between technologies, industries, and people is the key to success.
#2: Communication and Presentation Skills
Translating concepts from highly technical algorithms into “kitchen English” is a talent in its own right. You don’t have to derive algorithms or write code, but you’ll need to know what questions to ask, and the pros and cons of decisions made by Data Scientists and Software Engineers.
As machine learning engulfs the technology landscape, “technical translators” will become increasingly important. We’ll need people to bridge the gap between the C-Suite, managers, and technical teams. Machine Learning isn’t magic; like any other investment, business leaders must evaluate the opportunity cost and optimize accordingly. That’s hard to do without a shared understanding between the business and technology teams.
#3: Empathy and Leadership
Leadership in the age of artificial intelligence can be summarized by Uncle Ben from Spider-Man: “With great power comes great responsibility.” As algorithms gain more autonomy and make decisions on behalf of governments and corporations, we’ll need people to hold those who manage and deploy algorithms accountable for their actions. We’ll need leaders who value ethics, data security, and the welfare of those affected by decisions made by artificial intelligence. Most importantly, we’ll need executives and employees who aren’t afraid to speak up in the event of bias or injustice.
For example, let’s consider an algorithm that approves or rejects mortgage applications. To train this type of algorithm, banks use historical loan and applicant profile data to make predictions on the creditworthiness of future applicants. If applicants were historically denied loans for biased reasons, then the data will reflect that inequality. If data scientists aren’t careful, the model that learns from said data would then also reflect the bias. Leaders must proactively mitigate this type of risk throughout the entire machine learning development cycle to avoid negative societal outcomes and headline risk.
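To make the mechanism concrete, here is a minimal, entirely hypothetical sketch (the data and feature names are invented for illustration, not drawn from any real lender). A trivially simple “model” that just memorizes the majority historical decision for each applicant profile will faithfully reproduce past discrimination — even when the protected attribute itself isn’t a feature, a proxy like neighborhood can carry the bias forward:

```python
from collections import defaultdict

# Hypothetical historical loan decisions. "area" acts as a proxy for a
# protected group; equally qualified "south" applicants were denied.
historical = [
    {"income": "high", "area": "north", "approved": 1},
    {"income": "high", "area": "north", "approved": 1},
    {"income": "high", "area": "south", "approved": 0},  # biased denial
    {"income": "high", "area": "south", "approved": 0},  # biased denial
    {"income": "low",  "area": "north", "approved": 0},
    {"income": "low",  "area": "south", "approved": 0},
]

def train_majority_model(records):
    """'Train' by predicting the majority historical decision per profile."""
    tally = defaultdict(lambda: [0, 0])  # profile -> [denied, approved]
    for r in records:
        tally[(r["income"], r["area"])][r["approved"]] += 1
    return {profile: int(c[1] > c[0]) for profile, c in tally.items()}

model = train_majority_model(historical)

# Two equally qualified applicants who differ only by neighborhood:
print(model[("high", "north")])  # 1 -> approved
print(model[("high", "south")])  # 0 -> denied: the historical bias survives
```

Real models are far more sophisticated than a lookup table, but the failure mode is the same: a model optimized to match past decisions inherits whatever inequity those decisions contained, which is why leaders need to audit both the data and the predictions.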
As I left campus, I reflected upon my experience as a student of the Liberal Arts. Although I didn’t learn how to write code or prepare financial statements, I learned “how to learn” and adapt to technological change. It was amazing to see a generation of students who are excited about the future and passionate about leading with integrity. Whatever their major, they’ll be shaping the future of machine learning and beyond.