Ari Kamlani of Beyond Limits On The Future Of Artificial Intelligence
An Interview With Tyler Gallagher
The cost of being wrong or taking questionable actions is sometimes given the same “weight” as a normal action, essentially treating it all the same. This worries me because humans have a very conscious side of the brain, where they put a lot of thought into the implications before acting. However, even in basic “narrow AI” cases, e.g., around basic mechanisms of detection and deciding what to do next, we don’t instrument this same thought process in the underlying planning via machine intelligence.
Ari Kamlani is an experienced principal AI & ML leader and accomplished technology professional skilled in driving new strategic business initiatives, delivering insights for data-informed decision making, and building scalable machine learning experiments and solutions. Ari has a BS in Electrical Engineering (EE) from Lehigh University and roughly 20 years of experience across the embedded systems, wireless technology, IoT, sports media, and financial domains. Prior to Beyond Limits, he was a Data Scientist with JP Morgan Chase in their Digital Intelligence group and held a number of independent consulting roles within AI research and innovation divisions.
Thank you so much for joining us in this interview series! Can you share with us the “backstory” of how you decided to pursue this career path in AI?
My journey has been varied, spanning a few different types of organizations, industry domains, and technologies: R&D innovation and incubation teams, enablement groups within the Office of the CTO (OCTO), product design divisions, venture capital accelerators, professional services, and many more. My experience is a mashup of technical expertise, strategy and advisory, clientele engagement, and narrative storytelling.
Many years ago, when I was initially designing embedded systems and wireless technology, I decided to pivot from constructing the underlying technologies that enable sensing and generation of data to becoming a consumer of that data, driving more intelligent decisions via AI. Some of my early projects in this area were around intelligent device “boot-up,” NASCAR Truck broadcast media localization and tracking, and low-cost, high-performance localization tracking in ski resorts and stadiums. So, you could say I shifted which part of the AI technology stack I focused my attention on in the value chain, in large part because my interests were shifting too.
At Beyond Limits, I’m part of the technology team that sits under the Office of the CTO (OCTO) umbrella, operating at the intersection of AI strategy, research, and architecture — and communicating our narratives to the commercial side of the business. I focus on setting our internal initiatives and clientele engagements up for success across both short-term and long-term horizons.
What lessons can others learn from your story?
Looking back, while I have had several gigs already, I think I would have taken even more risks and perhaps ventured to spin out a company or two of my own, as I’m heavily driven by new experiences in my personal and professional life.
When we enter the workforce, we craft our skills for our current job role. As we progress in our careers, we pick up new skills; however, these are very much tied to what our current job asks of us. These are our Opportunity set A skills, but we don’t do enough to expand our Opportunity B, C, and D repertoire. Gartner analyst Jason Pfeifer refers to Opportunity set B as the things nobody is asking us to do, but that are available to us to further grow. In the early stages of my career, I often focused on Opportunity set A, but as I progressed, my toolkit became much broader, juggling a variety of responsibilities and particularly cultivating the more creative and strategic aspects beyond technology.
Can you tell our readers about the most interesting projects you are working on now?
Prior to joining Beyond Limits, I was involved in building and architecting AI platforms. At Beyond Limits I am currently facilitating the strategy, positioning, and technical architecture in building out our end-to-end (E2E) Hybrid AI Cognitive Platform across a variety of use cases and deployments. It’s an interesting project, as it encompasses the entire AI lifecycle via in-house development and partnerships. In contrast to ML platforms built for technologists, e.g., data scientists, ML researchers, and ML engineers, this platform is targeted at knowledge-driven industries, so in addition to the underlying technology, design and user experience (UX) utility is a heavy emphasis.
I am particularly interested in the risk mitigation, trust and safety, and explainable reasoning and audit trail themes, in addition to other signals we are exploring to include in or extend the platform with. Having designed embedded systems for many years prior, optimizing models for deployment to resource-constrained embedded devices with low memory footprints, tight computational budgets, and connectivity limitations is quite appealing to me as well. It’s a bit of a different spin than traditional edge devices and ML accelerators have today.
None of us are able to achieve success without some help along the way. Is there a particular person who you are grateful towards who helped get you to where you are? Can you share a story about that?
I’m shaped by the collection of my experiences and interactions with people, rather than any single individual. I feel it’s a bit related to the “who are your five closest chimps” theory from zoology, which was discussed in a recent episode of the Tim Ferriss Show. My circle is composed of people from different experiences, careers, and generations of life. Some are colleagues from former employers, but many come from chance interactions that helped craft my experiences and taught me to look at things through different lenses.
The common thread is having “mental space” to breathe and taking time to recharge, and recognizing that branching out and pivoting to a different career trajectory shouldn’t be seen as a negative, but rather celebrated. There really is no incorrect path, just a different path.
What are the 5 things that most excite you about the AI industry? Why?
- I’m fascinated by technologies that relate to AI safety (risk mitigation, uncertainty reduction, stability, robustness, etc.), particularly in the context of data scarcity, where the resources for AI decision-making are profoundly limited. The AI safety theme has a place in a large variety of industries, and the potential use cases for the corresponding technologies can dramatically affect how users interact with systems and adopt technologies. Today, purpose-fit guardrails and preventions are still lacking, leaving AI models susceptible to silent failures.
- Recent industry and market events have highlighted the need for additional tactics and methodologies when we don’t have the underlying historical data to model, or must synthesize it. I’m looking forward to flipping the way we instrument these technology innovations so we can make a bigger impact on society and be less vulnerable to events like COVID and recessions.
- As technologies start to become more mature, instrumented by machine intelligence, I’m looking forward to today’s experiences becoming more natural, interactive, and less restrictive. Whether it is for the consumer, or B2B context, the way we interact with technology will fundamentally change over the coming years.
- Individuals are driven by curiosity and exploration; however, many products and services tend to optimize for short-sighted, immediate-response incentives rather than longer-term, discoverable utility with positive benefit. I’m looking forward to interactions that are more widespread, adaptable, and beneficial, acting as a collaborative partnership with the user.
- Individuals never have enough time. Looking at time as a precious resource, how can AI give us back some time throughout the day? Some of this will be automating the boring, mundane, repetitive tasks, but I’m more looking forward to how AI can assist us in the creative tasks. While this is where humans excel, how can AI support us in developing more of the “surprising and novel” aspects?
What are the 5 things that concern you about the AI industry? Why?
- We should be wary of AI technologies that remove humans from the process, rather than serving as advisors, particularly those in applications that are prone to unintended biases. These could have detrimental and compounding effects as we instrument intelligence with narrow-minded viewpoints.
- The cost of being wrong or taking questionable actions is sometimes given the same “weight” as a normal action, essentially treating it all the same. This worries me because humans have a very conscious side of the brain, where they put a lot of thought into the implications before acting. However, even in basic “narrow AI” cases, e.g., around basic mechanisms of detection and deciding what to do next, we don’t instrument this same thought process in the underlying planning via machine intelligence.
- Organizations are always looking to make cost-cutting moves, but at what cost should they do so? By utilizing AI to further automate more jobs to be done (JTBD) and deliver value quickly, what tradeoffs are we making? While some tasks will become more automated, we should be careful about extending this paradigm too far.
- While we have taken some small steps towards protecting users’ privacy and instrumenting fairness, we are far from where we need to be. Today, these are not really treated as “first-class citizens,” but rather as an afterthought. The reality is that businesses have different motivations and incentives, and privacy and fairness have not received the attention they deserve. In most cases, I would say many of the aspects that fall under the Responsible AI umbrella still have much work ahead of them.
- The environment: If we don’t take care of the effects AI is having on our environment, we will have little left to protect. Consider the increasing effect that carbon emissions are having on the world we live in. More focused, prioritized efforts are needed around carbon neutrality, net zero, and climate change.
As you know, there is an ongoing debate between prominent scientists, (personified as a debate between Elon Musk and Mark Zuckerberg,) about whether advanced AI has the future potential to pose a danger to humanity. What is your position about this?
It is true that AI can do great good, and it is also true that it can do great harm. I’m generally in the camp that believes AI will do more good than harm. However, it will require constant vigilance on the part of government, industry, and citizens to put the proper guardrails in place to ensure AI does not misbehave or act in untrustworthy ways. While AI development and operationalization have matured in the last decade, this is an area where we still have a lot of room to grow.
At Beyond Limits, we believe that the real potential of AI lies within a symbiotic relationship with people, assisting humans in the partnership to apply their attention, experience, and passions to solving critical problems. That’s why we are committed to our pursuit of evolving methodologies under the responsible AI umbrella through solutions that offer human-understandable reasoning and explainable decision paths in the presence of ambiguity and high uncertainty conditions.
What can be done to prevent such concerns from materializing? And what can be done to assure the public that there is nothing to be concerned about?
Today, the state of many technology efforts is focused on the paradigm of “reaction,” e.g., instrumenting monitoring for potential violations or potential bad actors in the system and alerting to remediate. However, as the AI field evolves, there is an opportunity to shift from this “reactive” approach towards a “preventive” one. This preventive approach needs to be instrumented by the underlying technology.
Further, when users interact with products and services, there is a lack of transparency around the limitations and weaknesses of their capabilities. Clearly communicating these limits, along with the mitigations that are put in place, should help make strides towards improving public awareness.
How have you used your success to bring goodness to the world? Can you share a story?
Some of my past and current technology efforts have contributed to bringing in a more inclusive voice that extends beyond technologists, such as my past contributions to the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems (A/IS). In addition, many of the technologies I help create try to adopt practices around mitigating unfair or detrimental consequences.
As you know, there are not that many women in your industry. Can you advise what is needed to engage more women into the AI industry?
There’s a vast need for varying skills, expertise, backgrounds, and points of view. My path was not direct, and others should not feel that because they took a different trajectory, with a different set of skills and academic qualifications, they cannot pursue a career in the AI industry. The field is broad, and there are many ways to contribute to it, from the underlying technology to more supporting efforts. AI will touch every part of our lives, and like politics, you need to get involved if you want a wide set of voices shaping our AI future.
What is your favorite “Life Lesson Quote”? Can you share a story of how that had relevance to your own life?
“We are what we repeatedly do. Excellence, therefore, is not an act, but a habit.”
This quote from Aristotle is one of the three tattoos on my arms. It is often a misunderstood quote, open to a few interpretations. That is often the case in life: many interpretations, without absolutes.
The quote resonates with me, and the tattoo acts as a reminder, because it frames a foundational way of living: strive to be better through good practices and habits, rather than through any sporadic sequence of events. However, this requires some mindfulness about where we place our focus.
I like to think I am a bit more selective in where I choose to spend my time. Our time is limited and precious, so we need to guard how it is used. With that resource in mind, I strive to stitch together a series of moments through thoughtful, consistent, and motivated behavior.
You are a person of great influence. If you could start a movement that would bring the most amount of good to the most amount of people, what would that be? You never know what your idea can trigger. :-)
Different communities will value development and progress differently. However, with one of my technology focus areas being the “Responsible AI” umbrella, and a corresponding interest in fairness and equality for marginalized and underserved communities, I would like to see greater attention brought to serving individuals’ needs on “similar footing.” This could range from how consumers and businesses use technology to broader social and political movements, attaching to ongoing efforts and conversations around climate change and net zero carbon emissions.
How can our readers further follow your work online?
You can check out my LinkedIn here and my website here. I spoke at the Big Data and AI Toronto conference last year on Patterns of an AI Lineup Card, and I will be speaking at the same conference this coming October in the following sessions: Future Trends and Innovations in AI, and Strategies to Navigate the AI Technology Landscape.
This was very inspiring. Thank you so much for joining us!
About The Interviewer: Tyler Gallagher is the CEO and Founder of Regal Assets, a “Bitcoin IRA” company. Regal Assets is an international alternative assets firm with offices in the United States, Canada, London, and Dubai, focused on helping private and institutional wealth procure alternative assets for their investment portfolios. Regal Assets is an Inc. 500 company and has been featured in many publications, such as Forbes, Bloomberg, MarketWatch, and Reuters. With offices in multiple countries, Regal Assets is uniquely positioned as an international leader in the alternative assets industry and was awarded the first-ever crypto-commodities license by the DMCC in late 2017. Regal Assets is currently the only firm in the world that holds a license to legally buy and sell cryptos within the Middle East and works closely with the DMCC to help evolve and grow the understanding and application of blockchain technology. In addition to his role with Regal Assets, Tyler is a regular contributor to Forbes, Arianna Huffington’s Thrive Global, and Authority Magazine. Tyler has also been featured in many news publications and has been a guest expert on “The News with Ed Schultz.” Tyler is a proud member of the Forbes Finance Council, a private, invite-only group of hand-selected industry leaders.