Daniel Langkilde Of Kognic On the Future of Artificial Intelligence

An Interview With David Leichner

David Leichner, CMO at Cybellum
Authority Magazine
14 min read · Sep 26, 2023


AI will help us discover new things about the universe. There are so many things to learn that we need all the help we can get. By harnessing AI, I think researchers will be able to make discoveries that currently are out of reach. Science will still require asking great questions, something I think humans will still be responsible for. But quickly exploring concepts and connecting data in collaboration with AI will accelerate our expansion of human knowledge.

As a part of our series about the future of Artificial Intelligence, I had the pleasure of interviewing Daniel Langkilde.

Daniel Langkilde is a world-renowned expert in Machine Learning, and is passionate about the potential of ML and AI to address a wide range of important issues, from practical problems in product development to the larger challenges we face as a species. As CEO and Co-Founder of Kognic, he spearheads a team of data scientists, developers and experts in Artificial Intelligence (AI) alignment to provide the leading data platform for performance-critical applications such as autonomous driving. Prior to founding Kognic, Daniel gained extensive experience in delivering machine learning solutions at global scale as Team Lead for Collection & Analysis at Recorded Future, the world’s largest private intelligence company, which maintains the widest holdings of interlinked threat data sets. Daniel earned his M.Sc. in Engineering Mathematics at Chalmers University of Technology, where he also served as President of the Student Union and as a Member of the Board of Directors, and has been a Visiting Scholar at both MIT and UC Berkeley.

Thank you so much for joining us in this interview series! Can you share with us the ‘backstory’ of how you decided to pursue this career path in AI?

I am a robotics nerd, and I’ve been obsessed with machine learning for as long as I can remember. I’ve always had a fascination with how the human brain works, and a desire to figure out if we can somehow infuse some part of that into machines. I started building robots in my parents’ basement when I was 11 or 12 and I guess I haven’t stopped since. I’m pretty humbled by the challenge, because humans are extraordinarily sophisticated and we are most likely still just scratching the surface. But I’ve always had a very deep level of excitement around this kind of intellectual exploration.

I learned to program in my early teens to make my own autonomous machines smarter. After high school, I was selected to attend the Research Science Institute at MIT, which is where I decided to pursue an M.Sc. in Engineering Mathematics to better understand the theory behind machine learning. Since then, I’ve also been a Visiting Scholar at the AMPLab at UC Berkeley, working on human-machine collaboration and AI.

After graduating, I became the first machine learning hire at Recorded Future, the world’s largest private intelligence company, where I was responsible for Collection & Analysis. Recorded Future was backed by Google Ventures and In-Q-Tel. I worked there for almost five years, and the company was eventually acquired by Insight Partners for roughly $780M.

I co-founded Kognic in 2018 with Oscar Petersson, a fellow engineering physicist who began his career in management consulting and M&A. Together, we developed a machine learning platform that has evolved into a core toolset for Advanced Driver Assistance Systems, Autonomous Driving and Active Safety development. Several of Kognic’s investors, board members and team members work or used to work at Recorded Future.

What lessons can others learn from your story?

Always play the long game, and always be busy building. I’ve been investing in learning as much as I can every day for as long as I can remember. If you keep adding another brick every day, eventually you have something pretty amazing. But it takes time, so you have to enjoy the journey.

Can you tell our readers about the most interesting projects you are working on now?

My current company, Kognic, provides a data platform that accelerates Machine Learning (ML) for performance-critical applications such as autonomous driving and robotics. We offer an end-to-end software platform focused on Artificial Intelligence (AI) alignment that helps improve safety for automotive manufacturers and other mobility companies.

Kognic addresses AI alignment, a very practical issue that has an enormous impact on product development — specifically on the ability to steer products toward the goals and preferences of the people who use them. To date, the biggest bottleneck in making AI products useful for consumers has been the lack of AI alignment. We’ve seen huge improvements in AI technology and performance over the last few years. At this point, AI systems can learn almost anything. Consequently, the constraint is no longer the AI model’s ability to learn, but rather our ability to express exactly what it is that we want the AI model to do.

The Kognic platform is currently being used by technology leaders such as Qualcomm, Zenseact and Continental, which provide Advanced Driver Assistance Systems and Autonomous Driving (ADAS/AD) products that power global OEMs such as BMW, Ford and Volvo. We typically interface with CTOs and Engineering and Product Managers who are responsible for Autonomous Driving and Active Safety development. Although our first proof points are in the autonomous vehicle space, we are moving into adjacent markets such as robotics and supply chain logistics. Everything that moves will be autonomous in the future!

None of us are able to achieve success without some help along the way. Is there a particular person who you are grateful towards who helped get you to where you are? Can you share a story about that?

I’m fortunate to have several mentors who help me navigate being a young founder and CEO. The three most important are:

  • Staffan Ingeborn is the Chairman of Kognic. He was the first professional venture capitalist in Sweden, starting in the 1990s. He has been both an operator and an investor for decades, and he has seen that while technology changes, people stay the same. He is always optimistic and encourages me to dream big. Every week, we have a call to discuss things in the company, and he always makes me feel like it will all work out.
  • Dan Werbin is a veteran of the automotive industry who has had several C-suite roles, including CEO of Volvo Cars North America. We met while I was still a student, and he helped me realize the power of learning a craft. When I was looking for my first job, he helped me find Staffan Truvé, and advised me to take on a role as an individual contributor to learn from the ground up. He has taught me a lot about the power of strategy and vision, and how to best align a company around a direction.
  • Staffan Truvé has worked his whole life at the intersection of research and industry. He is a visionary technology leader who has co-founded more than 15 research-based technology startups. He has also been CEO of SICS, the Swedish Institute of Computer Science. In 2009, he co-founded Recorded Future, the world’s largest privately owned intelligence company. I was lucky enough to write my master’s thesis for Staffan, and then joined Recorded Future in its early days. Staffan is insanely driven, constantly learning and deeply optimistic. Every day, I try to be like him.

What are the 5 things that most excite you about the AI industry? Why?

  1. AI will unlock more resources needed to solve global challenges. The world has so many challenges right now — ranging from climate change to the migration crisis to widespread poverty, and more. If we just had more resources, more time and more human capital, perhaps we could make more progress toward overcoming these challenges. It’s in our collective interest as a species to use AI to be more efficient and more capable in solving these problems. And if we can use artificial intelligence as a force amplifier to help us in that, I’m genuinely optimistic that it will make the world a better place.
  2. AI will help us discover new things about the universe. There are so many things to learn that we need all the help we can get. By harnessing AI, I think researchers will be able to make discoveries that currently are out of reach. Science will still require asking great questions, something I think humans will still be responsible for. But quickly exploring concepts and connecting data in collaboration with AI will accelerate our expansion of human knowledge.
  3. AI will remove a lot of boring jobs. I think most people want to work on interesting problems, yet a lot of work is repetitive and boring. The more we can hand such tasks over to AI, the happier we will be. Some are worried people will end up without a job if this happens, but I’m actually not so worried about that. New, more interesting jobs will emerge.
  4. Embodied AI is getting ready for real-world application. Most AI today is virtual. While chatbots and image generation are very cool and useful, I think the really big breakthrough will come when AI enters our physical world. Automated driving is one of the first major applications of embodied AI, but it is just the beginning. Robots are still mostly confined to fulfillment centers and manufacturing plants; I think we are on the brink of seeing them enter human spaces.
  5. AI will revolutionize medicine. The human body is still largely a mystery. We have no idea how the brain works, and there are a lot of incurable diseases. I think AI will be a major accelerator for novel drugs and treatments. I expect medical doctors will use AI to diagnose and treat patients in the near future.

What are the 5 things that concern you about the AI industry? Why?

  1. Short-term, I’m most worried about practical things like misinformation, carbon emissions and workforce exploitation. There are negative side effects of the race to develop more capable AI: language models are lowering the cost of misinformation to zero, powering huge GPU farms emits massive amounts of CO2, and some dubious AI companies exploit their employees in click-sweatshops for low wages. These are all issues I’m confident we can solve, but that does not make them any less serious or urgent. In the next few years, the hardest problem will be to steer these systems to do what we want them to do. If we can train ML models properly to improve over time and align them sufficiently, it will benefit almost every industry.
  2. There are people who view AI as an imminent problem, and who primarily associate it with fear of existential risk. The issue of AI alignment has been hijacked to a certain degree by this line of thinking. I acknowledge that there’s a long-term, deeper AI alignment challenge that we have to get right, but all forms of technology, automation and industrialization throughout history have been disruptive to the workforce. The division of opinion that has emerged comes down to whether we believe the type of systems we have today can become self-improving in the near future.
  3. Today, it’s clear that AI systems are not nearly as good as humans at many tasks. So the first question is: can they potentially start making themselves better? And the second question is: if they can become self-improving, might they spiral out of control and become superintelligent? At this point, no one really knows. There’s no scientific or logic-based method of determining whether it’s likely or not.
  4. We are likely to see an explosive increase in the number of companies and organizations deploying AI without properly testing it. This may lead to an increased amount of misinformation and to failures that may be very damaging to individuals. There is a risk of introducing hidden biases at mass scale into important services, such as banking, mortgages and credit scoring, which could make things very unfair for the people who rely on those services. As an industry, we can eventually overcome these problems, but I’m concerned there may be a lot of unnecessary damage in the meantime.
  5. There is also a risk that some companies could start launching immature AI-powered mobility functions without the proper amount of testing, and exaggerate their reliability in the media. That has already happened in some cases, and people have been injured or killed as a result. Unfortunately, I expect it will get worse before it gets better.

As you know, there is an ongoing debate between prominent scientists (personified by the debate between Elon Musk and Mark Zuckerberg) about whether advanced AI poses an existential danger to humanity. What is your position on this?

First of all, I acknowledge that if we manage to create a system of self-improving AI, it will most likely lead to a runaway increase in intelligence. We have no idea what such a system would decide to do. It’s possible it would solve many world problems, but it is also possible that it would destroy us all. But — and this is a big but — we have no idea if we are even close to such a system. Some argue that recent progress with GPT models has brought us much closer to self-improvement. When results started to indicate that scaling laws would hold for GPT models (i.e., they kept getting better as they got larger), I’ll admit that my beliefs were tested. We definitely have significantly more capable systems today than we did just a few years ago, and it’s possible that our most recent algorithms can be optimized to do more things. But I still think the human brain is much more complex than these algorithms. Unless new paradigms emerge, I think we will see this marginal improvement of capabilities level off.
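(For context: the scaling laws mentioned above are the empirical finding, reported by OpenAI researchers in 2020, that a language model’s test loss falls as a smooth power law in its parameter count. A commonly cited form, with constants fitted from experiments, is

    L(N) ≈ (N_c / N)^(α_N)

where N is the number of parameters and N_c and α_N are empirically fitted constants. As long as this relationship holds, training a larger model predictably yields a lower loss, which is why those results were so striking.)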

That isn’t to say there won’t be short-term negative consequences of AI. We definitely need companies to deploy AI in a responsible way, by taking full responsibility for their entire value chain: the emissions created by GPUs, the working conditions of annotators, and the deployment of language models for disinformation, fake visual content and other potentially harmful applications. I don’t think these problems pose an existential risk, but they could certainly cause damage. The potential is there, and the technology already exists.

What can be done to prevent such concerns from materializing? And what can be done to assure the public that there is nothing to be concerned about?

We need to make sure that humans remain in charge so that we can override, provide feedback, or potentially even turn off certain systems. That’s why we make it our mission to accelerate alignment. We want to make it possible for humans to steer AI towards their intended goals.

As you know, there are not that many women in your industry. Can you advise what is needed to engage more women in the AI industry?

One source of perspective for me is my younger sister, who decided to become an engineer. I’ve asked her about her experience, and many of the things she shared with me have impacted my thinking. First of all, it is important that we break down the stereotype of engineers being nerdy and anti-social. The visual of a hacker as a guy in a hoodie drinking Coca-Cola in the dark is really destructive. At Kognic, we’re trying to do our part by inviting female students to inspirational events, and we make sure our events are inclusive and fun. I think you have to inspire girls early, and it’s in the interest of tech companies to contribute. We need more software developers, and the more diverse our teams are, the stronger we become.

What is your favorite “Life Lesson Quote”? Can you share a story of how that had relevance to your own life?

“The purpose of life is to learn more about the universe. A life properly lived is just learn, learn, learn.” (Charlie Munger, American businessman, investor, and philanthropist)

For as long as I can remember, I’ve optimized my life for learning. I learned to program when I was a teenager, so when it was time for university I decided to get a mathematics degree; that way, I’d expand my knowledge in an area where I was less proficient. When I was looking for my first job, I cared less about the salary than about what I would learn. I took a job as a Machine Learning Engineer and learned how to build large-scale software systems. The ultimate learning experience of my life so far has been starting a company. Every day, my job evolves and new expectations emerge. It’s extremely exciting and rewarding for someone with my preferences.

How have you used your success to bring goodness to the world? Can you share a story?

In our work at Kognic, we are addressing practical challenges that face product development teams as they work to steer and fine-tune their AI-driven products. Our work already makes automated driving solutions safer, which in turn helps save lives.

Accelerating alignment is the key to solving many AI-related problems, across practically every industry. We are currently focused on embodied AI — using AI to build systems with physical embodiments. These systems are expected to work among humans, so ensuring they are safe and reliable has a very concrete positive impact on lives.

In the past few years, data scientists have made enormous progress on the science and technology of AI. But in order for this technology to be adopted, appreciated and enjoyed, we now need to ensure that people feel safe and happy when interacting with embodied AI products.

In order to function properly, AI products need to learn the language of human preferences. Humans interpret things differently and develop preferences based on those interpretations. There is a lot of ambiguity and subtlety in that process, whether we are looking at language models, self-driving cars, or even the spam filter on your email. AI alignment lets us apply the same kind of logic around user preferences, regardless of the specific application.
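To make this concrete, below is a minimal sketch of how pairwise human judgments can be turned into a model of preference. It is written in Python with illustrative names, not any real product’s API: annotators compare pairs of outputs (“A is better than B”), and a Bradley-Terry model fits a latent score per output so that the probability of A beating B is the sigmoid of their score difference. Annotator disagreement simply shows up as conflicting pairs that pull the fitted scores closer together.

    import math

    def fit_bradley_terry(comparisons, n_items, lr=0.1, epochs=500):
        # comparisons: list of (winner, loser) index pairs from annotators.
        # Returns one latent score per item; higher means more preferred.
        scores = [0.0] * n_items
        for _ in range(epochs):
            grads = [0.0] * n_items
            for winner, loser in comparisons:
                # P(winner beats loser) = sigmoid(score_winner - score_loser)
                p = 1.0 / (1.0 + math.exp(scores[loser] - scores[winner]))
                # Log-likelihood gradient: push the winner up, the loser down
                grads[winner] += 1.0 - p
                grads[loser] -= 1.0 - p
            scores = [s + lr * g for s, g in zip(scores, grads)]
        return scores

    # Three candidate behaviors judged by annotators who partly disagree:
    # most prefer 0 over 1 and 1 over 2, but one judgment goes the other way.
    judgments = [(0, 1), (0, 1), (1, 0), (1, 2), (1, 2), (0, 2)]
    print(fit_bradley_terry(judgments, n_items=3))  # ranks 0 > 1 > 2

Scaled up, this same idea underlies reinforcement learning from human feedback: the fitted scores become a reward model that steers a system toward behaviors people actually prefer, which is exactly the steering problem alignment is about.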

Capturing the preferences of users is a very difficult challenge for engineering teams, but if we don’t get this right, we will see a lot of disappointment in the next few years. Customers will reject new technology, and in the worst case, unnecessary accidents will cause human suffering. Beyond the moral imperative of preserving human life, such setbacks will make it very difficult for businesses that have made bold promises and aim to deliver great services and products. So it’s in our collective interest to get this right. We believe the solution to the AI alignment problem is to create excellent software that enables our customers to build better products.

You are a person of great influence. If you could start a movement that would bring the most amount of good to the most amount of people, what would that be? You never know what your idea can trigger. :-)

I would like to see tech companies take more responsibility for their global workforces. AI companies rely on large groups of people in parts of the world where wages are low to create and curate datasets. It makes me sad every time I learn about the way these people are exploited. Paying people less than $1/hour to label data while building a billion-dollar tech business is immoral, in my opinion. By highlighting this, I hope consumers and business leaders will raise their expectations of the services and products they use. We can do better!

How can our readers further follow your work online?

You can always find me on LinkedIn, and on Kognic’s blog.

This was very inspiring. Thank you so much for joining us!

About The Interviewer: David Leichner is a veteran of the Israeli high-tech industry with significant experience in the areas of cyber and security, enterprise software and communications. At Cybellum, a leading provider of Product Security Lifecycle Management, David is responsible for creating and executing the marketing strategy and managing the global marketing team that forms the foundation for Cybellum’s product and market penetration. Prior to Cybellum, David was CMO at SQream and VP Sales and Marketing at endpoint protection vendor Cynet. David is the Chairman of the Friends of Israel and Member of the Board of Trustees of the Jerusalem Technology College. He holds a BA in Information Systems Management and an MBA in International Business from the City University of New York.
