Back to the Present: Leaving the self-driving car in the lab
By Eric Danziger and Prateek Sachdeva
Invisible AI is a visual intelligence platform building the next generation of computer vision solutions for manufacturing. The company has created an easy-to-use AI-enabled camera that tracks body posture and movement to automatically prevent assembly errors in real time. Invisible AI is deploying its cutting-edge technology at major automakers, including Toyota North America.
Deep learning and “AI” were meant to lead us to the promised land. From self-driving cars to in-home robots, 2020 was supposed to be the year AI took over. Yet most AI companies have pivoted into consulting business models, while many others face R&D delays. At Invisible, we believe AI can prove transformational over time, but it needs to be deployed in very targeted use cases today in order to flourish. This philosophy led to our first product: a no-code computer vision solution that can be deployed in 15 minutes.
Before founding Invisible AI, we worked at Luminar, a self-driving company whose lidar sensors will power the future of autonomous vehicles. Together, we grew Luminar’s software team and built the backbone of their advanced self-driving systems. Over time, however, the barriers to reaching full self-driving became clear to us. The crux of the problem was that most algorithms in use had only a surface-level understanding of the world. This reality was echoed by industry-wide delays from even the most advanced self-driving companies. We saw the industry’s excuse evolve from “it doesn’t work yet” to “we need more data”, even with billions of miles logged.
While deep learning is great for narrow use cases, problems emerge when we dig deeper. Siri and Alexa can’t help with queries more complicated than “start a timer”. Natural language models produce realistic-sounding content that is devoid of meaning. We know the technology will get better over time, as evidenced by the last decade alone. Researchers will continue to push the envelope, and so will we. However, in order to build a company that can provide decades of innovation, we have to strike a balance between practical business models and investments in the future.
Our experience in self-driving made it abundantly clear that we cannot depend on the false hope of end-to-end deep learning to solve everything. We cannot just say the word “AI” for every problem and walk away. That approach is a data-heavy beast that does not scale across domains and pushes most AI companies into a consulting business model, as noted by the folks over at a16z.
Invisible AI is here to change that.
We left self-driving with two key lessons: (1) edge compute is required for any real-time computer vision application; and (2) customers want an end-to-end AI solution that’s easy to use and deploy. With these requirements in mind, we developed a no-code product that works out of the box without long lead times, bandwidth needs, or other pitfalls of current AI solutions.
At Invisible, we have built an AI-enabled camera that monitors body posture and movement to automatically prevent assembly errors in manufacturing facilities. Our key advantage is that anyone can deploy our camera within 15 minutes, without writing a single line of code. Beyond that simplicity, we have designed our product as an error-proofing tool that works with employees instead of replacing them.
With our proprietary neural networks and algorithms, we reduce the dependence on the enormous data collection that makes traditional approaches impossible to scale in manufacturing. An average automotive factory has over 300 assembly workstations; if our solution required four weeks of data collection per process, we would spend more than two decades (over 1,200 weeks, about 23 years) just collecting and labeling data.
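The scaling argument above is simple back-of-envelope arithmetic. As a sketch (the 300-workstation and four-week figures come from the text; treating them as sequential, per-process effort is our simplifying assumption):

```python
# Back-of-envelope estimate of per-factory data-collection time
# if every workstation needed its own multi-week collection cycle.
workstations = 300        # assembly workstations in an average automotive factory
weeks_per_process = 4     # assumed data-collection time per workstation

total_weeks = workstations * weeks_per_process
total_years = total_weeks / 52

print(f"{total_weeks} weeks ≈ {total_years:.1f} years")  # 1200 weeks ≈ 23.1 years
```

Even generous parallelism only divides that figure by the number of simultaneous collection teams, which is why per-process data collection does not scale.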
In addition to quick deployment times, our cameras have built-in AI chips that ensure all processing happens locally and no video ever leaves the camera. This lets us avoid most IT objections (can’t avoid them all!) and allows our customers to deploy hundreds of our cameras without a hiccup in their IT infrastructure. Not only do most customers lack the bandwidth to upload 1080p video streams to the cloud, they also prefer to avoid cloud interactions altogether due to privacy and legal concerns. This underscores the need for edge compute in any practical AI deployment.
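The bandwidth objection is easy to quantify. A rough sketch, where both the ~5 Mbps per-stream bitrate (typical for H.264 1080p) and the 200-camera count are illustrative assumptions rather than figures from a specific deployment:

```python
# Rough sustained-uplink estimate for streaming 1080p video
# from a fleet of factory cameras to the cloud.
mbps_per_camera = 5    # assumed H.264 1080p stream bitrate, in Mbps
cameras = 200          # assumed size of a "hundreds of cameras" deployment

total_mbps = mbps_per_camera * cameras
print(f"{total_mbps} Mbps ≈ {total_mbps / 1000:.1f} Gbps of sustained uplink")
```

A sustained uplink on the order of a gigabit per second, before storage and compute costs, is exactly the load that on-camera processing avoids.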
Setting aside the bandwidth, privacy, and legal issues, it is also incredibly expensive to run real-time computer vision in the cloud, and those who do are effectively pouring their venture capital into the pockets of Jeff Bezos. While we pursue an edge strategy, we encourage others to continue down the cloud route, because we own a bunch of AMZN stock in our respective IRAs :)
Our core philosophy around computer vision and its practical future has earned us paid pilots with multiple automakers, most prominently Toyota North America. We also recently raised a $3.6M seed round, led by 8VC, to grow our team and product. The future consists of intelligent cameras that just work, and Invisible AI is at the forefront of this revolution.
While we all wait for self-driving cars, check out our website at www.invisible.ai to join us on a more exciting journey.