Why we invested in (well, co-built) AUTOMI
At OSS Ventures, we invest in and co-build the software stack of the future of operations. As venture builders operating since 2019, we have created 12 companies, of which 8 raised a Series A, and are present in more than 800 factories throughout Europe. As builders of solutions for the manufacturing world, we are lucky to be at the forefront of the future of operations and are grateful for it. We have made a habit, when a company gets created, of writing the short story of how it went and the things we learned. You may find the writing below interesting, as it lays down the early-stage story of Automi, and sometimes some great jokes and pictures made the cut.
The strange state of the manufacturing workforce.
We first started to get a strange feeling at the end of 2021. A couple of things were undoubtedly true, and yet they did not make sense when put together:
- Recruiting was a top-three challenge for any manufacturing CEO;
- Productivity of operational people in manufacturing had not meaningfully increased in five years;
- On the shop floor, every single operational factory worker complained about the fragmentation of work, the workload, and especially repetitive quality-control tasks.
Lacking new operational factory workers while also lagging behind in automation seemed counter-intuitive. So we set out to pre-explore the space by visiting 30 factories and understanding the state of the workforce from the ground up. The results surprised us:
- In the EU, more than half of the operational factory workforce is aged over 45. The know-how that leaves each year as those people exit the workforce is an immense loss for the organization. For one particular luxury brand we talked with, it was the main reason they faded into irrelevance (and sadly had to close shop);
- About 30% of all the hours worked in EU factories are visual inspection (looking at something to make sure it’s OK for quality, or looking at things to count them);
- Only 5% of visual inspection tasks are automated, because the projects to do so are very costly: they require special types of cameras at €5K+ plus a yearly license fee, and, for special cases, a data scientist to come and train the model on one case only, which cannot work because … ;
- 95% of visual inspection tasks are a “two hours a day, fragmented over an 8-hour work shift” type of activity. Most factory workers perform visual inspections of various sorts, of very different pieces, fragmented over their actual workday. That is incompatible with the “€100K a year, €5K camera, only one use and one algorithm” type of current visual inspection solutions.
Sounds like quite a fun challenge to try to solve. So we embarked on setting operational workers free of boring visual inspection tasks while solving all of the above.
Galem, deep tech and the business/tech equation
The fit between Galem, OSS and the space is almost too good to be true. A former rocket engineer (yes, an actual one) who worked in Germany, the US and South Africa, he also spent two years as a product manager for Ubuntu’s IoT and camera division. Sharing so many of our values, skills and ways of working, it was an instant hit. We set to work.
To solve the equation laid out above, our plan was simple:
- Find the technology that could perform the visual inspection on its own;
- With reasonable (2 hours) training time, … ;
- With reasonable accuracy (5 errors per 1,000 positive signals, the same rate as human inspection), … ;
- … and that the average blue collar worker could operate by herself.
We laid out the thinking to a test panel of three factories. The three factories said yes instantly. This was kind of a first for an OSS cohort, as usually we get more of a 15 to 35% positive response rate. We understood that the value to be unlocked was immense, and the blocker was the tech product. So we took the three cases and tried to automate away.
In less than one week, we automated the three tasks in the three different factories without even bothering to build a real, repeatable product. We just wanted to learn. And we learned:
- You can use any reasonable off-the-shelf camera + edge computing device, and it’s very simple to set up;
- With one hour of a data scientist’s time to choose the right model for the problem at hand, the tech was sufficient to automate with less than 2 hours of training time;
- The shop-floor workers actually loved automating those tasks (which was a surprise to us) because they consider them boring, repetitive and not value-added;
- There was almost always some process, software or data repository into which the results of the visual inspection had to be fed so as not to break the operational flow of the factory.
So we were convinced it was technically feasible to put cameras on the shop floor, train a machine learning model, and automate small visual tasks efficiently.
And from a commercial point of view, the co-builders started asking for more automations, faster, in all places of the factory.
Life seemed grand.
There were two buts.
- The “choosing the model” part. Picking among dozens of candidate machine learning architectures, depending on the task at hand and various parameters, was expert stuff. Deep and complicated expert stuff. No way to teach that to a factory worker, nor to anyone else in the factory, to be honest. And very costly. Galem and a few of the PhDs on the team could do it. But not many people could. That does not scale. That does not scale at all.
- The “setting up the camera and plugging in the software” part. The average factory worker does not have the skills to install the camera, direct the streaming flow to a piece of software, and set the whole thing up. We explored several options, such as producing our own cameras (that does not scale at all, it’s a bad idea, and it’s a negative-margin business on average), hiring consultants (that does not scale either) and troves of other stuff. Everything seemed to push us toward a consulting, non-scaling business model.
Those two blockers were a b* to deal with. Our initial market study made it very clear that what we were trying to do had been tried before. Some companies had been acqui-hired after venturing into creating their own camera + a special chamber to get the same lighting every time + setting everything up themselves (it did not scale; it was too complicated and costly; they failed). Other companies had tried to compete with the incumbents by just putting their software and cameras out on high-volume visual inspection. That did not end well either.
We were stuck. But at least we knew why, so we had to solve this.
Nerds to the rescue. Twice.
The first “a-ha!” moment occurred while touring a UK plant. We were very happily received by the local factory director and a young engineer who introduced himself as “the local nerd”. And boy, was he a nerd. He had, by himself, programmed 100+ Arduinos (a cheap programmable piece of hardware, usually used for home projects) to automate machines, capture data streams coming from their limited sensors, and other good things. The nerd told us it was one of his passions and that he would have been a coder if not for his mechanical engineering degree. So it hit us:
There is a nerd in every factory of the world. An engineer who wanted to do cool stuff with hardware and software. And one who will be absolutely delighted to put artificial intelligence in place if we make it simple enough (read: no-code) for him to gift us some of his valuable time.
That solved our deployment and project management problem almost instantly. Part of our standard routine when visiting a new factory would be to ask “by the way, who’s the local nerd? You know, the one weirdly deep into automation and 30MB VBA-powered Excel files?”. We never got a blank stare. 100% casual “Oh, you met Brian?” So, half of the problem was solved. Leverage the local nerd, and make the product one they enjoy, to deploy artificial intelligence at scale in the factory.
On to the next challenge: choosing the right model given the type of issue. We actually went round and round on this one for roughly five months. We tried defining categories and prescribing which model to use based on the category. It got so complicated and inefficient it became something of a joke. We had to solve this, and were at a loss as to how. Cue one of those silent afternoons when we were all staring deep into our screens. A nerd on the team stepped up:
“What if we train them all and see what happens?”
And so the selector was born. The simple idea of the selector is: say you have 50 concurrent candidate models. Instead of choosing one, you test them all on the first ten data points. By looking at how they converge (or not) towards good solutions, you can rule out 25 candidates and train only the remaining 25 on the next ten data points. Then you rule out 15 of the 25, then 5 of the 15, and so on. While this is highly inefficient from a purely mathematical point of view, and borderline insane, it got rid of our need for a data scientist. Because the use cases were so simple and straightforward, training 50 models instead of one was a non-issue in terms of server costs. It still took us more than three months to work out the math involved. We actually have a patent application pending for it because, as far as we know, this is the first occurrence of actually doing this in the real world and not just in a mathematical paper.
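Mechanically, the elimination loop described above can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions, not Automi's actual implementation: the `DummyModel` class, the `select` function and the `keep_fraction` parameter are hypothetical names, and real candidates would be vision models scored on held-out accuracy.

```python
class DummyModel:
    """Stand-in for a real vision model; `skill` fakes how well it converges."""
    def __init__(self, skill):
        self.skill = skill

    def fit(self, X, y):
        pass  # a real model would train on the labeled examples here

    def score(self, X, y):
        return self.skill  # a real model would return held-out accuracy


def select(candidates, batches, keep_fraction=0.5):
    """Train every surviving candidate on all data seen so far, then
    prune the worst-scoring fraction after each batch of data points."""
    survivors = dict(candidates)
    seen = []
    for batch in batches:
        seen.extend(batch)  # batches are lists of (features, label) pairs
        X = [x for x, _ in seen]
        y = [label for _, label in seen]
        for model in survivors.values():
            model.fit(X, y)
        # rank candidates best-first and keep only the top fraction
        ranked = sorted(survivors, key=lambda n: survivors[n].score(X, y),
                        reverse=True)
        n_keep = max(1, int(len(ranked) * keep_fraction))
        survivors = {name: survivors[name] for name in ranked[:n_keep]}
        if len(survivors) == 1:
            break
    return survivors


# Toy run: 8 candidates, three batches of ten labeled points each.
candidates = {f"model_{i}": DummyModel(skill=i / 10) for i in range(8)}
batches = [[((0,), 0)] * 10 for _ in range(3)]
winner = select(candidates, batches)  # 8 -> 4 -> 2 -> 1 survivors
```

This halve-and-continue structure is essentially what the machine learning literature calls successive halving: compute is spent broadly on cheap early rounds and concentrated on the few candidates that keep converging.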
We tested the new selector with one of the three co-builders, with great success. Believe it or not, as OSS is a France-based company, it was at a croissant factory. If you go through Paris Charles de Gaulle International Airport, there is a 33% chance Automi checked your croissant for quality defects. Can’t get more cliché than that.
And to the next steps.
Anyway: the product is live with the first three clients, ten more clients are in the pipe, and an amazing CTO co-founder has been found to complement Galem. We knew the feeling all too well at OSS: we had to get out, as our job is to be the early, hands-on companion of startups, but also the one that knows when to get out of the way for the next chapter of the company’s and the founders’ story to unfold, with the OSS community at a distance.
We wired the money and celebrated the parting of ways with a trip to San Francisco, where we met with VCs and the SF ecosystem. Automi’s fundraising was already overbooked and crowded, and Google had invested in Galem, naming him one of the 10 recipients of a Google grant.
Here’s to liberating blue-collar workers from the repetitive tasks that plague their work. Here’s to venturing into deep tech without losing the speed that makes OSS ventures such a particular bunch. Here’s to incredible founders. Here’s to the power of the nerds. Here’s to Automi.
If you read this far, you’re likely very interested in this story. At OSS Ventures, we run on incredible founders joining, ambitious factory executives taking the leap of faith and working with us, and ambitious investors joining in. If you fit one or all of the above categories, we want to hear from you: firstname.lastname@example.org . Hit us up. Maybe we can file a patent together some day and enjoy some friendly nerdy banter. We love us a good D&D game.