5 early-stage European startups enabling better human-machine relationships

Jad El Jamous
Published in Humanity Sparks · 9 min read · Mar 5, 2018

In my last two lengthy posts (where I analyzed the computer vision, augmented reality and future human-computer interaction markets), I tried my best to paint a clear picture of where user interfaces are headed in the next decade. I drew the path from a world where smartphones and PCs are the only devices we interact with, to a world of autonomous robotics and ubiquitous systems of intelligence.

The second post, on HCI, however revolved around an intelligence “outside of us” that, contrary to what many predict, optimizes towards human augmentation, that is, the intelligence “inside of us”. Humanity has always dreamed of a world filled with smart robots, self-driving cars and connected things that have a life of their own, but for some reason we are divided on who they will be working for. In the most sensible picture of the future, I believe we should and will keep calling all these creations “devices”, because they will still be tools that people use to get things done for themselves, even as their cognitive and physical capacities exceed ours.

For this vision to come true, these new machines will need to understand humans the way humans understand each other. If these autonomous devices work towards anything other than human wellbeing and safety, without understanding what being “human” really means, their actions will lead to unintended consequences and possibly a crisis verging on human extinction. Technology philosophers call this the “AI alignment problem” or “AI safety”, and they are just at the start of debating the solutions.

I personally believe that one part of the solution is creating software/hardware combinations that deeply understand people’s emotions, behaviours and body language. Such machines would not only bring human context awareness into their decision-making, but would also integrate user goals into every action they take in the world. One passage from my second report, which I think summarizes this point from the technological perspective but was never really expanded upon, is the following:

“The combination of speech analysis, computer vision and emotional recognition are already starting to create affective human and machine collaboration and making collaborative robots our partners in daily life…The evolution of that concept means a move beyond command & control relationships to symbiotic, empathetic relationships and (hopefully) peaceful coexistence.”

The 5 startups below are working on the beginnings of this world of affective relationships between machine and man. The common themes across all five are:

  • An objective of building software that understands people, in order to create better and safer products for the ubiquitous computing era
  • Horizontal B2B business strategies with applications across many large markets
  • Patented or patent-pending software technologies that can be embedded in hardware and sensors

1) Humanizing Autonomy

One of the first themes I looked into while researching computer vision was autonomous vehicle navigation. The main conclusion drawn from that research was:

“The main challenge that wants to be solved today is that the car needs to operate under all conditions and without errors, in order to become a safe product that consumers trust as a transport option. Computer vision-enabled AV navigation is today a growth market for investors as the race to level 5 autonomy continues among major car manufacturers, yet with many opportunities for new startups to find more accurate real-time solutions such as that of Five.AI and Oxbotica — these startups are finding use cases in taxi fleets and delivery fleets.”

Five.AI and Oxbotica developed their own hardware on top of their computer vision software; digging deeper into the mobility-as-a-service world, they probably found that they faced too much competition as pure software providers and that a large part of the value will be captured by the owners and operators of fleets. Yet software companies such as AImotive, which has raised around $50M to date from investors like Bosch VC and NVIDIA, are still going strong.

Humanizing Autonomy comes in at the HCI level of autonomous vehicles and aims to build context- and culture-specific human intent prediction software. This would be complementary to the work of companies like AImotive, improving the situational analysis (specifically with regard to pedestrians) on which the stack bases its path planning decisions. What makes Humanizing Autonomy a valuable solution is that it takes autonomous driving from level 4 to level 5, that is, the ability to automatically operate a vehicle in any situation, including cities like London or Paris where millions of pedestrians are encountered daily.

The company is today raising pre-seed money, after securing a contract with a big auto manufacturer, to continue working on its patent-pending solution, which moves beyond the simple model of a car stopping when it detects a pedestrian in its path, towards a novel, deep-learning-driven psychological model of intent detection based on explicit and implicit body language and on multicultural contexts. It is a no-brainer that this SaaS software has massive potential in the self-driving car market, starting with the advanced braking systems that exist in cars today, but it can also extend to the drones and robots that will walk and fly among us in the near future.
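To make the idea concrete, here is a minimal sketch of what an intent model of this kind could look like, assuming pedestrian pose keypoints are already extracted by an upstream vision stack. The architecture, window size and decision threshold are my own illustrative assumptions, not Humanizing Autonomy’s patent-pending model.

```python
# Illustrative sketch only: mapping a short window of pedestrian pose
# keypoints to a crossing-intent probability. Feature layout, window size
# and threshold are assumed for illustration.
import torch
import torch.nn as nn

NUM_KEYPOINTS = 17      # e.g. a COCO-style pose skeleton
WINDOW = 15             # ~0.5 s of pose history at 30 fps

class IntentNet(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        # A GRU over the time dimension captures motion cues such as
        # gait changes or a head turn towards the road.
        self.rnn = nn.GRU(NUM_KEYPOINTS * 2, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)  # P(pedestrian steps into the road)

    def forward(self, poses):             # poses: (batch, WINDOW, keypoints*2)
        _, h = self.rnn(poses)
        return torch.sigmoid(self.head(h[-1]))

model = IntentNet()
track = torch.randn(1, WINDOW, NUM_KEYPOINTS * 2)  # stand-in pose track
p_cross = model(track).item()
if p_cross > 0.5:                  # threshold chosen purely for illustration
    print(f"plan to yield: crossing intent {p_cross:.2f}")
```

A production system would of course feed this from real pose tracks and fuse it with the rest of the perception stack, but the shape of the problem (body language in, intent probability out) is the same.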

2) Emteq

In my first CV/AR investment report in August, I briefly wrote about XR head-mounted displays (HMDs) as a new technology interface that enables more immersive experiences than smartphones.

“An additional driving factor [for replacing smartphones with XR headsets] is that there are new kinds of data that a headset can capture that a phone can’t — for example electromyography (Emteq), brain electrical data through direct links (Mindmaze), sight data showing where the user is looking through eye recognition (SMI) which makes them ideal personalization enablers.”

In the second essay, I put XR HMDs in the grander context of an IoT world, where they will be one of the main interfaces connecting our field of vision to an internet flowing outside of our screens and into the physical world. This is what makes Emteq valuable. In other words, it is by giving wearables both access to muscular and internal body data and connectivity to outside devices that they become the most personal interface we have ever created, a link between our inner and outer worlds.

Emteq is a Brighton, UK-based company creating a complementary sensor technology for future HMDs, one that can identify user emotions in real time by reading data from the facial muscles.
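As a rough illustration (and emphatically not Emteq’s actual method), here is a toy sketch of how facial-EMG channels could be turned into a coarse emotion read. The three muscles are classic facial-expression muscles, but the sample rate and the simple dominance rule are assumptions I made up for the example.

```python
# Illustrative sketch only: a coarse valence read from facial EMG.
import numpy as np

CHANNELS = ["zygomaticus", "corrugator", "frontalis"]  # smile / frown / brow
FS = 1000                                              # assumed sample rate (Hz)

def rms(window):
    """Root-mean-square amplitude, a standard EMG activation measure."""
    return np.sqrt(np.mean(window ** 2, axis=-1))

def classify(emg):                  # emg: (3, FS), one second of raw signal
    z, c, f = rms(emg)
    if z > c and z > f:
        return "positive (smile muscles dominant)"
    if c > z:
        return "negative (frown muscles dominant)"
    return "neutral"

# Fake signal with an amplified smile channel, standing in for sensor data
fake = np.random.randn(len(CHANNELS), FS) * np.array([[3.0], [1.0], [1.0]])
print(classify(fake))
```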

The company is now testing many use cases in healthcare (facial palsy, Parkinson’s disease, body pain, mental health). Their ultimate objective is to integrate with OEMs once costs come down, so as to gauge in real time the emotions of people inside simulated experiences. Emteq is currently closing a pre-seed round, in preparation for further rounds to launch their product into the market.

The company, which so far has only grant money in the bank, has already launched an emotion-sensing VR mask similar to what Mindmaze, a Swiss company that has raised $100M+ in capital, has launched. What is different about Emteq is that they have also been working on a separate lightweight smart glasses solution, the OCOsense™ smart glasses, expected in Q4 2018. This makes them a great fit for AR, a market expected to be 5x bigger than VR in the coming years. The second difference is that Mindmaze is positioning itself as a neurotechnology company, and as far as I can predict, I am not sure that positioning would be possible in a future consumer AR glasses format, because the glasses would not envelop the whole brain.

3) Eyeware

Eye tracking has been the holy grail of XR, and of mobile phones too, because of the ability to serve and load content exactly where and when you are looking. Apple recently bought SMI, a German company whose eye-tracking software will go directly into its phones and future hardware. Oculus bought The Eye Tribe. Intel has also invested in AdHawk, which does eye tracking and gesture tracking using MEMS devices.

While most of these require a headset sensor that sits directly in front of the eye, eye tracking is much harder to do from a distance, which is what matters once we move into the realm of machines such as robots, vehicles and IoT. A much-needed solution is for the billions of ambient devices expected in the physical world to interact with you only when you are looking at them. This is where Eyeware’s tech comes in: the company’s technology can identify the eye movements of multiple people in a frame, at distances of up to 5 meters.
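Here is a small sketch of what that gaze-gating logic could look like, assuming a remote tracker like Eyeware’s supplies a 3D eye position and gaze direction per person. The cone tolerance and scene coordinates are invented for the example; this is not Eyeware’s SDK.

```python
# Illustrative sketch only: wake an ambient device when a user looks at it.
import numpy as np

MAX_RANGE_M = 5.0   # Eyeware quotes tracking at distances of up to 5 meters
CONE_DEG = 10.0     # assumed tolerance for "looking at" the device

def is_looking_at(eye_pos, gaze_dir, device_pos):
    to_device = device_pos - eye_pos
    dist = np.linalg.norm(to_device)
    if dist > MAX_RANGE_M:
        return False
    cos_a = np.dot(gaze_dir, to_device) / (np.linalg.norm(gaze_dir) * dist)
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0))) < CONE_DEG

# A user standing 3 m away, glancing towards a smart display at the origin
eye = np.array([0.2, 1.6, 3.0])
gaze = np.array([-0.05, -0.5, -1.0])
print("wake device" if is_looking_at(eye, gaze, np.zeros(3)) else "stay idle")
```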

The company is raising a seed round to continue working on its 3D tracking software, which fits into depth sensors and cameras such as the Intel RealSense and Microsoft Kinect, as well as to expand its sales effort and generate more revenue.

4) Gestoos

In my first report, I tackled the concept of people analytics with the following summary:

“After face identification and eye tracking, gesture recognition and emotion recognition are the next rising tide. We know that face identification plays a big role in solving not only the access but the trustworthiness and risk profiling challenge in digital interactions (Onfido), but combined with a bunch of other data (emotional, speech, gesture) it can say much more... Gesture recognition, on the other hand, is important today for the smart home and smart car market (Eyesight) and plays a large part in creating new ways for people to control technology by just moving the hands, for example.”

In my second post, I followed up with a more futuristic vision:

“The technologies of AR and IoT will soon converge to usher in an internet made up of both digital and material objects that will surround us wherever we go. We will therefore need new technological interfaces in our reach that are both part of this intelligence ecosystem and that let us connect to it. Clarke’s Law states that “any sufficiently advanced technology is indistinguishable from magic”. We indeed want our tech to be more like Harry Potter’s wand than like a smartphone. Manifesting our intentions in the real world should be as easy as making a small hand gesture and making a wish.”

Gestoos is a Barcelona-based company that is making this concept come true. It operates today in three markets: smart homes, automotive, and digital signage in retail. Following a horizontal strategy that mimics that of its direct competitor Eyesight Technologies (an Israeli company that has raised around $30M from VCs), it is currently generating recurring revenues, and its team of 17 people has already managed to get traction with OEMs, retailers and car manufacturers.

Gestoos has just closed its seed round and will be raising again in the next 18 months.

5) Imagimob

Stockholm-based Imagimob develops patented AI technology that allows for advanced motion recognition on small, low-cost, low-power IoT hardware devices.

8 months ago I wrote: “Human sensing technology can find its way into unlimited use cases as it will augment almost every digital offering when apps, objects and robots begin to deeply understand humans.”

The company’s technology can in fact potentially be used in an almost unlimited number of applications across a range of vertical segments. Imagimob’s value lies in devices acquiring motion intelligence through deep learning, which is another way machines can understand exactly what people are doing. The software can identify anything from a gesture such as swiping with a finger, to falling, or even being attacked. A YouTube video shows that the device can even detect complex gestures like drawing a house or a heart with the hands. This makes a big difference for wearables, for example, which will not just know that you’re moving, but also that you’re cycling, swimming, hiking or weightlifting. Imagine a world where our devices know exactly what our body language is saying, and therefore how we’re feeling.
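To sketch the idea (with the caveat that Imagimob’s production models are proprietary and far more tightly optimized), here is a tiny motion classifier over a window of 3-axis accelerometer data. The class list and network shape are my own assumptions.

```python
# Illustrative sketch only: labeling a window of wearable accelerometer data.
import torch
import torch.nn as nn

CLASSES = ["idle", "walking", "cycling", "swimming", "fall"]  # assumed labels
WINDOW = 128        # ~2.5 s of 3-axis accelerometer samples at 50 Hz

model = nn.Sequential(          # deliberately small, as an edge model must be
    nn.Conv1d(3, 16, kernel_size=5, padding=2), nn.ReLU(),
    nn.MaxPool1d(4),
    nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(32, len(CLASSES)),
)

accel = torch.randn(1, 3, WINDOW)   # stand-in sensor window
label = CLASSES[model(accel).argmax(dim=1).item()]
print(f"detected activity: {label}")
```

A network this small can plausibly be quantized to run on a microcontroller, which is exactly what makes the embedding strategy described next possible.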

I see Imagimob’s strategy of embedding their software into hardware and chips as a viable path towards reaching as many customers as possible and scaling to capture the most value. Their edge AI solution creates more secure, more responsive hardware that doesn’t need to upload data back to the cloud, and therefore enables machines to take action on the spot. The company has raised almost $2M across different rounds, the latest in February 2018. They are now raising the rest of their seed and preparing for a bigger Series A round.

Bonus company #6 is Third Space Auto.

The company’s vision is to make mobility hyper-personalized by using a voice and natural language platform that learns from the rider in order to respond based on her context, location, calendar, choice of music, etc.

Why wasn’t it mentioned as a main one? Because they do not want to raise VC money.


I would like to thank Anshul Gupta from Aurelius Advisers / Truffle Hound for helping me meet with and discuss some of these companies.


Jad El Jamous · Humanity Sparks
Techpreneur. Cultural innovator. Working on 3 ventures for well-being. LBS MBA 2018. Ex-growth lead @Anghami & @Englease. Digital business MiM @IEBusinessSchool.