This is the second in a series of posts on why, at Kibo, we invested in the companies we did. One thing you will see as a thread throughout the series is that we are totally passionate about our investments; we truly believe these companies are going to go on to do great things.
Last February we invested in Vilynx, the type of company I am a big believer in: what I call a hybrid company, with HQ in Palo Alto and the tech team in Barcelona. Probably the best of both worlds, if you ask me.
So, what does Vilynx do?
To put it very boldly: Vilynx has set out to make video searchable.
If you think about the vast amount of video generated every day, you begin to understand the size of their mission. How do you build a brain that can understand video content, that can understand how different content relates to video images, and do it so precisely that searching for concepts or content yields accurate results?
This is a problem-solving exercise fit for the likes of Google and Facebook, with their access to massive resources and video content, yet Vilynx is ready to provide an alternative: one for media properties that do not want to tie their monetization business to G or F technologies.
If you are already interested in the big problem Vilynx is trying to solve, let me walk you through why we think this incredible team can do the job.
1. Tackling the problem head on: accessing the data
Vilynx is working with big media houses in the US and Europe to gain access to all of their video content. They take this content and analyze all the contextual information on the page where each video is published. For each URL where a video is published, they track where it is being shared on social media. This generates a massive amount of information around every video.
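To make the idea concrete, here is a minimal sketch of the kind of contextual record such a pipeline might build per published video. All class and field names are my own illustrative assumptions, not Vilynx's actual data model.

```python
from dataclasses import dataclass, field

@dataclass
class VideoContext:
    """Hypothetical bundle of contextual signals around one published video."""
    url: str
    page_title: str
    page_keywords: list = field(default_factory=list)
    social_shares: list = field(default_factory=list)  # e.g. posts linking the URL

    def all_signals(self):
        """Flatten every contextual signal into one list of text snippets."""
        return [self.page_title] + self.page_keywords + self.social_shares

# Example: one video, its page context, and one social share.
ctx = VideoContext(
    url="https://example.com/news/clip-1",
    page_title="Olympics opening ceremony highlights",
    page_keywords=["olympics", "ceremony"],
    social_shares=["Amazing opening ceremony! https://example.com/news/clip-1"],
)
print(len(ctx.all_signals()))  # 4
```

The point is simply that every surface around the video (page, keywords, shares) becomes raw text the tagging system can mine.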
2. Data organized in a knowledge graph
Vilynx feeds this data into its own knowledge graph, which relates all the different tags generated from the contextual information and checks them for inconsistencies and accuracy. The knowledge graph (KG) learns every day: as more information is fed in, the accuracy, but more importantly the quality, improves. We are talking about a KG that to date has over 5.5M tags and processes over 10k videos per day.
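A toy way to picture relating tags and filtering out noise: count how often pairs of tags co-occur across videos, and only trust relations seen repeatedly. This is a drastic simplification under my own assumptions; the real KG is far richer.

```python
from collections import defaultdict

class KnowledgeGraph:
    """Toy tag graph: edges count how often two tags co-occur on videos."""

    def __init__(self):
        self.cooccur = defaultdict(int)

    def ingest(self, tags):
        # Strengthen the edge between every pair of tags seen on one video.
        for i, a in enumerate(tags):
            for b in tags[i + 1:]:
                self.cooccur[tuple(sorted((a, b)))] += 1

    def related(self, tag, min_count=2):
        # Keep only relations seen at least `min_count` times -- a crude
        # stand-in for the KG's consistency/accuracy checks.
        out = []
        for (a, b), n in self.cooccur.items():
            if n >= min_count:
                if a == tag:
                    out.append(b)
                elif b == tag:
                    out.append(a)
        return sorted(out)

kg = KnowledgeGraph()
kg.ingest(["olympics", "handshake", "rio"])
kg.ingest(["olympics", "handshake", "rio"])
kg.ingest(["olympics", "ceremony"])  # seen only once, so filtered out below
print(kg.related("olympics"))  # ['handshake', 'rio']
```

The daily-learning loop in the post corresponds to `ingest` running continuously as new videos and their contextual tags arrive.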
3. Solving for the mammoth processing task
Vilynx's machine learning algorithms need to process massive amounts of data, and this data needs to be run over the KG as frequently as possible. Right now Vilynx is stretching Intel's systems for data processing; Intel sees them as a great case for learning about its own platform, so the arrangement is cost-free, a partnership that benefits both sides.
4. Incredibly big vision
Media is the first vertical Vilynx will go after. Media houses need to improve video monetization, and Vilynx provides product features that help them do exactly that: video preview generation, a recommendation engine, a trending and discovery product, and search. But media is just the first vertical. The KG will get better and better, and it can process video coming from security cameras (think smart cities), drones, and autonomous cars. Think about video, and about understanding what is in it, and very cool things come to mind.
5. Best minds at work
And, as is ALWAYS the case, no good company comes without a stellar team. Juan Carlos Riveiro @mrbrowngigle is one of Spain's entrepreneurs with a very successful exit to his name (Gigle Networks sold to Broadcom) and a very clear vision of what neural networks can do when applied to the video vertical. Juan Carlos found his perfect match in Eli Bou @elisenda_bou, truly the smartest CTO I have encountered in my life, both as an entrepreneur and as an investor: a true rocket scientist (MIT and UPC) and a true believer in the power of the KG the team is building. And, most importantly, a clear magnet for attracting the best machine learning talent around.
And you might be thinking, but doesn’t Clarifai do this for images and video?
Well, no, not really.
While Clarifai and Matroid tackle this problem by getting video from users, along with training datasets, and then iterating toward a very high level of accuracy, Vilynx goes about it differently. They get the data directly from the Internet, big chunks of it from media groups plus other data available online, and they solve for scale by parceling that data into 5-second clips that can be processed and run over a centralized KG, which improves and learns constantly. The other players can reach a very high degree of accuracy on the data they get from users, but they cannot learn from a new video if no training dataset is linked to it. Vilynx, for instance, was able to identify the "no handshake" moment at the Olympics based on all the contextual information around the video, processed through their KG. The other players cannot do that; they solve for accuracy on video content once you already know what the video is about. That is my best way of explaining the key difference. I hope it works.
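The parceling step above is easy to sketch: cut a video's timeline into fixed 5-second windows, each of which becomes a unit the KG can process. This is purely illustrative; Vilynx's actual segmentation logic is not public.

```python
def parcel(duration_s, clip_len=5.0):
    """Split a video of `duration_s` seconds into (start, end) windows of
    `clip_len` seconds; the final clip may be shorter. Illustrative only."""
    clips, t = [], 0.0
    while t < duration_s:
        clips.append((t, min(t + clip_len, duration_s)))
        t += clip_len
    return clips

# A 12-second video yields two full clips plus a 2-second tail.
print(parcel(12))  # [(0.0, 5.0), (5.0, 10.0), (10.0, 12.0)]
```

Fixed-size clips are what make the scale problem tractable: every unit of work is the same size regardless of how long the source video is.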
So, you might not have heard of Vilynx before. Like many startups in Spain, they have been very much in stealth mode, working incredibly hard but with no PR to put them in the spotlight. Now you know about them, and maybe you will remember where you first read about them. One to watch; stay tuned.