Fringefy Co-Founder Assif Ziv: “AI has reached a point equivalent to human intuition.”

Published in BuzzRobot · 4 min read · Jul 14, 2017

Fringefy — urban image recognition technology

You’ve probably been in a situation where you can’t find your Uber and have to call the driver to explain where you are. Fringefy is developing technology that enables real-time visual search of outdoor urban environments to help solve this kind of problem: it can show the driver the exact place and entrance of any building. This is one of the use cases that Assif Ziv, Co-founder and VP of Engineering, and his team are developing. More generally, the technology recognizes outdoor places and buildings in real time. Similar technologies include Google Lens and PHIND, which presented their products at TechCrunch Disrupt NY two years ago.

The challenge of building an AI product

Fringefy was founded back in 2013 to help bring AR technology into the world. While serving in the Israeli Army, Ziv’s co-founders flew airplanes and helicopters that used augmented-reality systems for military purposes. They understood how useful this was, so they launched a company to deliver visual search of outdoor spaces. “In the beginning, we didn’t apply deep learning in the product, only classic computer vision. We jumped into this trend with a relevant dataset already collected,” says Ziv. “Today, deep learning can recognize the exact place in a picture from different points of view, which is a huge improvement over existing navigational software.”

Ziv says they collected the initial data physically themselves by driving and walking around cities (San Francisco, Tel Aviv) and through crowdsourcing.

Fringefy team

“We are building image recognition technology that can tag places and recognize them in real time,” explains Ziv. “I’d say the industries that can benefit from our technology are the automotive industry, parts manufacturers, and the mobile industry — those interested in providing visual search and local search — as well as real estate companies. But our main goal is getting the technology into cars, so that drivers can experience augmented reality as the car recognizes all the places and buildings around it.”

“Imagine the following,” continues Ziv. “You are driving and there is a big screen in your car with a video camera. On this video you can see the names of the places and buildings around you. If you are driving to a specific place, the entrance of the building, along with its address, will be displayed on the screen. And if you are looking for restaurants, the screen will show the names of the restaurants along your route and provide Yelp ratings for the ones you are passing by.”

I asked Ziv what AI challenges his team faced while developing the product. “Measuring the product’s performance was very challenging. If you have a mobile app where users take a picture of a place and you provide them with information about it, it is often not clear whether your technology is solving the problem well. You have to build a very realistic dataset that covers every possible scenario of how a user might interact with your system. But you can’t know in advance how users will use a new technology, so the initial dataset may miss many important situations. This was a major issue we had to figure out. To solve it, we created our own dataset covering most of the ways a user might use the technology, and then we were able to make precise measurements against it. Another problem is that each user is different, so we tried to assess user behavior, take the part of the dataset relevant to that behavior, and ask, ‘What would the performance of the algorithm be for this part?’”
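The segment-by-segment evaluation Ziv describes can be sketched in a few lines. This is an illustrative example only — the record schema and the segment labels (`pedestrian`, `in_vehicle`) are hypothetical, not Fringefy's actual pipeline:

```python
# Illustrative sketch: measure recognition accuracy separately for each
# user-behavior segment of an evaluation dataset. The segment names and
# result records below are made up for the example.
from collections import defaultdict

def accuracy_by_segment(results):
    """results: list of (segment, correct) pairs, where `segment` labels
    the user behavior and `correct` is True when the algorithm identified
    the place correctly."""
    totals = defaultdict(int)
    hits = defaultdict(int)
    for segment, correct in results:
        totals[segment] += 1
        if correct:
            hits[segment] += 1
    return {seg: hits[seg] / totals[seg] for seg in totals}

# Hypothetical evaluation results:
results = [
    ("pedestrian", True), ("pedestrian", True), ("pedestrian", False),
    ("in_vehicle", True), ("in_vehicle", False),
]
print(accuracy_by_segment(results))  # pedestrian ≈ 0.67, in_vehicle = 0.5
```

Breaking the score down this way shows which usage patterns the initial dataset covers well and which ones need more data.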

Where are we in terms of AI development?

In Ziv’s opinion, we have to raise a “digital baby,” similar to a human baby, in order to achieve human-like consciousness. “The human subconscious makes sophisticated calculations and deductions about the external world and transfers the useful, summarized results to our conscious mind so that we can perform an action or draw a conclusion. Artificial neural networks work similarly,” says Ziv. “So I would say that AI has now reached a point equivalent to human intuition. An algorithm can recognize a bird in a picture, but this does not map directly to a list of if-then rules derived from observation; rather, the AI decides ‘intuitively’ whether there is a bird in the picture, without relying on an explicit script.”

The statement about ‘intuition’ could be extended to how personal assistants will behave in the future…

“I was looking for a coffee shop,” Ziv explains, “and I had intended to go to one with an appealing signboard. But a local friend who was with me said there was another coffee shop right around the corner with better coffee. He intuitively understood what I wanted and suggested a better option, even though I never asked for help or even stated my intention of getting coffee. Personal assistants will work the same way. They will advise us without our having to query them explicitly. They will help in the right way at the right time, just like a personal mentor would.”

AI profile:

Number of data scientists/engineers specifically working on AI — 4

Cost for cloud — not disclosed

Number of clients/users — not disclosed

Funding — undisclosed amount backed by Rothenberg Ventures, Super Ventures, Presence Capital and others.

Founded in 2013


BuzzRobot

BuzzRobot is a communications company founded by OpenAI alumni that specializes in storytelling for AI startups.