Pilot AI Labs: Computer Vision on Embedded Devices

Sherry Ruan
Feb 23, 2017 · 6 min read

As terminal devices connect to the Internet on a massive scale, the data they produce not only carries commercial value but also poses a data-processing challenge. Because applications need real-time responses despite limited network bandwidth, edge computing has become a new development trend.

What happens when edge computing meets deep learning?

There are several natural advantages to performing deep learning on embedded devices. First, an embedded device can compute directly even without a network connection. Second, computing on the device itself avoids the latency introduced by network transmission. Third, keeping data on the device resolves most privacy concerns.

Of course, at this point in time it is generally only the inference stage that runs on the device; training can still be completed in the cloud.
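The article does not describe Pilot AI Labs' own pipeline, but the cloud-training / on-device-inference split it mentions can be sketched with off-the-shelf tools. The toy example below uses TensorFlow Lite purely as an illustrative assumption (not necessarily Pilot.ai's stack): a small model is trained in the cloud, converted to a compact quantized file, and only the lightweight interpreter runs on the device.

```python
import numpy as np
import tensorflow as tf

# --- Cloud / workstation side: train a small model ---------------------
model = tf.keras.Sequential([
    tf.keras.Input(shape=(96, 96, 3)),               # tiny camera frame
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),  # e.g. "person" vs "background"
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# model.fit(train_images, train_labels, ...)          # training stays in the cloud

# Convert to a compact, quantized format for the embedded device.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

# --- Device side: run inference only -----------------------------------
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

frame = np.zeros(inp["shape"], dtype=np.float32)     # stand-in for a camera frame
interpreter.set_tensor(inp["index"], frame)
interpreter.invoke()
print(interpreter.get_tensor(out["index"]))          # class probabilities
```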

However, because the computing power and bandwidth of a terminal device are limited, specialized deep learning algorithms are required. We recently interviewed Pilot AI Labs, a computer vision startup that provides a deep-learning-based computer vision solution for embedded devices.

The founding team of Pilot AI Labs

Jonathan Su, the co-founder and Chief Executive Officer of Pilot AI Labs, has founded many startups. He holds a PhD in Computer Science from Stanford University, and has a wealth of expertise in the fields of data optimization and high performance computing.

He was the co-founder and Chief Executive Officer of the fashion technology lab PhiSix, which was sold to eBay in 2014. He served as VP of Engineering at eBay, and was a senior data scientist at MetaMind.

The founding team of Pilot AI Labs comes from Stanford and MetaMind. Jonathan and the other core members are former colleagues and classmates, and some have even been roommates for many years.

Focusing on computer vision on embedded devices

Currently, Pilot AI Labs has a team of approximately thirty people. Danhua Capital led the investment for the seed round, while NEA led the investment for the Series A round.

Pilot AI Labs focuses on building a computer vision platform based on deep learning. The platform has been optimized to run in real time on embedded devices. The company has focused on deep learning since it was founded, and deliberately chose computer vision as its direction. This is partly because of the team's substantial experience in the field, and partly because computer vision already has a solid base of research and applications in academia and industry.

Pilot AI Labs hopes to embed computer vision directly into chips so that it can be used on many small cameras. Pilot.ai can implement and deploy its deep learning algorithms on chips such as ARM processors without a dedicated GPU, so the technology can be applied at scale or retrofitted to all-in-one devices with limited computing power, such as UAVs, VR/AR devices, security cameras, sports cameras, and many others. This greatly lowers the barrier to adoption: even without high-performance hardware, customers can integrate Pilot modules directly into their own devices, which gives them a significant market advantage.

Pilot AI Labs has specially shrunk its deep neural networks, which is distinct from the usual pruning of a neural network. A shrunk network can operate in environments with severely restricted computing power and bandwidth. They also have rather distinctive training methods that maximize the use of training data.
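The interview does not reveal how Pilot AI Labs' shrinking actually works, but the general distinction between pruning and shrinking can be illustrated with a toy sketch (hypothetical numbers, not their method): pruning zeroes out individual weights while the layer keeps its shape, whereas shrinking removes whole units so the layer itself, and every multiply in it, becomes smaller — which matters on an embedded chip with no sparse-kernel support.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy fully connected layer: 256 inputs -> 512 outputs.
W = rng.normal(size=(512, 256))

# Pruning: zero out small weights; the matrix keeps its shape,
# so any speedup depends on sparse kernels at run time.
threshold = np.quantile(np.abs(W), 0.9)
W_pruned = np.where(np.abs(W) < threshold, 0.0, W)

# "Shrinking": keep only the most important output units, so the
# layer is genuinely smaller and every multiply is eliminated outright.
importance = np.linalg.norm(W, axis=1)    # per-unit weight norm
keep = np.argsort(importance)[-128:]      # retain 128 of 512 units
W_shrunk = W[keep, :]                     # shape (128, 256)

print(W_pruned.shape, np.count_nonzero(W_pruned))  # (512, 256), ~10% nonzero
print(W_shrunk.shape)                               # (128, 256)
```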

They have even developed their own deep learning frameworks and tools, which can be applied broadly across computer vision tasks. Their solution also incorporates traditional computer vision methods.

Using UAVs as a point of entry

The strengths of Pilot AI Labs’ technology are tracking and detection. After evaluating various potential applications of computer vision, they decided to use UAVs as their point of entry, using vision to enable functions such as automatic tracking.

Pilot.ai lowers the barrier to entry for visual tracking on UAVs. UAV manufacturers can simply purchase chips with Pilot.ai’s algorithms integrated, gaining computer vision and recognition capabilities without relying on GPS tracking.

Their solution is able to achieve decent tracking even in environments with many people.

Currently, Pilot AI Labs possesses one of the few working deep-learning-based UAV vision solutions. They have already received orders exceeding US$8 million, and their solution is being widely used in the industry.

Expanding to other industries

Pilot AI Labs’ current applications also include analyzing store traffic in the retail industry, among others.

Another scenario in which the technology has already been used is assisted driving and fatigue detection. For example, a camera installed on a car’s front windshield can assess the condition of the driver and passengers; this can be used in ride-sharing services such as Uber.

The company became profitable within its first year. It is also preparing to enter industries such as mobile devices, security, smart homes, and industrial automation, with a target of US$100 million in annual revenue.

Making computer vision ubiquitous

Vision is vital for our understanding of the world. Approximately 70% of the activity in a human’s cerebral cortex is used to process sight-related information. Vision is the front door to the human brain.

Jonathan told Silicon Valley Insight that they want their solutions to work with the ordinary cameras found everywhere in the physical world, so that any camera can actively recognize its surroundings, serve our daily lives, and improve work efficiency.

Pilot.ai using a camera to make a real-time distance estimation

In many situations, problems can be solved with ordinary cameras alone. Of course, the effect is amplified when more sensors are integrated.

Pilot.ai using a camera for real-time positioning

Pilot AI Labs currently focuses on applying computer vision to embedded and mobile devices, cultivating this field deeply and holding an early-mover advantage. More and more large companies have taken notice of the field, and Jonathan indicated that they will work with more companies to develop it in the future. In the United States, Pilot AI Labs is collaborating with a major consumer electronics brand, using its chain of retail outlets to enter large supermarkets.

In the long term, computer vision will replace human “sight” in more and more situations, freeing up a great deal of productivity, time, and effort. It will be applied in an ever-growing number of fields, making computer vision a promising market.
