An Age Of Ambition: The Burgeoning Chinese AI Market
During the National Science and Technology Innovation Conference held in May 2016, Ren Zhengfei, founder and president of Huawei Technologies, delivered a speech about Huawei entering uncharted waters and losing its sense of direction, which was widely circulated in Chinese tech circles.
“From the aspect of technological development, human society will gradually become smarter and smarter over the next two or three decades, on a scale that is unimaginable,” Mr. Ren said.
During the 2016 Global Artificial Intelligence and Robotics Summit (CCF-GAIR), held by the China Computer Federation and Leiphone.com on August 12, 2016, a series of Chinese internet giants presented their latest progress and projects in artificial intelligence and robotics.
Representatives from Huawei Noah’s Ark Lab, Xiaomi Explore Lab, Tencent Youtu Lab, 360 AI Institute, Baidu’s Automatic Driving Department, Microsoft Research Asia, the Department of Computer Science at the University of Oxford, the National Robotics Engineering Center at Carnegie Mellon University, the MIT Biomimetic Robotics Lab and Royal Roads University all joined and shared their views on the future of AI technologies.
The trend towards AI is unstoppable
The year 2016 marked not only the 60th anniversary of AI and the imagination-stirring AlphaGo match, but also the entry of AI technologies from academia into industry.
Two days before the opening of CCF-GAIR, it was revealed that Intel had acquired the AI startup Nervana Systems for a reported $350 million. Reports suggest that the price-performance ratio of Nervana Systems’ deep learning chip is even higher than that of GPUs, and that its processing speed is 10 times that of a GPU. Some analysts exclaimed that chip giants Intel and NVIDIA are now directly at war.
In early August, NVIDIA released its Q2 financial report, which showed revenue up 24 per cent year-on-year and net profit up a staggering 873 per cent year-on-year. Such growth was made possible as internet companies began to adopt NVIDIA’s GPUs at scale for AI-related computing tasks. NVIDIA’s GPUs, known for their massive parallel processing ability, are the top choice for such workloads. In July 2016, NVIDIA debuted its first deep learning server, the DGX-1, and officially put it on the market.
In the face of NVIDIA’s competition in the data processing market, Intel will certainly not sit idly by. During the International Supercomputing Conference held this June, Intel previewed its upcoming “Knights Landing” Xeon Phi processor, a 72-core x86-based part and the first independently bootable product of its kind, which means Intel could build a scalable, CPU-only machine learning cluster without any GPUs.
In mid-July, chip design giant ARM was acquired by SoftBank. With financial support from SoftBank and its ecosystem, ARM hopes to grab the low-power smart IoT chip market before the industry booms. This February, ARM released a new processor architecture targeting mainly 5G modems and embedded mass-storage SoCs, which paves the way for the spread of AI in the future. This May, ARM acquired computer vision technology provider Apical for $350 million.
The trend towards AI is obvious not only in chips, but also in applications. On July 28, Chinese AI PaaS developer Turing Robot revealed that 12.6 billion requests had been sent to its servers in the past eight months, and that the number of its developers had increased by 110 per cent to over 230,000. Also in July, Turing Robot added 11 computer vision capabilities, including facial recognition, detection and tracking, to its smart robot operating system Turing OS 1.5.
Beyond Turing Robot, IBM brought its cognitive computing platform Watson to China through its Bluemix PaaS; Google launched a smarter search engine and the smart hardware Google Home at this year’s I/O conference; and Alibaba Cloud revealed that AI would be one of the major areas in its future strategic development plan.
Thus, it is safe to conclude that the first wave of industrialized AI technology has already arrived.
Can Huawei build another “Noah’s Ark”?
As early as 2012, Huawei had established Noah’s Ark Lab in Hong Kong, with Mr. Yang Qiang, an AI and data mining expert and a professor at the Hong Kong University of Science and Technology, appointed as its director. The lab’s research interests include natural language processing, information retrieval, large-scale data mining and machine learning, social media and mobile smart technology, human-machine interaction systems, machine learning theory, etc.
As a matter of fact, Noah’s Ark Lab is part of Huawei’s 2012 Research Lab project. It is said that Huawei’s founder and CEO Ren Zhengfei came up with the name after watching the film 2012, believing that Huawei should be able to build its own “Noah’s Ark” in the face of the information explosion to come. The 2012 Research Lab’s research interests include next-generation communication, cloud computing, audio and video analysis, data mining, machine learning, etc., and it focuses mainly on technologies five to ten years out.
During the summit, the lab’s first director, Professor Yang Qiang, told the audience that his major research field was transfer learning, that is, applying “trained” AI models to real-life scenarios. At present, the technology for training deep neural network models has grown very mature, but such models are closely tied to their original data: applied to another scenario or data set, a model has to be “trained” all over again. The goal of transfer learning is precisely to make it easier to apply AI models to new real-world scenarios.
In other words, transfer learning is about applying an existing AI model to unknown scenarios. In human society, you might be more familiar with the transfer of knowledge, such as applying what you know about riding bicycles to riding motorcycles. Transfer learning also enables AI models trained on big data to be applied to small data sets, which makes AI models more customized.
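To make the idea concrete, here is a minimal toy sketch (purely illustrative, not Huawei’s actual method): a one-parameter linear model is first fitted on a large “source” data set, and its learned weight is then reused as the starting point when fitting a related “target” task that has only a few samples.

```python
# Toy illustration of transfer learning with a one-parameter model y = w * x.

def train(data, w_init=0.0, lr=0.01, steps=20):
    """Fit y = w * x by gradient descent on mean squared error."""
    w = w_init
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

# Source task: plenty of data generated by the rule y = 2.0 * x.
source_data = [(x, 2.0 * x) for x in range(1, 101)]
w_source = train(source_data, lr=0.0001, steps=200)

# Target task: only three samples from the related rule y = 2.1 * x.
target_data = [(1, 2.1), (2, 4.2), (3, 6.3)]

w_scratch = train(target_data, w_init=0.0, steps=20)        # no transfer
w_transfer = train(target_data, w_init=w_source, steps=20)  # transferred init
```

Trained from scratch, `w_scratch` ends up noticeably off the target rule after 20 steps, while `w_transfer`, warm-started from the source task, lands much closer in the same number of steps — the same intuition behind reusing a model trained on big data for a small, related data set.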
Li Hang, the second director of Huawei Noah’s Ark Lab and an adjunct professor at Peking University and Nanjing University, revealed that the lab also conducts big data and AI research on Huawei’s smartphones, servers, data center products and machines, and has contributed to cutting-edge products and services for Huawei’s three major business groups, such as smart communication networks, the Enterprise BG’s big data applications and the Consumer BG’s smart voice assistant.
Besides Noah’s Ark Lab, there are several other well-known labs under Huawei’s 2012 Research Lab, named after famous scientists: Shannon Lab, Gauss Lab, Shield Lab, Euler Lab, Turing Lab, etc. In addition, Huawei has set up eight overseas basic research institutes in Europe, India, the US, Russia, Canada, Japan and elsewhere. It is reported that Huawei is going to establish ten more basic research institutes in Shenzhen in 2016.
Through these labs and institutes, Huawei attempts to explore the no-man’s land and lay a solid foundation for its future. Although R&D takes up 15 to 20 per cent of Huawei’s annual revenue, its basic research institutes are still quite new, especially compared to Microsoft Research’s 25-year-old and IBM’s 80-year-old counterpart labs and research centers.
At present, Huawei’s challenge is how to manage large-scale basic-level research institutes and labs.
Xiaomi’s exploration in the AI sector
Xiaomi is an even later entrant in the field. On January 15, 2016, Lei Jun, founder and CEO of Xiaomi, revealed at the company’s annual meeting that Xiaomi was going to establish a special team focused on core technologies and core components. Mr. Lei also announced that Xiaomi would set up the Xiaomi Explore Lab to develop cutting-edge technologies such as VR and robots. This February, the lab was officially established.
Mr. Huang Jiangji revealed at the summit that Xiaomi’s primary focus was products, big data and machine learning. That is to say, Xiaomi attempts to develop cutting-edge technologies closely tied to smartphones and smart hardware. He also demonstrated Xiaomi’s WiFi module at the summit, and revealed that Xiaomi had lowered its price from 60 RMB to 10 RMB, which will certainly lay the foundation for Xiaomi’s smart hardware and products in the future.
At present, over 200 TB of data collected by Xiaomi’s WiFi modules is uploaded to Xiaomi Cloud every day. A real AI system can only be created with huge amounts of data, and in the process, data from active users is equally important. Mr. Huang also revealed that eight MIUI apps now have over 10 million daily active users, and 17 MIUI apps have already gained over 1 million daily active users.
Xiaomi smartphones, the Mi Band, Xiaomi TV and Box, Xiaomi network devices and Mi Smart Home devices, along with Xiaomi’s e-commerce platform, interactive entertainment business, app market and its entire ecosystem, are all sources of Xiaomi’s big data. With a pyramid-like data processing model, Xiaomi first collects data at the bottom, then cleans and mines it, and finally achieves data intelligence at the top.
Based on these data, Xiaomi can form high-quality portraits of its users. As a matter of fact, Xiaomi can even collect users’ data without users’ input, create standardized data, and build a data pool consisting of videos, music, purchases, games, apps, novels and news, ready for further search, recommendation, traffic distribution and manual curation services.
According to Mr. Huang, Xiaomi’s data processing technologies can be divided into four layers: fundamental technologies such as the Hadoop platform and data factory; basic technologies such as machine learning (deep learning), visual recognition, natural language processing (NLP) and voice recognition; mid-level technologies such as business data, user portraits and the data pool; and advanced technologies such as smart business, search, recommendation, smart Q&A, etc. All these technologies will ultimately be applied to Xiaomi’s various hardware products.
Speaking of Xiaomi’s deep learning platform, Mr. Huang revealed that its hardware was GPUs in the public cloud and local data centers, and that Xiaomi adopted Kubernetes and Docker to manage GPU clusters, TensorFlow to run learning tasks, HBase/HDFS to store data and Spark/Storm/MR for computing services. Together, these systems enable Xiaomi to provide smart assistant, cloud photo album, advertising, finance and search recommendation services.
It is Xiaomi’s belief that high-quality products will retain users and stimulate them to generate high-quality data. Next, Xiaomi will process these data and develop AI through machine learning. Finally, Xiaomi will apply its AI technologies to product and design. For example, with over 150 million users and over 50 billion photos in Xiaomi Face Album, Xiaomi can keep improving the service. Mr. Huang stressed that in the past, handy products were not necessarily interesting, while interesting products were not necessarily handy. With the development of AI and machine learning, however, he believed products could one day be both interesting and handy.
When asked for his opinion on smart products driven by AI technologies, he pointed out that as long as people still need to look at their smartphones numerous times every day, smartphones are not smart enough. A really smart phone should reduce the time and frequency with which people look at it, because many tasks would be completed by the phone without manual operation.
It is fair to say that although smartphones have reached a plateau, the era of really smart phones has only just begun.
Other major players in the Chinese AI market
“The more uncertain the future is, the more innovation is needed, and the more opportunities there are for tens of millions of startups,” Ren Zhengfei said.
On April 22, Sogou donated 180 million RMB to Tsinghua University and jointly established the Institute of Intelligent Computing, in hopes of further developing cutting-edge technologies such as AI. Yang Hongtao, CTO of Sogou, stated at the CCF-GAIR summit that the search engine is the biggest application scenario for AI technologies. Sogou has been developing intelligent speech technology since 2012, and deep learning technology since 2013. Statistics suggest that over 140 million speech input requests are sent to Sogou’s mobile input method every day. The adoption of Sogou’s speech input technology has, as a result, significantly improved the user experience of mobile apps such as WeChat.
Li Lei, a scientist at TouTiao and head of TouTiao Lab, used to be a scientist at Baidu’s Deep Learning Lab in the US. At the summit, Mr. Li revealed that TouTiao was firmly committed to developing AI technologies. According to him, TouTiao Lab had been established just four years after TouTiao itself was founded. He added that TouTiao had long been developing cutting-edge technologies, both to meet the needs of current businesses and to prepare for future ones. In addition, he explained that machine learning was crucial in linking content producers with content consumers, and that TouTiao could even be called an AI company.
Founded in 2012, Tencent’s machine learning R&D arm Youtu Lab is primarily focused on image processing, pattern recognition, deep learning, etc. At present, the lab owns some ten cutting-edge technologies and the computing capacity to process over 100 billion photos. Mr. Huang Feiyue has been the lab’s director and expert researcher since its foundation. At the summit, Mr. Huang stated that it was Tencent’s wish to fundamentally improve people’s quality of life with AI technology, and that Tencent was ready to share its latest progress through open platforms and Tencent Cloud.
Cheetah Mobile is also heavily focused on AI technologies. This July, Cheetah officially announced that it would transform from a security and tools software company into an AI company by developing deep learning and personalized content distribution technology. Although Cheetah did a very good job of internationalization, its stock price still dropped after its growth hit a bottleneck. In 2016, Cheetah acquired News Republic for $57 million in hopes of opening up new business with TouTiao’s method, taken international. Cheetah’s CEO Fu Sheng believes that AI will bring about a new wave of growth in the post-internet era.
Founded in September 2015, the Qihoo 360 AI Institute is primarily focused on developing deep learning technologies, seizing the opportunities of big data and cloud computing, supporting the relevant divisions of Qihoo 360 and accumulating experience in frontier areas. After the AlphaGo match drew to an end, Qihoo 360’s president and CEO Zhou Hongyi sent an internal letter to all employees, stating that it was only a matter of time before AI technologies, services and products spread to the general public. Yan Shuicheng, director of the Qihoo 360 AI Institute, pointed out that AI was bringing the academic and industrial circles together.
Baidu has been investing a great deal in the AI sector. Its driverless car project has attracted huge attention since its debut at the Second World Internet Conference held in Wuzhen, and Baidu plans to mass-produce driverless vehicles within the next five years. Wang Jin, senior vice president of Baidu and general manager of Baidu’s Autonomous Driving Business Unit, revealed at the summit that Baidu purchased a lidar unit last December for 700,000 RMB, and that Baidu’s autonomous driving system, also referred to as the “Baidu Vehicle Brain”, will cost over 200,000 RMB. As a matter of fact, Baidu has already reached an agreement with Velodyne LiDAR: the lidar giant has promised to lower the unit price of its 64-beam lidar to $500 if Baidu’s order volume reaches 1 million units next year, which would remove the last obstacle to the mass production of Baidu’s driverless vehicles.
Although LeTV poached Ni Kai, former senior scientist at the Baidu Deep Learning Institute and former head of Baidu’s unmanned driving project, and appointed him head of LeTV’s super car project, Mr. Ni revealed at the summit that he is actually not in charge of that project. According to him, LeTV has joined hands with Faraday Future and established the FF Le Future Lab, whose primary focus is supporting LeTV’s smartphone, sports and automobile businesses with AI technology. Besides its R&D center in Silicon Valley, LeTV also plans to establish another center in Beijing.
Besides internet giants and IT companies, privately-owned companies are also eyeing the industry, and some of them even see it as a golden opportunity and believe the market is worth over 100 billion RMB. For example, Dahua Technology, a provider of high-value, end-to-end security solutions, ranks second worldwide in the security video surveillance market, according to IHS. Its output value reached 10 billion RMB in 2015, and its next goal is to raise that figure to 100 billion RMB. To achieve this goal, it has already established its own Internet of Vision (IoV) brand LeChange, as well as an open platform featuring smart hardware, cloud and IoV technology.
A glimpse of the future
During the CCF-GAIR 2016 summit, international AI experts, including the Head of the Department of Computer Science at the University of Oxford, the Dean of Engineering at the University of Pennsylvania and the Director of MIT CSAIL, shared the latest progress in the AI sector with the Chinese AI circle. Michael Wooldridge, Head of the Department of Computer Science at the University of Oxford and of the Oxford-DeepMind partnership, stated that AI technologies can already solve complicated problems such as chess, SAT solving and automated driving. In the future, AI technologies should be able to achieve real-time speech understanding, bicycle riding, reliable interpretation and translation, understanding of and responding to complicated stories, making up jokes and interesting stories, explaining the meaning of a picture, etc.
Mr. Wooldridge believed it was still too early to talk about a perfect blend of AI and human intelligence; for him, it was even possible that that day would never come. Although scientists have made enormous progress with AlphaGo, there are still lots of unsolved problems. For example, AlphaGo doesn’t “know” that it’s playing a game and can’t explain its strategy. Basically, AlphaGo is like a black box: it can be viewed in terms of its inputs and outputs, but we have no knowledge of its internal workings. That is to say, AlphaGo can’t be developed into a general-purpose product.
He also told the audience that his main research field was “multi-agent systems”, that is, integrating existing AI services and algorithms from narrow task areas. At present, AI technologies are applied across various products and services, which leaves AI capabilities separated and fragmented. The next goal for scientists is to integrate the AI services in all these systems and products, so that, for example, people might “communicate” with other people’s AI systems through their own smartphones and then settle on a meeting time that everybody can attend.
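The meeting example can be sketched in a few lines (an entirely hypothetical toy, not a real multi-agent system): each person’s assistant is an agent that exposes its owner’s free slots, and a coordinator negotiates by intersecting their proposals.

```python
# Toy sketch of multi-agent meeting scheduling: each agent proposes its
# owner's free hours, and the coordinator picks a slot acceptable to all.

class CalendarAgent:
    def __init__(self, owner, free_slots):
        self.owner = owner
        self.free_slots = set(free_slots)  # hours of the day the owner is free

    def propose(self):
        return self.free_slots

def schedule_meeting(agents):
    """Return the earliest slot every agent accepts, or None if none exists."""
    common = set.intersection(*(a.propose() for a in agents))
    return min(common) if common else None

agents = [
    CalendarAgent("Ann", [9, 10, 14, 15]),
    CalendarAgent("Bo", [10, 11, 15]),
    CalendarAgent("Chen", [10, 15, 16]),
]
slot = schedule_meeting(agents)  # earliest common slot: 10
```

In a real multi-agent system the negotiation would be far richer (preferences, privacy, counter-offers), but the core idea is the same: separate AI services exposing interfaces that other agents can query and combine.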
Vijay Kumar, a member of the US National Academy of Engineering and Dean of Engineering at the University of Pennsylvania, is regarded as a pioneer of the drone sector, and his students can be found at major drone makers around the world. He pointed out in his speech that research into drones and aerial robots helps accumulate data for better studying and understanding the behavior of robots. Mr. Kumar also shared his latest findings about the “hive effect”:
When low-intelligence species such as ants, birds and fish gather and work together, they can accomplish astonishingly complicated projects. Likewise, when small-scale and micro drones cooperate with each other, they may also be able to complete complicated computing tasks.
As a matter of fact, his theory might also apply to ground and undersea robots.
MIT has always been one of the pioneers in the study of robotics. Daniela Rus, a member of the US National Academy of Engineering and Director of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), believed that everybody will have his or her own robot in the future. By then, robots will be as ordinary as automobiles. Some robots might look like snakes or fish, while others might handle specific, practical matters, such as diving into people’s guts and pulling fish bones out.
Sun Yu, an engineering professor at the University of South Florida and an expert in robotics and deep learning, is known for his work on the “black technology” of robotic hands, which aim to be on a par even with human hands. According to Mr. Sun, a quarter of the 206 bones in the human body are in the hands, making them the body’s most complicated organs. He added that the difference between robotic intelligence and computational intelligence is that robots have to interact with the real world while computers don’t, and that the study of robotic hands matters because it falls into the former category.
Professor Yang Qiang believed transfer learning, which makes it easier to apply existing machine learning systems to new scenarios, would be one of the major trends of the next stage. He also noted that the Chinese machine learning and AI community was still not as well-balanced as its counterparts abroad, and that AI technology is far more than deep learning alone.
Zhou Zhihua, deputy dean of the Department of Computer Science of Nanjing University and director of LAMDA, predicted that increasing the robustness of machine learning would be the next trend in the sector. According to him, many machine learning systems can already match human intelligence in common situations; in extreme circumstances, however, they can be way off base. In this sense, adjusting machine learning systems to make sure they function well even in extreme circumstances should be a prerequisite for spreading the technology more broadly.
Still, the development of the AI and robotics industry relies heavily on entrepreneur communities. Wang Tianmiao, Chair Professor and Yangtze River Scholar for the Ministry of Education, predicted that within the next five years robots will first play a key role in three sectors: industry, services, and smart vehicles and drones. As AI technologies mature, robots will play a more important role in banks, homes, hospitals, hotels, etc. Zhang Quanling, a partner at Ziniu Ventures, stressed that business models centered on AI technologies cannot be devised by sitting in a lab. Zhang Hongjiang, CEO of Kingsoft Software and Kingsoft Cloud, said that the Chinese AI industry was developing rapidly, and that the gap between Chinese and American AI companies was narrowing.
Minieye is a startup that grew out of a Singaporean government ADAS project. Liu Guoqing, CEO of Minieye, believed that the division of labor would be highly refined in the era of AI: every supplier would have its own focus and excel in one or a few areas. Smart automobile makers would then look more like integrators of various parts, unable to do everything by themselves, just like PC makers today.
No matter how determined the internet giants are in the AI field, division of labor on the basis of specialization will be the dominant rule of the AI ecosystem.
[This article is published and edited with authorization from the author @Wu Ningchuan. Please note the source and include a hyperlink when reproducing it.]
Translated by Levin Feng (Senior Translator at PAGE TO PAGE), working for TMTpost.
Originally published at www.tmtpost.com on August 16, 2016.