Articles 8/24

FTI
Aug 24, 2017


Article #1

China AI startup Cambricon receives new funding of US$100 million

China-based AI chip startup Cambricon Technologies has raised US$100 million in Series A funding to support its development of advanced AI chips, a round expected to turn the company into a tech unicorn valued at over US$1 billion in the near future, according to company sources.

The funding was led by SDIC Chuangye Investment Management, a subsidiary of China’s State Development and Investment Corporation, with other prominent investors including Alibaba, Lenovo, robot maker Zhongke Tuling Century Beijing Technology, and the investment arm of the Chinese Academy of Sciences (CAS).

Chen Tianshi, the company’s founder and CEO, said the funds raised in a 2016 pre-A round were used to industrialize the Cambricon-1A, the first chip dedicated to high-performance neural network applications. The new proceeds will support the rollout of more advanced AI chip products and processors for cloud and terminal devices, as well as the development of standard AI instruction set architectures.

Last year alone, Cambricon received CNY100 million (US$15 million) worth of licensing orders for the Cambricon-1A IP and instruction set from makers of smartphones, security devices, and wearables. Huawei’s HiSilicon Kirin 970 chipset, due to roll out in September, is also reported to carry the Cambricon-1A instruction set.

Commercial production of AI chips through TSMC

In addition, Cambricon is negotiating with Taiwan Semiconductor Manufacturing Company (TSMC) to fabricate its AI chips on a 14nm process, with official production likely to start within a year. Market observers said that whether Cambricon’s AI chips can be successfully fabricated by TSMC will determine whether the Chinese company can smoothly transition from licensing its IP and instruction set to commercializing AI chips.

As a startup incubated by the Chinese Academy of Sciences, Cambricon also bears the mission of developing into a unicorn in China’s AI chip sector, observers said. At the moment, GPU (graphics processing unit) chips enjoy wide application in deep neural network training, but many tech heavyweights and startups remain active in developing alternatives. Among them, Intel last year released its Nervana AI processor, which can accelerate various neural networks; Google launched its TPU (tensor processing unit) chips to expedite deep neural networks; and Microsoft, AMD, and Baidu have also joined the arena.

In response, Cambricon’s Chen said that GPU chips currently constitute the mainstream AI computing platform, but their basic architecture was not designed for AI applications, which limits their computing efficiency on AI workloads. He stressed that the ideal AI chip should be a brand-new processor capable of multimodal processing across voice, speech, images, video, and natural language, and should operate far more efficiently than a CPU or GPU.

To achieve this, a brand-new AI instruction set is badly needed so that diverse algorithms can be deployed nimbly on AI chips, in sharp contrast to the past, when software and hardware ecosystems were all built on the ARM and x86 instruction sets, according to Chen.
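To see what is at stake, consider a toy comparison (a minimal sketch with illustrative numbers, not Cambricon’s actual ISA): on a general-purpose scalar instruction set, a single matrix-vector product decomposes into tens of thousands of multiply and add instructions, whereas a dedicated neural-network ISA can express the whole operation as one instruction.

```python
# Toy comparison: instruction counts for one 256x256 matrix-vector
# product (illustrative numbers, not Cambricon's actual ISA).

rows, cols = 256, 256

# General-purpose scalar ISA (ARM/x86 style): roughly one multiply
# and one add per matrix element, issued one at a time.
scalar_instructions = rows * cols * 2

# Dedicated neural-network ISA: the whole product can be expressed
# as a single matrix-multiply-vector instruction.
nn_isa_instructions = 1

print(f"scalar ISA: {scalar_instructions:,} instructions")  # 131,072
print(f"NN ISA:     {nn_isa_instructions} instruction")
```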

With China boasting the world’s largest market for AI applications, Chen stressed, the country is well positioned to significantly shape the international AI ecosystem. The most crucial task, in this regard, is for China to develop its own core AI instruction sets and make them international standards for AI applications.
____

Article #2

Intel, Qualcomm, Google, and NVIDIA Race to Develop AI Chips and Platforms

Artificial intelligence labs race to develop processors that are bigger, faster, stronger.
With major companies rolling out AI chips and smaller startups nipping at their heels, there’s no denying that the future of artificial intelligence is already upon us. While each chip boasts slightly different features, all strive to provide ease of use, speed, and versatility. Manufacturers are demonstrating more adaptability than ever and are rapidly developing new versions to meet growing demand.

In a marketplace that promises to do nothing but grow, these four are braced for impact.

Qualcomm Neural Processing Engine
The Verge reports that Qualcomm’s processors account for approximately 40% of the mobile market, so their entry into the AI game is no surprise. They’re taking a slightly different approach, though: adapting existing technology that plays to Qualcomm’s strengths. They’ve developed the Neural Processing Engine, an SDK that lets developers optimize apps to run different AI applications on Snapdragon 600 and 800 series processors. Ultimately, this integration means greater efficiency.

Image courtesy of Qualcomm.

Facebook has already begun using the SDK to speed up augmented reality filters within its mobile app. Qualcomm’s website says it may also be used to help a device’s camera recognize and detect objects for better shot composition, as well as make on-device post-processing beautification possible. The company promises more capabilities via the virtual voice assistant and points to broad market applications, “from healthcare to security, on myriad mobile and embedded devices.” It also touts superior malware protection.

“It allows you to choose your core of choice relative to the power performance profile you want for your user,” said Gary Brotman, Qualcomm’s head of AI and machine learning.

Qualcomm’s SDK works with popular AI frameworks, including TensorFlow, Caffe, and Caffe2.
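As a rough sketch of what that per-core choice might look like in application code (hypothetical names throughout; this is not Qualcomm’s actual API, and the real SDK runs models converted offline into its own container format):

```python
# Hypothetical sketch of the Neural Processing Engine idea:
# load a converted model, then pick the Snapdragon core to run it on.
# Names are illustrative, not Qualcomm's actual API.

class NeuralProcessingEngine:
    CORES = ("cpu", "gpu", "dsp")  # different power/performance profiles

    def __init__(self, model_path: str, core: str = "dsp"):
        if core not in self.CORES:
            raise ValueError(f"unknown core: {core}")
        self.model_path = model_path
        self.core = core

    def run(self, frame):
        # The real SDK would execute the network on the chosen core;
        # here we just report what would happen.
        return f"{self.model_path} inference on {self.core}: {frame}"

# e.g. an AR filter model run on the GPU for throughput:
engine = NeuralProcessingEngine("ar_filter_model.bin", core="gpu")
print(engine.run("camera_frame_0"))
```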

Google Cloud TPU
Google’s AI chip showed up relatively early to the AI game, disrupting what had been a fairly one-sided marketplace. Google has no plans to sell the processor; instead, Wired reports, it is distributing the chip via a new cloud service through which anyone can build and operate software over the internet on hundreds of processors packed into Google data centers.

The chip, called TPU 2.0 or Cloud TPU, is a follow-up to the initial processor that brought Google’s AI services to fruition, but unlike its predecessor it can train neural networks, not just run them. Because the chip is designed for TensorFlow, developers will need to learn a different way of building neural networks, though Google expects that the chip’s affordability will win users over. Google has also said that researchers who share their research with the greater public will receive access for free.

Image courtesy of Google.

Jeff Dean, who leads the Google Brain AI lab, says the chip was needed to train models more efficiently. Each chip can handle 180 trillion floating-point operations per second, and several chips connect to form a pod that offers 11,500 teraflops of computing power. That means a training run that previously took a full day on 32 of the best GPU boards can now finish in about six hours on just a portion of a pod.
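A quick back-of-the-envelope check, using only the figures quoted above, shows how the per-chip and per-pod numbers relate:

```python
# Back-of-the-envelope arithmetic using the figures quoted above.
tflops_per_chip = 180      # one Cloud TPU (TPU 2.0) device
tflops_per_pod = 11_500    # one full pod

chips_per_pod = tflops_per_pod / tflops_per_chip
print(f"~{chips_per_pod:.0f} chips per pod")  # ~64

# Quoted training-time improvement: a full day down to about six hours.
print(f"~{24 / 6:.0f}x faster")               # ~4x
```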

Intel Movidius Neural Compute Stick
Intel offers an AI chip via the Movidius Neural Compute Stick, a USB 3.0 device with a specialized vision processing unit (VPU). It’s meant to complement the Xeon and Xeon Phi, and costs only $79.

While it is optimized for vision applications, Intel says that it can handle a variety of DNN applications. They write, “Designed for product developers, researchers and makers, the Movidius Neural Compute Stick aims to reduce barriers to developing, tuning and deploying AI applications by delivering dedicated high-performance deep-neural network processing in a small form factor.”

Image courtesy of Movidius.

The stick is powered by a VPU like those found in smart security cameras, AI drones, and industrial equipment. It can be used with a trained Caffe-based feed-forward convolutional neural network (CNN), or the user may choose another pre-trained network, Intel reports. The stick supports CNN profiling, prototyping, and tuning workflows; takes its power and data over a single USB Type-A port; does not require cloud connectivity; and supports running multiple devices on the same platform.

From Raspberry Pi to PC, the Movidius Neural Compute Stick can be used with any USB 3.0 platform.
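A minimal inference loop on the stick, assuming the first-generation NCSDK’s `mvnc` Python bindings and a `graph` file pre-compiled from a trained Caffe network (exact names may differ across SDK releases), looks roughly like this:

```python
# Minimal sketch of one inference pass on the Neural Compute Stick,
# assuming the first-generation NCSDK Python bindings (mvnc).
import numpy as np
from mvnc import mvncapi as mvnc

# Find and open the first attached stick.
devices = mvnc.EnumerateDevices()
device = mvnc.Device(devices[0])
device.OpenDevice()

# Load a graph pre-compiled from a trained Caffe CNN.
with open("graph", "rb") as f:
    graph = device.AllocateGraph(f.read())

# Feed one FP16 image tensor through the network.
image = np.zeros((224, 224, 3), dtype=np.float16)  # stand-in input
graph.LoadTensor(image, "user object")
output, user_obj = graph.GetResult()
print(output.shape)

# Release the graph and close the device.
graph.DeallocateGraph()
device.CloseDevice()
```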

NVIDIA Tesla V100
NVIDIA was the first to get really serious about AI, and they’re even more serious now. Their new chip, the Tesla V100, is a data center GPU. Reportedly, it made enough of a stir that NVIDIA’s shares jumped 17.8% the day after the announcement.

Image courtesy of NVIDIA.

The chip stands apart in training, which typically requires multiplying matrices of data one number at a time. The Volta GPU architecture instead multiplies entire rows and columns at once, which speeds up AI training.

With 640 Tensor Cores, Volta is five times faster than Pascal and cuts training time from 18 hours to 7.4. It also uses next-generation high-speed interconnect technology that, according to NVIDIA’s website, “enables more advanced model and data parallel approaches for strong scaling to achieve the absolute highest application performance.”
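Conceptually, each Tensor Core executes a fused multiply-accumulate over a small matrix tile in one step, rather than one scalar product at a time. The NumPy sketch below illustrates that 4x4 operation, D = A x B + C, with FP16 inputs accumulated in FP32 (a conceptual illustration only, not CUDA code):

```python
# Conceptual sketch of a single Tensor Core operation: D = A @ B + C
# on 4x4 tiles, FP16 inputs with FP32 accumulation.
import numpy as np

A = np.ones((4, 4), dtype=np.float16)   # FP16 input tile
B = np.ones((4, 4), dtype=np.float16)   # FP16 input tile
C = np.zeros((4, 4), dtype=np.float32)  # FP32 accumulator tile

# A scalar pipeline would issue the 64 multiplies and 64 adds one by
# one; a Tensor Core completes the entire tile as one operation.
D = A.astype(np.float32) @ B.astype(np.float32) + C
print(D)  # every entry is 4.0 (a 4-term dot product of ones)
```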

____
Article #3

Sequoia China backs AI startup Hyper Anna in AU$16m round
The funding will be used for expansion into Asia and the US.

Artificial intelligence startup Hyper Anna has announced raising AU$16 million in a Series A round led by Sequoia China, with participation from Airtree Ventures, Westpac Reinventure, and IAG Firemark Ventures.

The latest round brings the total amount raised by the Sydney-based startup to AU$17.25 million since its founding in 2015.

The funding will be used to support expansion into international markets including China, Hong Kong, Singapore, and the US. An office in Hong Kong is already scheduled to open in September.

Founded by data scientists Natalie Nguyen, Kent Tian, and Sam Zheng, Hyper Anna builds an artificially intelligent data scientist, touted as the “Siri for analytics,” that sits on top of company databases and answers questions about business performance in natural language.

“The idea behind Anna is that all businesses, regardless of scope or size, deserve to have access to data scientists to drive value from the data they create and own. With a shortage of talent in the market Hyper Anna allows customers to scale their data analysis requirements in a very efficient manner,” Nguyen said in a statement.

Users can ask questions in plain English such as, “How are my sales doing?” and Anna will provide relevant insights.

“The intention is to track sales against time, and just like a human, Anna picks up on those nuances,” Nguyen said.

“Intent is something that humans have a knack for intrinsically understanding, but something that machines have traditionally struggled with.”
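As a toy illustration of that intent problem (entirely hypothetical; Hyper Anna’s actual pipeline is not public), a question like “How are my sales doing?” must be resolved into a concrete query: a metric, an aggregation, and the implied time axis.

```python
# Toy sketch of resolving a natural-language question into a query
# intent (hypothetical; not Hyper Anna's implementation).

METRICS = {"sales": "SUM(sale_amount)", "revenue": "SUM(revenue)"}

def parse_intent(question: str) -> dict:
    q = question.lower()
    metric = next((m for m in METRICS if m in q), None)
    # "How are my sales doing?" implies a trend: track the metric
    # over time rather than returning a single total.
    wants_trend = any(w in q for w in ("how", "doing", "trend"))
    return {"metric": metric, "group_by": "month" if wants_trend else None}

intent = parse_intent("How are my sales doing?")
# {'metric': 'sales', 'group_by': 'month'}
query = (f"SELECT {intent['group_by']}, {METRICS[intent['metric']]} "
         f"FROM sales GROUP BY {intent['group_by']}")
print(query)
```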

Eventually, users will be able to ask questions in multiple languages using voice, text, and email, and interact with any collection of data on any device.

Data held in applications such as Salesforce and Google Analytics can be accessed using APIs, while other data required for analysis can be fed into the system on an as-needed basis.

“The brain of Anna constantly needs to reap feeds of data so it gets smarter — so having a central place for the brain to sit is extremely important and also helps with deployment time. We deploy literally within a day with Azure,” Nguyen previously said.
