Preferred Networks Unveils Dedicated DL Processor MN-Core

Synced · Published in SyncedReview · Dec 26, 2018

Japanese AI startup Preferred Networks (PFN) has developed a new processor dedicated to deep learning. The company unveiled the MN-Core chip, board, and server last week at SEMICON Japan 2018 in Tokyo.

PFN’s open-source deep learning framework Chainer and its powerful GPU clusters MN-1 and MN-1b currently support the company’s research and development activities. These clusters can be used with innovative software to conduct large-scale distributed deep learning, accelerating R&D in autonomous driving, intelligent robots, cancer diagnosis, and other areas.

To speed up the deep learning training phase, PFN’s new MN-Core chip is optimized for performing matrix operations. Floating-point operations per second per watt is currently one of the most important benchmarks in chip development, and MN-Core is expected to achieve top-class performance per watt of 1 TFLOPS/W at half precision. By focusing on a minimal set of functionalities, the dedicated chip can boost deep learning performance while also reducing costs.

MN-Core chip specs:
- Fabrication process: TSMC 12nm
- Estimated power consumption (W): 500
- Peak performance (TFLOPS): 32.8 (DP) / 131 (SP) / 524 (HP)
- Estimated performance per watt (TFLOPS/W): 0.066 (DP) / 0.26 (SP) / 1.0 (HP)

(Notes) DP: double precision, SP: single precision, HP: half precision
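The quoted performance-per-watt figures follow directly from the peak-performance and power numbers above. A quick sanity check of that arithmetic (using only values from the spec list):

```python
# Back-of-envelope check of MN-Core's quoted performance-per-watt figures,
# using the peak-performance and power numbers from the spec list above.

POWER_W = 500  # estimated power consumption (W)

# Peak performance in TFLOPS at each precision, from the spec sheet.
peak_tflops = {"DP": 32.8, "SP": 131.0, "HP": 524.0}

for precision, tflops in peak_tflops.items():
    # TFLOPS divided by watts gives TFLOPS/W.
    print(f"{precision}: {tflops / POWER_W:.3f} TFLOPS/W")
```

This prints roughly 0.066 (DP), 0.262 (SP), and 1.048 (HP) TFLOPS/W, matching the rounded figures PFN quotes.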

PFN sees further improvements in the accuracy and computation speed of pre-trained deep learning models as essential for tackling complex, unsolved problems. To increase computing resources and make them more efficient, PFN is building a new MN-3 large-scale cluster, which will be loaded with MN-Cores and contain more than 1,000 dedicated server nodes. The MN-3 is slated to be operational by the spring of 2020.

PFN will also advance development of its Chainer deep learning framework so that MN-Core can be selected as a backend, further accelerating distributed deep learning performance.

By leveraging the upcoming MN-3 cluster, PFN hopes to boost computation speed to a target of 2 EFLOPS.
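The 2 EFLOPS target can be put in context with the per-chip numbers above. The half-precision peak (524 TFLOPS) and the ">1,000 server nodes" figure come from this article; PFN does not state how many chips each node carries, so the per-node split below is purely an illustrative assumption:

```python
# Rough arithmetic on the MN-3 cluster's 2 EFLOPS target. Per-chip peak
# (524 TFLOPS, half precision) and the ">1,000 nodes" figure are from the
# article; the chips-per-node split is an assumption for illustration only.

TARGET_EFLOPS = 2.0
CHIP_PEAK_TFLOPS_HP = 524.0
MIN_NODES = 1000

target_tflops = TARGET_EFLOPS * 1_000_000  # 1 EFLOPS = 1,000,000 TFLOPS
chips_needed = target_tflops / CHIP_PEAK_TFLOPS_HP

print(f"chips needed at HP peak: {chips_needed:.0f}")
print(f"implied chips per node (1,000 nodes): {chips_needed / MIN_NODES:.1f}")
```

On these assumptions, hitting 2 EFLOPS at half precision would require roughly 3,800 MN-Core chips, i.e. on the order of four chips per node across 1,000+ nodes.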

The PFN research group was led by Kobe University Professor Junichiro Makino, in cooperation with Japan’s New Energy and Industrial Technology Development Organization (NEDO).

(Information provided by Preferred Networks)

2018 Fortune Global 500 Public Company AI Adaptivity Report is out!
Purchase a Kindle-formatted report on Amazon.
Apply for Insight Partner Program to get a complimentary full PDF report.

Follow us on Twitter @Synced_Global for daily AI news!

We know you don’t want to miss any stories. Subscribe to our popular Synced Global AI Weekly to get weekly AI updates.


AI Technology & Industry Review — syncedreview.com | Newsletter: http://bit.ly/2IYL6Y2 | Share My Research http://bit.ly/2TrUPMI | Twitter: @Synced_Global