Preferred Networks Builds State-of-the-Art Supercomputer MN-2 Powered by NVIDIA GPUs

Synced · Published in SyncedReview · Mar 22, 2019

Preferred Networks (PFN) is completing a new private supercomputer, MN-2, which the Japanese AI startup expects to have operational in July 2019.

MN-2 is a cutting-edge multi-node GPGPU (general-purpose computing on GPUs) platform using NVIDIA V100 Tensor Core GPUs. Combined with two other PFN private supercomputers, MN-1 (in operation since September 2017) and MN-1b (in operation since July 2018), MN-2 will provide PFN with total computing resources of about 200 PetaFLOPS*1. PFN also plans to start operating MN-3, a private supercomputer built around PFN’s proprietary deep learning processor MN-Core, in spring 2020.

PFN believes investing in computing resources will help it accelerate practical R&D applications in deep learning technologies and establish a competitive edge in the global development race.

Conceptual image of MN-2

Outline of PFN’s next-generation private supercomputer MN-2

PFN’s private supercomputer MN-2 is equipped with 5,760 state-of-the-art CPU cores and 1,024 NVIDIA V100 Tensor Core GPUs. Built at the Yokohama Institute for Earth Sciences of the Japan Agency for Marine-Earth Science and Technology, MN-2 has a theoretical peak of about 128 PetaFLOPS in mixed-precision calculations (a method widely used in deep learning), more than double the peak performance of MN-1b.
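That figure is consistent with a simple back-of-envelope calculation. The roughly 125 TFLOPS mixed-precision Tensor Core rating per V100 (SXM2) used below is NVIDIA’s published specification, assumed here rather than taken from the announcement:

```python
# Back-of-envelope check of MN-2's quoted theoretical peak, assuming NVIDIA's
# published ~125 TFLOPS mixed-precision (Tensor Core) rating per V100 SXM2 GPU.
num_gpus = 1024
tflops_per_gpu = 125  # TFLOPS, FP16 multiply with FP32 accumulate

peak_pflops = num_gpus * tflops_per_gpu / 1000  # 1 PFLOPS = 1000 TFLOPS
print(f"Theoretical peak: {peak_pflops:.0f} PFLOPS")  # -> 128 PFLOPS
```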

Each MN-2 node has four 100-gigabit Ethernet interfaces and uses RoCEv2 (RDMA over Converged Ethernet v2) to interconnect with the other GPU nodes. This specially tuned interconnect enables high-speed multi-node processing. PFN will also build its own software-defined storage with a total capacity of over 10 PB and optimize data access for machine learning to speed up training.
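For a sense of scale, the per-node network bandwidth implied by that configuration works out as follows (simple arithmetic on the figures above, not a measured throughput):

```python
# Aggregate interconnect bandwidth per MN-2 node, computed from the figures
# in the announcement: four 100-gigabit Ethernet links per node, carrying RoCEv2.
links_per_node = 4
gbits_per_link = 100

total_gbits = links_per_node * gbits_per_link  # 400 Gbit/s per node
total_gbytes = total_gbits / 8                 # ~50 GB/s per node
print(f"{total_gbits} Gbit/s ≈ {total_gbytes:.0f} GB/s per node")
```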

PFN will utilize the open-source deep learning framework Chainer on MN-2 to accelerate R&D in fields that require massive computing resources, such as personal robots, transportation systems, manufacturing, bio/healthcare, sports, and creative industries.
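For readers unfamiliar with Chainer, here is a minimal sketch of its basic idiom of defining a Chain and setting up an optimizer. This is a generic toy model written for illustration under common Chainer conventions, not PFN’s actual MN-2 training code:

```python
import chainer
import chainer.functions as F
import chainer.links as L
from chainer import optimizers

# A small, hypothetical multilayer perceptron in Chainer's define-by-run style.
class MLP(chainer.Chain):
    def __init__(self, n_hidden=1000, n_out=10):
        super(MLP, self).__init__()
        with self.init_scope():
            self.l1 = L.Linear(None, n_hidden)  # input size inferred on first call
            self.l2 = L.Linear(n_hidden, n_out)

    def forward(self, x):
        h = F.relu(self.l1(x))
        return self.l2(h)

model = L.Classifier(MLP())   # wraps the model with a softmax cross-entropy loss
optimizer = optimizers.Adam()
optimizer.setup(model)

# On a GPU node, the model would then be moved to a device, e.g.:
# model.to_gpu(0)
```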

*1: The figure for MN-1 is the total PetaFLOPS in half precision. For MN-1b and MN-2, the figures are PetaFLOPS in mixed precision. Mixed precision is the combined use of more than one floating-point precision format.
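A toy illustration of why more than one precision format is combined (a generic numerical example, not MN-2’s actual training recipe): small per-step updates can vanish when accumulated purely in half precision, but survive when accumulated in single precision.

```python
import numpy as np

# Accumulate 10,000 small increments of 0.001 (true total: 10.0).
acc16 = np.float16(0.0)
acc32 = np.float32(0.0)
for _ in range(10000):
    acc16 = np.float16(acc16 + np.float16(1e-3))  # stalls once the float16 spacing exceeds the increment
    acc32 = np.float32(acc32 + np.float32(1e-3))

print("float16 accumulator:", acc16)  # far below the true total of 10.0
print("float32 accumulator:", acc32)  # close to 10.0
```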

“High computational power is one of the major pillars of deep learning R&D. We are confident that the MN-2 with 1,024 NVIDIA V100s will further accelerate our R&D,” says PFN Corporate Officer and VP of Systems Takuya Akiba.

Says NVIDIA Japan Country Manager and Vice President of Corporate Sales Masataka Osaki: “NVIDIA is truly honored that Preferred Networks has chosen the NVIDIA Tesla V100 SXM2 for MN-2, in addition to the currently operating MN-1 and MN-1b, which are also powered by our cutting-edge data center GPUs. We anticipate that MN-2, accelerated by NVIDIA’s flagship product with the high-speed GPU interconnect NVLink, will spur R&D of deep learning technologies and produce world-leading solutions.”

Preferred Networks was founded in 2014 and promotes the business application of deep learning technology, focusing on IoT and Edge Heavy Computing in transportation, manufacturing, and bio/healthcare. The company developed the open-source deep learning framework Chainer and has collaborated with organizations such as Toyota Motor Corporation, Fanuc Corporation, and the National Cancer Center of Japan.

(Information provided by Preferred Networks. For inquiries contact PFN at pfn-pr@preferred.jp)

2018 Fortune Global 500 Public Company AI Adaptivity Report is out!
Purchase a Kindle-formatted report on Amazon.
Apply for Insight Partner Program to get a complimentary full PDF report.

Follow us on Twitter @Synced_Global for daily AI news!

We know you don’t want to miss any stories. Subscribe to our popular Synced Global AI Weekly to get weekly AI updates.
