Welcome to the Mythic Technical Blog

Published in Mythic · Jun 7, 2018

A Few Introductions

  1. What is Mythic? We are a young company building a new chip dedicated to AI. Our chip combines incredible advances in hardware and software to deliver massive compute at low power for neural network inference.
  2. What is this Blog? While on this journey to create this new technology, the various hardware and software teams here use, leverage, invent, and run up against all sorts of interesting technical concepts, ideas, and solutions. These run the gamut from:
    - Cool new C++ 17 features
    - New ways to train neural networks
    - Advanced simulation techniques
    - Custom compilers designed to optimize neural networks
    - DNN code refactoring to be reusable and componentized
    - Sanity-preserving Jira workflows
    - Fun with graph theory!
    - How to build and modify compilers for fun and profit!
    - And much, much more…
  3. Who am I? I’m Tyler Kohn, Head of Platform Technology. Basically that just means I try to help our incredible software and deep learning teams build amazing things.

Why “Technical”?

Often these blogs are called “Engineering” or “Research.” Here at Mythic, we have a number of disciplines that all relate to technical concepts or technical processes. Many are engineering-focused in nature, but there’s plenty of science, research, process (e.g., ISO 9000), operations, and manufacturing. We’re excited to start a conversation on all of the things that go into a great product and technology.

Some Mythic Context

To frame some of the things you’ll see us post about, I’ll try to provide some high level context around what we are doing and touch briefly on technical details.

The Deep Learning Resource Challenge

The math behind deep neural networks (DNNs) is pretty simple when you boil it down to the basic operations. The operations are simple, but there are LOTS of ’em. For inference, the complexity comes from trying to optimize hardware resource utilization for massive numbers of these basic operations.
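
As a back-of-the-envelope illustration (the layer sizes below are made up, not taken from any particular network), a single fully connected layer is nothing but one matrix-vector product, yet it already implies millions of multiply-accumulate operations:

```python
import numpy as np

# Toy illustration with made-up layer sizes: a fully connected layer is just
# multiply-accumulate operations, but the count grows very quickly.
inputs, outputs = 4096, 4096          # hypothetical layer dimensions
x = np.random.randn(inputs)           # incoming activations
W = np.random.randn(outputs, inputs)  # learned weights

y = W @ x                             # the whole layer: one matrix-vector product

macs = outputs * inputs               # one multiply-accumulate per weight
print(f"{macs:,} multiply-accumulates for this single layer")  # 16,777,216
```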

For example, in a particular DNN you may have learned tens of millions or more weights during training. During inference, those weights are used in matrix multiplications many times over. The volume of these values exceeds the SRAM or other on-chip storage capacity of a digital microprocessor (GPU or CPU), so the values are stored off-chip and moved on and off the chip as needed to perform operations. This movement of data consumes an incredible amount of time and electricity. Time and power are critical in use cases such as autonomous cars: every millisecond counts and can save lives when detecting pedestrians. Some electric cars using existing technology for self-driving capabilities spend over a third of their battery just to power computation, and some use a gas generator just to power all the DNN operations!
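
To make that storage gap concrete, here is some rough arithmetic with assumed, illustrative numbers (not figures for any specific processor):

```python
# Rough, assumed numbers: why the weights end up living off-chip.
num_weights = 50_000_000          # "tens of millions or more" learned weights
bytes_per_weight = 4              # stored as 32-bit floating point
weight_bytes = num_weights * bytes_per_weight
on_chip_sram = 16 * 1024**2       # a hypothetical 16 MB of on-chip SRAM

print(f"weights need  {weight_bytes / 1024**2:.0f} MB")     # ~191 MB
print(f"on-chip SRAM  {on_chip_sram / 1024**2:.0f} MB")     # 16 MB
print(f"shortfall     {weight_bytes / on_chip_sram:.0f}x")   # roughly 12x too big
```

Everything that doesn’t fit has to be shuttled across the chip boundary every time it is used, and that shuttling is where the time and energy go.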

The Mythic chip takes a fundamentally different approach to this challenge. We use a mixed-signal computing approach that combines analog and digital components to compute the operations needed for a DNN.

Mythic’s chip gains a huge advantage by storing weights on flash transistors and performing computations in an analog component. Analog components can perform arithmetic operations, such as addition and multiplication, in a small fraction of the area and power of digital components by taking advantage of the physical properties of the electronic components within the chip. Additionally, the system no longer needs to move those weights from a central memory location into the processing unit. These advantages create massive time and power savings, enabling the Mythic chip to act as a standalone coprocessor that can take data streaming in, perform inference, and stream results out without needing to leverage, block, or wait on system resources.
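
As a loose illustration of the idea (this is a generic sketch of analog in-memory computation, not Mythic’s actual circuit design), storing a weight as a conductance means Ohm’s law does the multiply and current summation on a wire does the accumulate, so a whole matrix-vector product falls out of the physics of the array:

```python
import numpy as np

# Generic sketch of analog in-memory compute (not Mythic's actual design):
# each weight is programmed as a conductance, inputs arrive as voltages, and
# I = G * V plus current summation on a wire perform the multiply-accumulates.
rng = np.random.default_rng(0)
weights = rng.standard_normal((8, 16))   # hypothetical small weight matrix
activations = rng.standard_normal(16)    # inputs driven as voltages

conductances = weights                   # imagine flash cells holding these values
voltages = activations
currents = conductances @ voltages       # the physics performs this product

# Analog circuits aren't exact, so model a little read noise on the result.
measured = currents + rng.normal(scale=0.01, size=currents.shape)

print(np.max(np.abs(measured - weights @ activations)))  # small error vs. exact math
```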

The Software Side of Things

On the software side, we need to ensure usability. This means users need to be able to use existing frameworks (e.g., TensorFlow, Caffe, PyTorch) for building and training DNNs. Users also can’t be restricted to a particular type of network, particular layer types, or a specific orchestration of sub-networks. To keep it simple for the users, the Mythic team needs to get a bit complex: to take an arbitrary DNN and enable it to run on this chip, the Mythic compiler needs to be able to decompose the DNN and optimize it as it compiles a binary that will run on the chip.

To do this, we created a new kind of compiler that:

  • Takes advantage of the deterministic nature of inference
  • Embeds a Data Flow Simulator into the Compiler toolchain
  • Reduces many of the challenges into graph analysis and numerical optimization
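
To make that last point a bit more concrete, here is a toy sketch (the graph, layer names, and cycle counts are all invented for illustration) of how a fixed, deterministic inference graph lends itself to static graph analysis and a simple data flow simulation:

```python
from graphlib import TopologicalSorter

# Toy example: because inference is deterministic (the graph, shapes, and
# weights are fixed), a compiler can analyze and simulate the data flow
# statically, before anything runs on hardware.
graph = {                 # node -> the nodes it depends on (a tiny DNN as a DAG)
    "conv1": [],
    "conv2": ["conv1"],
    "skip":  ["conv1"],
    "add":   ["conv2", "skip"],
    "fc":    ["add"],
}
cost = {"conv1": 5, "conv2": 7, "skip": 1, "add": 1, "fc": 3}  # pretend cycle counts

order = list(TopologicalSorter(graph).static_order())  # a legal execution order

# A minimal "data flow simulator": a node can start once its slowest input is done.
finish = {}
for node in order:
    ready = max((finish[dep] for dep in graph[node]), default=0)
    finish[node] = ready + cost[node]

print(order)          # e.g. ['conv1', 'conv2', 'skip', 'add', 'fc']
print(finish["fc"])   # statically predicted end-to-end latency: 16
```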

Putting it Together and See You Soon

Building this combination of hardware and software requires some very cool technology, processes, infrastructure, tools, and development approaches. The goal of this blog is to let our folks start a conversation with you. We’ll share how we are learning to solve interesting problems at Mythic and dive into all sorts of related technical concepts along the way. We hope you learn something useful, and we’d love to hear your feedback.

I hope you’ll be back to visit as we start to fill up this space!

-Tyler
