## Diving Into DNNC | Towards AI

# Deep Learning with DNN Compiler

## PART-1

# What’s DNN Compiler?

The Deep Neural Network Compiler (DNNC) is an ahead-of-time (AOT) compiler and inference framework. Getting started is easy, since it has only two core objects:

**tensors |**🔳🔲**operators**| ➕➖✖➗

# So what is a tensor?

A tensor is simply an **array of numbers** that can be transformed according to algebraic rules. A tensor is defined by three parameters: **datatype**, **rank**, and **shape**.

**Rank:** The dimensionality of a tensor is called its rank. In the picture below, you see 1D, 2D, 3D, and 4D tensors.

**Shape:** The shape of a tensor is defined by the number of rows, columns, depth, and so on. In the picture below, the shape of a 1D tensor is (3,), the shape of a 2D tensor is (3,3), the shape of a 3D tensor is (3,3,3), and so on.

**Datatype:** The datatype describes how each basic element is represented. Datatypes range from 8-bit integers to 64-bit floats and everything in between.
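DNNC's three tensor parameters mirror the conventions of NumPy arrays (its NN operators are described later as NumPy-like). As a point of reference only, and not DNNC's own API, here is how the same three parameters look on a NumPy array:

```python
import numpy as np

# A 2x3 array of 64-bit floats, analogous to a rank-2 DNNC tensor.
a = np.array([[2, 4, 6], [1, 3, 5]], dtype=np.float64)

print(a.ndim)   # rank: dimensionality of the tensor -> 2
print(a.shape)  # shape: rows and columns -> (2, 3)
print(a.dtype)  # datatype: how each element is represented -> float64
```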

**So how do we create tensors?**

**ZERO Tensor:**

```
>>> dc.zeros(3,3)
[[0.000000 0.000000 0.000000]
 [0.000000 0.000000 0.000000]
 [0.000000 0.000000 0.000000]]
```

**ONE Tensor:**

```
>>> a = dc.ones(2,2)
>>> a
[[1.000000 1.000000]
 [1.000000 1.000000]]
```

**Tensor from Python List:**

```
>>> python_list = [[2,4,6],[1,3,5]]
>>> a = dc.array(python_list)
>>> a
[[2.000000 4.000000 6.000000]
 [1.000000 3.000000 5.000000]]
```

**Tensor rank:**

```
>>> a.rank()
2
```

**Tensor shape:**

```
>>> a.shape()
(2, 3)
```

There are a ton of other methods for advanced users. To get the list, hit *&lt;tab&gt;* after the *dot* on a tensor variable, like **a.&lt;tab&gt;**. See the table below:
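Tab completion works because Python exposes an object's methods through introspection; a quick, library-agnostic way to get the same list programmatically is `dir()` (shown here on a plain Python list, since the mechanism is the same for any object, DNNC tensors included):

```python
# dir() lists an object's attributes and methods; IDE and notebook
# tab completion is built on the same introspection mechanism.
methods = [m for m in dir([]) if not m.startswith('_')]
print(methods)
# ['append', 'clear', 'copy', 'count', 'extend', 'index',
#  'insert', 'pop', 'remove', 'reverse', 'sort']
```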

# Operators

DNNC offers two different flavors of operators: Pythonic operators and NumPy-like NN operators (exposed as functions).

## Pythonic Operators

If you are familiar with Python, you have nothing new to learn. Start using DNNC with the Python operators +, -, *, /, %, etc. on DNNC tensors. You can even mix and match Python scalars with DNNC tensors. There is nothing foreign about using DNNC operators if you already use Python. For example:

```
>>> python_list = [[2,4,6],[1,3,5]]
>>> a = dc.array(python_list)
>>> a
[[2.000000 4.000000 6.000000]
 [1.000000 3.000000 5.000000]]
>>> y = a + 2  # <<<-----------------
>>> y
[[4.000000 6.000000 8.000000]
 [3.000000 5.000000 7.000000]]
```

In the snippet above, notice the last addition, **y = a + 2**, where **a** is a **2x3 tensor** and **2** is a *scalar*. Also, **a** has type *float* and **2** is an *int*. DNNC works hard behind the scenes to understand user intent, using an implicit conversion engine and a broadcasting mechanism to carry out the operation.

DNNC supports every operator Python offers, so users can rest their trust in DNNC and focus on their machine learning algorithm.
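The implicit conversion and broadcasting described above follow the same semantics NumPy uses, so the behavior can be sketched with NumPy (only the semantics are illustrated here, not DNNC's own code):

```python
import numpy as np

a = np.array([[2.0, 4.0, 6.0], [1.0, 3.0, 5.0]])  # 2x3 float tensor

# The int scalar 2 is implicitly converted to float and
# broadcast across every element of the 2x3 array.
y = a + 2

print(y)
# [[4. 6. 8.]
#  [3. 5. 7.]]
```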

## NN operators

NN operators offer the functionality of complex machine learning operations like convolution with bias, Gemm, etc. The interface of NN operators is as simple as using a package like NumPy. Like the Pythonic operators, NN operators use the implicit conversion engine and broadcasting mechanism to enhance the user experience. NN operators also make development faster, with quick help shown in all Python IDEs (like PyCharm, Visual Studio, etc.) and notebooks (Jupyter, Colab).
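To make concrete what a Gemm-style operator computes, here is a minimal NumPy sketch of the standard ONNX Gemm definition, `alpha * (A @ B) + beta * C`. The `gemm` function name and signature here are illustrative only; DNNC's actual operator interface may differ:

```python
import numpy as np

def gemm(A, B, C, alpha=1.0, beta=1.0):
    """General matrix multiply with bias: alpha * (A @ B) + beta * C."""
    return alpha * (A @ B) + beta * C

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, 6.0], [7.0, 8.0]])
C = np.ones((2, 2))  # bias term, broadcast against A @ B

print(gemm(A, B, C))
# [[20. 23.]
#  [44. 51.]]
```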

DNNC supports over 140 NN operators and is fully compliant with ONNX 3.0, release 1.5. Here is a partial list of the operators already part of DNNC.

**How do I try it out?**

You can start with this Google Colab notebook with no installation, or download and install this open-source compiler and framework from github.com/ai-techsystems/dnnCompiler to start contributing.

# One More Thing! ☝

I wanted to save some of your excitement for the next post, which covers the performance and scalability of DNNC. You can deploy your deep learning models with unparalleled performance to tiny devices like **Raspberry Pi, ODROID, Arduino, and microcontrollers**, and using it is as easy as using its Python interface.