Introducing NimTorch

Fragcolor · Sep 24, 2018

PyTorch — Python + Nim

When it comes to machine learning frameworks, you have plenty of choices, and your choice will influence prototyping speed, performance, and ease of deployment. But you are mostly stuck with Python.

Python is great: iteration is fast, the ecosystem is big, and everybody knows it. You will also find great communities, like PyTorch's.

But it comes with a number of issues that remain unsolved:

  • Python is interpreted. It cannot keep up with the raw performance of native applications, and our ultimate goal is to integrate machine learning and inference into real-time applications.
  • Deployment often means porting research code to native code, to a second, incompatible framework, or to convoluted inference platforms. We would like a portable solution that scales to a cloud of GPUs and works on mobile devices or in your browser.
  • Interoperability with native libraries and ecosystems is nearly impossible.
  • Python-based frameworks are typically a wrapper around a native backend. This hampers extensibility, and since you end up with both the ugliness of C++ and an untyped scripting language, maintenance costs explode.

NimTorch is our effort to address this.

Jump to the repository: https://github.com/fragcolor-xyz/nimtorch

Nim to the rescue

You can’t have the productivity of Python and the performance of C. Or can you?

Enter Nim: a native language that compiles to C/C++, with a Python-like syntax and all the productivity tools you could wish for:

  • Terrific meta-programming and a strong type system (a small taste follows this list)
  • Creating and running a *.nim file is just as fast as a *.py
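
As a small taste of that combination, here is a standalone Nim snippet: Python-like syntax, types inferred at compile time, and a template that is expanded with zero runtime cost.

```nim
import std/sequtils

# Templates are expanded at compile time, like type-checked macros.
template doubled(x: untyped): untyped =
  x * 2

let
  numbers = @[1, 2, 3, 4]            # seq[int], inferred at compile time
  twice = numbers.mapIt(doubled(it))

echo twice                            # @[2, 4, 6, 8]
```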

NimTorch aims to give you all of the above benefits while making you feel comfortably at home. Here is what a simple training loop looks like in both frameworks:

NimTorch
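
A minimal sketch, assuming the API mirrors PyTorch's as the project intends (`randn`, `matmul`, `pow`, `mean`, `backward`, and `grad` are all assumptions based on that goal, not verified calls):

```nim
import torch

# Toy linear regression: fit w so that x.matmul(w) approximates y.
let
  x = torch.randn(@[32, 3])          # 32 samples, 3 features
  y = torch.randn(@[32, 1])          # 32 targets

var w = torch.randn(@[3, 1], requires_grad = true)

for epoch in 0 ..< 100:
  let
    pred = x.matmul(w)
    loss = (pred - y).pow(2.0).mean()

  loss.backward()                     # populate w.grad

  # Plain SGD step; the exact update/zeroing idiom would follow
  # whatever NimTorch exposes, just as in PyTorch.
  w = w - w.grad * 0.01
```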

PyTorch: the equivalent Python loop reads almost line for line the same, which is exactly the point.

Not feeling like writing all those nasty characters? This works too:
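
One reading of that, under the same assumed API: Nim's command-call syntax lets you drop most of the brackets and parentheses, so calls read almost like plain words.

```nim
import torch

let a = torch.randn(@[4, 4])

# These two lines are the same call; the second uses
# Nim's command syntax, no parentheses needed.
echo a.sum()
echo sum a
```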

This also extends to the quality of the re-implemented backend code. Check how much cleaner PyTorch's C++ matmul code looks after being ported to Nim. We hope this encourages people to contribute and to fiddle with the backend more!

More than a wrapper

NimTorch uses ATen, the same native tensor library that powers PyTorch, without any language or runtime glue in between. This keeps our maintenance cost minimal and lets us focus on completeness and sexiness.

Thanks to the great work of the PyTorch team, we are able to automatically generate most of the tricky bits from their declarations, just as they do for Python. The autograd system uses many generated definitions like this one:
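
A hypothetical sketch of such an entry (the `autograd` macro name, the `sin_impl` kernel, and the bare `grad` symbol are illustrative assumptions): a forward proc plus one gradient expression per differentiable input.

```nim
# Hypothetical generated entry for sin: the forward pass computes
# sin(a), and the gradient w.r.t. a is grad * cos(a).
autograd sin:
  proc forward*(a: Tensor): Tensor =
    a.sin_impl()                      # native ATen kernel (assumed name)
  a: grad * a.cos()                   # derivative: d/da sin(a) = cos(a)
```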

which is a Nim DSL that roughly expands to the following:
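
A sketch of that expansion, again with illustrative names (`requiresGrad`, `registerBackward`): an exported forward proc that registers a backward closure whenever its input needs gradients.

```nim
# Illustrative expansion of the entry above.
proc sin*(a: Tensor): Tensor =
  result = a.sin_impl()               # native ATen forward (assumed name)
  if a.requiresGrad:
    result.registerBackward(
      proc (grad: Tensor): Tensor =
        grad * a.cos()                # backward: chain rule applied here
    )
```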

This is not hard-coded. You can just as easily define your own differentiable functions.
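
Under those same assumptions, a user-defined differentiable function would look symmetric to the generated ones:

```nim
# Hypothetical user-defined op: square, with derivative 2 * a.
autograd square:
  proc forward*(a: Tensor): Tensor =
    a * a
  a: grad * 2.0 * a
```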

Together, NimTorch and ATen compile for basically every platform, from WASM to game consoles.

From now on

  • Improve NimTorch and try as hard as possible to follow its big brother, PyTorch
  • Add more examples and baselines of common machine learning algorithms
  • Optimizations and benchmarks, PyTorch vs NimTorch vs Glow, etc.
  • Possibilities for backend optimizations, e.g. through Glow (https://github.com/pytorch/glow)
  • Efforts towards Python-like interactivity: https://github.com/nim-lang/Nim/issues/8927

Questions

  • ask us
  • Gitter
  • Giovanni Petrantoni will be at the PyTorch conference in San Francisco next week

About us

Fragcolor started as a simple GitHub group, mostly to gather, merge, and share knowledge and goals, and has since become a start-up with many ideas and projects in the planning. Stay tuned!

Written by Fragcolor: https://github.com/fragcolor-xyz
