Lemmings partners with Amazon Web Services and Google Cloud Platform

Thomas Schranz
Dec 10, 2017 · 4 min read
Artemis & Kira at sektor5 co-working spaces during the first Lemmings batch.

A few days ago Google DeepMind’s AlphaZero beat the world’s leading chess engine, Stockfish, after just four hours of training on Google’s Tensor Processing Units (TPUs). It is just one of many examples of how a general machine learning approach can, within hours, outperform specialized software that has been fine-tuned over many years.

Yet most fascinating to me is that the technology behind extraordinary achievements like these is not locked away in a walled garden. It is readily available to all of us.

Most of the leading machine learning tools are open source and well documented. The data needed for machine learning is more accessible than ever before, and even the hardware is becoming ever more cost-effective. We can all play around with the same ingredients that big players like Google, Amazon, Facebook and Apple do.
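To make that concrete, here is a minimal sketch of what playing around with those ingredients can look like, using scikit-learn (just one of many well-documented open source options; the dataset and model choice here are purely illustrative):

```python
# A minimal, illustrative sketch: train a handwritten-digit classifier
# with scikit-learn, one of many open source machine learning tools.
# Dataset and model choice are placeholders; any major framework would do.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)          # small 8x8 digit images bundled with the library
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0)
model.fit(X_train, y_train)                  # trains in seconds on an ordinary laptop
print("test accuracy:", model.score(X_test, y_test))
```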

From “mobile first” to “artificial intelligence first”

For early stage teams this is amazing. If you have an idea, playing around with it and getting results is fast, and there are few barriers. Machine learning is an open field, with countless problem domains beyond chess to apply it to.

While the previous technological shift was largely about making services more accessible by ensuring they are also available on mobile (“mobile first”), we are now in a shift focused on making more sense of existing data through machine learning (“artificial intelligence first”).

Having access to machine learning tools, data and hardware is great, but for very early stage teams (pre-product, pre-market, pre-funding …) there is still a huge barrier: infrastructure cost.

While it only takes a few hours to run an experiment similar to AlphaZero’s chess training, it can be prohibitively expensive for an early stage team to do so unless it has a few thousand dollars to burn.
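To get a feel for the numbers, here is a rough back-of-envelope sketch. Every figure in it is a placeholder assumption chosen for illustration, not a quoted cloud price:

```python
# Back-of-envelope estimate of a month of GPU experiments in the cloud.
# All numbers below are assumptions for illustration only.
gpus = 8                 # one multi-GPU training machine
usd_per_gpu_hour = 3.0   # assumed hourly rate per GPU
hours_per_run = 12       # one overnight training run
runs_per_month = 30      # roughly an experiment a day

monthly_cost = gpus * usd_per_gpu_hour * hours_per_run * runs_per_month
print(f"~USD {monthly_cost:,.0f} per month")   # ~USD 8,640 under these assumptions
```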

This puts early stage teams at a disadvantage.

What if a painter did not have to pay for paint

Imagine an ambitious painter. She pours all her heart into filling one canvas after another. Yet every canvas, every brush and every pot of paint is incredibly expensive. It puts a lid on what she can express. Every mistake is costly. Every idea gets scrutinized. Every stroke is restrained.

What could the painter do if canvases, brushes, paint and storage were not only readily available but free? What if her studio automagically scaled with her efforts? What could she do?

Lemmings visiting “Hello, Robot.” at the MAK during the third batch.

We asked ourselves what Lemmings could do if their imagination were not artificially constrained by infrastructure cost. What if they did not have to restrict their thinking to the GPUs in their laptops? What if they did not have to worry about purchasing hardware that becomes obsolete within weeks?

Amazon Web Services and Google Cloud Platform

I’m glad that we are not the only ones who are curious to see what ambitious teams can do with machine learning on modern infrastructure.

We are partnering with Amazon and Google to provide every Lemmings team with USD 100,000 for Amazon Web Services as well as USD 100,000 for Google Cloud Platform.

On top of this, all of our teams get access to first-class 24/7 support as well as training provided directly by each platform partner.

This not only means a massive reduction in infrastructure and tech support costs for our early stage teams.

It means unleashing the mind.

Further reading

https://aws.amazon.com/blogs/aws/new-amazon-ec2-instances-with-up-to-8-nvidia-tesla-v100-gpus-p3/

https://aws.amazon.com/blogs/ai/new-aws-deep-learning-amis-for-machine-learning-practitioners/

https://chess.stackexchange.com/questions/19366/hardware-used-in-alphazero-vs-stockfish-match

https://news.ycombinator.com/item?id=15556789

https://spectrum.ieee.org/tech-talk/robotics/artificial-intelligence/alphago-zero-goes-from-blank-slate-to-grandmaster-in-three-dayswithout-any-help-at-all

https://cloud.google.com/blog/big-data/2017/05/an-in-depth-look-at-googles-first-tensor-processing-unit-tpu

https://news.ycombinator.com/item?id=15869083

https://cloudplatform.googleblog.com/2017/09/introducing-faster-GPUs-for-Google-Compute-Engine.html

https://cloudplatform.googleblog.com/2017/11/new-lower-prices-for-GPUs-and-preemptible-Local-SSDs.html

https://www.nvidia.com/en-us/data-center/dgx-1/

http://images.nvidia.com/content/technologies/volta/NVIDIA-Volta-GPU-Architecture_The-Future-of-AI.pdf

https://en.wikipedia.org/wiki/Tensor_processing_unit

Lemmings

Incubator focused on Art and Artificial Intelligence
