We’ve Got Love For HPE

AngelHack
Aug 14, 2017

It’s hard to believe that this series has come to a close. Just a few weekends ago we brought our 10th Global Hackathon Series back to where it all started: Silicon Valley! This was one of our biggest events of the year, and of course with an event this size, we needed amazing sponsors to bring it to life.

Without a doubt, when we thought of all the amazing sponsors on the list, the team at Hewlett Packard Enterprise (HPE) immediately came to mind. These guys have been busy these past few years working on systems such as Memory-Driven Computing (MDC), and here at AngelHack we felt we should let you know a little more about them.

So you may be scratching your head thinking: what is MDC, and why is it important to me? (Don’t worry, so was I!) Basically, MDC is HPE’s long-term vision, tied to their “Everything Computes” narrative, and we can think of it as an infrastructure able to run Machine Learning models and algorithms.

The team at HPE has been hard at work since 2016 getting what they call The Machine up and running. The Machine is an ongoing “research project” that provides a framework for the next steps and future proof points of their narrative.

Now you may be wondering: how does this affect you as a developer, designer or entrepreneur? Well, as we mentioned briefly before, MDC can impact the field of Machine Learning in various positive ways. Below are the four areas where MDC can affect ML.

Efficiency

We can perform different tasks on a data set simultaneously without copying it, performing analytics on the “now” as opposed to traditional approaches where data from a transactional system must be copied to another place, or even another system, for analysis. That approach means insight from data that is, by definition, out of date. An example of this is performing complex analytics on a live data set where new data is streaming in all the time. Imagine live fraud detection on, say, online auctions, even during the milliseconds before the auction closes.
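To make that concrete, here’s a minimal Python sketch of the idea (our own toy illustration, not HPE code): the fraud check reads the very same in-memory structure the transactional path writes to, so there’s no copy step and no stale snapshot. All names and the scoring rule are hypothetical.

```python
import statistics
import time

# Toy illustration, not HPE code: one in-memory store that both the
# transactional write path and the analytics read path share, with no
# ETL copy in between. All names here are hypothetical.
live_bids = []  # stand-in for a shared memory pool

def record_bid(auction_id, amount):
    """Transactional path: append a bid the instant it arrives."""
    live_bids.append({"auction": auction_id, "amount": amount, "ts": time.time()})

def looks_fraudulent(auction_id, amount, z_threshold=3.0):
    """Analytics path: score a new bid against the *live* data set.

    Because this reads the same memory the transactions write to, the
    score reflects bids placed milliseconds ago, not a stale copy.
    """
    amounts = [b["amount"] for b in live_bids if b["auction"] == auction_id]
    if len(amounts) < 3:
        return False  # not enough history to judge
    mean = statistics.mean(amounts)
    stdev = statistics.pstdev(amounts)
    return stdev > 0 and abs(amount - mean) / stdev > z_threshold

record_bid("lot-42", 100)
record_bid("lot-42", 105)
record_bid("lot-42", 98)
print(looks_fraudulent("lot-42", 5000))  # True: wildly out of line with history
```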

Or imagine a multi-layer system, all built on a common foundation, with each layer being more strategic than the last:

A single piece of manufacturing equipment, say a silicon etch machine, optimizing its own operation

A manufacturing execution system for a chip fab, increasing factory output by learning optimum routing and predictive maintenance schedules

A whole-enterprise supply chain, vertically integrating all manufacturing layers

A customer viewpoint, optimizing supply chain operations against individualized customer predictions.

Flexibility

One increasingly common way to speed up training and inference is with custom chips optimized for one task, say machine vision. This is great for unchanging problems: the algorithm is fast and efficient because it’s literally set in stone. Through task-specific processing and an infinitely flexible architecture, MDC enables the same hardware to approach the speed of custom silicon while remaining reconfigurable for many different tasks by altering the balance of compute elements and memory (see the sketch after the notes below).

The architecture also allows the point where analytics occurs to be moved between core and edge to achieve the right balance.

Accelerators need not be Von Neumann designs: Dot Product Engines (DPEs), DSPs, etc.

Ideal system for experimenting — plasticity, ask it anything!

Assemble flexible systems for use in places where you can’t physically reconfigure to take on new tasks — e.g. spacecraft
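Here’s the sketch promised above: a toy Python stand-in (not a real HPE API; the profiles and numbers are invented) for carving task-shaped slices out of one shared pool of compute elements and memory, rather than buying task-specific silicon.

```python
from dataclasses import dataclass

# Hypothetical sketch of rebalancing compute elements and memory per task,
# in the spirit of MDC's reconfigurability. Not a real HPE API; all the
# profiles and numbers are invented for illustration.
@dataclass
class Composition:
    compute_units: int
    memory_tb: int

PROFILES = {
    # vision inference: lots of parallel compute, modest memory
    "machine_vision": Composition(compute_units=64, memory_tb=4),
    # graph analytics: few cores, a huge shared memory pool
    "graph_analytics": Composition(compute_units=8, memory_tb=96),
}

def compose_for(task, pool_compute=64, pool_memory_tb=100):
    """Carve a task-shaped slice out of one shared pool: a software
    stand-in for reconfiguring hardware rather than swapping silicon."""
    want = PROFILES[task]
    return Composition(compute_units=min(want.compute_units, pool_compute),
                       memory_tb=min(want.memory_tb, pool_memory_tb))

print(compose_for("machine_vision"))   # Composition(compute_units=64, memory_tb=4)
print(compose_for("graph_analytics"))  # Composition(compute_units=8, memory_tb=96)
```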

Previously intractable problems

Sometimes algorithms exist to solve a challenge, but aren’t workable because the technology is lacking. Once some threshold is passed, the impractical becomes practical and suddenly there’s no point in doing it the old way. For example, Monte Carlo simulations can be sped up by several orders of magnitude using an in-memory approach, but you need 100TB of fast memory for this technique to be usable on real financial instruments. We’ve demonstrated that improvement using MDC.
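As a toy-scale illustration of that in-memory Monte Carlo idea (our own sketch, not the HPE demonstration): generate the random paths once, hold them in memory, and re-price many what-if scenarios against the same cached paths instead of regenerating them every time.

```python
import numpy as np

# Toy-scale sketch of the in-memory Monte Carlo idea (our illustration,
# not the HPE demonstration): generate the random paths once, keep them
# in memory, and re-price many scenarios against the same cached paths.
rng = np.random.default_rng(0)
N_PATHS, N_STEPS = 100_000, 252
shocks = rng.standard_normal((N_PATHS, N_STEPS))  # cached once, reused

def price_call(spot, strike, vol, rate=0.01, dt=1 / 252):
    """Re-price a European call by replaying the cached shocks. A 100TB
    pool would let this trick cover realistic books of instruments;
    here the same idea is shown at toy scale."""
    drift = (rate - 0.5 * vol ** 2) * dt
    paths = spot * np.exp(np.cumsum(drift + vol * np.sqrt(dt) * shocks, axis=1))
    payoff = np.maximum(paths[:, -1] - strike, 0.0)
    return np.exp(-rate) * payoff.mean()  # discount over the 1-year horizon

# Many what-if re-valuations reuse the in-memory paths; nothing is regenerated.
for vol in (0.1, 0.2, 0.3):
    print(f"vol={vol:.1f} -> price={price_call(100, 105, vol):.2f}")
```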

Or imagine doing deep learning training in an edge device. Today, that’s close to impossible to do accurately because the training data is too big to hold at the edge and the computing power is lacking. An MDC intelligent edge device with abundant non-volatile memory and a neuromorphic accelerator could make that a reality.

Another example is our “infinite what-if engine”, which uses abundant memory to optimize the responses of complex systems to external stimuli — say a volcano disrupting the air transport network — by constantly constructing, refining and pruning a vast tree of pre-computed action plans, “learning next to the brake pedal”.
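A loose Python sketch of what such a what-if engine could look like (entirely our own stand-in; the actions, scoring rule, and pruning policy are invented): grow a tree of contingency plans ahead of time, keep expanding the promising branches, and prune the rest.

```python
import heapq

# Entirely hypothetical sketch of a "what-if engine": pre-compute a tree
# of contingency plans, keep expanding the promising branches, prune the
# rest. The actions and scoring rule are invented stand-ins.

def score(plan):
    # Stand-in objective: prefer rerouting flights, penalise cancellations.
    return plan.count("reroute") - 2 * plan.count("cancel")

def expand(plan):
    """Generate candidate follow-on plans for one more disruption step."""
    for action in ("reroute", "delay", "cancel"):
        child = plan + (action,)
        yield child, score(child)

def precompute_plans(depth=5, keep=10):
    """Grow and prune the plan tree ahead of time, so the best response
    is already sitting in memory when the disruption actually hits."""
    frontier = [((), 0)]
    for _ in range(depth):
        candidates = [c for plan, _ in frontier for c in expand(plan)]
        # Pruning: retain only the `keep` highest-scoring branches.
        frontier = heapq.nlargest(keep, candidates, key=lambda c: c[1])
    return frontier

best_plan, best_score = precompute_plans()[0]
print(best_plan, best_score)  # five 'reroute' steps under this toy scoring
```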

Scalable performance

Certain deep learning algorithms (topologies) stress the communication of weights between nodes. This leads to poor time-scaling, i.e. training times do not decrease significantly as scale-out compute nodes are added. Using the ultra-fast communications fabric of MDC we can break that bottleneck. This points the way towards truly real-time model training and retraining, and blurs the line between training and inference.
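A back-of-envelope model (our own simplification, not HPE’s numbers) of why this happens: per-epoch time is the compute divided across nodes plus a weight-exchange cost that grows with node count, so adding nodes eventually makes things worse unless the communication term shrinks.

```python
# Back-of-envelope model (our own simplification, not HPE's numbers) of
# scale-out training time: compute shards across nodes, but the cost of
# exchanging weights grows with the number of nodes.
def epoch_time(nodes, compute_s=100.0, comm_per_node_s=2.0):
    return compute_s / nodes + comm_per_node_s * nodes

for comm in (2.0, 0.02):  # slow interconnect vs. an ultra-fast fabric
    label = "slow interconnect" if comm == 2.0 else "fast fabric"
    times = ", ".join(f"{n} nodes: {epoch_time(n, comm_per_node_s=comm):.1f}s"
                      for n in (1, 4, 16, 64))
    print(f"{label}: {times}")
```

Under the toy numbers, the slow interconnect makes 16 nodes already slower than 4 and 64 nodes slower than 1, while the fast fabric keeps paying off all the way out.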

Well, we know that was a lot of information to throw at you in one sitting, but we also know that it probably got your creative juices flowing about all the ways you can take advantage of this technology in the future.

We want to thank HPE and all of our attendees for making #AH10 Silicon Valley an amazing event! We can’t wait to see you all again next year!


AngelHack ignites the passion of the world's most vibrant community of code creators + change makers to invent the new and make change happen, together.