Approachable Programming

The Future of Computer Architecture and Programming

We, the programming community, have been spoiled by Moore's law for a long time. Most of our solutions today rely on applying brute force to our problems. With the end of Moore's law, we have a chance to take a fresh look at hardware and software architectures that radically simplify programming itself and, at the same time, take massive advantage of newly available parallelism.

The biggest problem in front of us is that programming is just too hard. Machine Learning has shown us a tantalizing new alternative: Declarative Programming. In the Machine Learning (ML) world we can "suddenly" solve problems that have until now been intractable, in a way turning programming into a data science and a sport for domain experts rather than just programmers.

Declarative Programming is about describing the problem and letting the computer figure out how to solve it. For seventy years, humans have written millions of lines of imperative code telling the computer how to do its job. Yet problems like language translation, face recognition and game playing remained stubbornly unsolved. Suddenly, by moving up a level, we declare the problem and the computer figures out how to translate, recognize and win. Machine Learning is one kind of Declarative Programming. In this paper, we talk about two others: one old and one new.
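One small, everyday illustration of the difference, as a hypothetical Python sketch (the names here are ours, not from any particular system): the imperative version spells out how to check a string step by step, while the declarative version hands a pattern to the regular-expression engine and lets it work out the matching.

    import re

    text = "order-1234"

    # Imperative: tell the computer *how* to check the format, step by step.
    def looks_like_order(s):
        if not s.startswith("order-"):
            return False
        digits = s[len("order-"):]
        return len(digits) > 0 and all(c.isdigit() for c in digits)

    # Declarative: describe *what* a valid string looks like; the regex
    # engine figures out how to match it.
    ORDER_PATTERN = re.compile(r"^order-\d+$")

    print(looks_like_order(text), bool(ORDER_PATTERN.match(text)))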

In his September 2018 talk at the O'Reilly Artificial Intelligence Conference, titled "A New Golden Age for Computer Architecture", David Patterson summarized the next big opportunities in computer architecture and programming languages.

His talk description says: “Innovations like reduced instruction set computers (RISCs), superscalar, and speculation ushered in a golden age of computer architecture, in which performance doubled every 18 months. Unfortunately, this golden age has ended: microprocessor performance improved only 3% last year.

The ending of Dennard scaling and Moore’s law and the deceleration of performance gains for standard microprocessors are not problems that must be solved but facts that if accepted, offer breathtaking opportunities.”

Today, instead of faster computers, software needs to take advantage of more computers. Historically, utilizing parallelism in this way has been really hard.

"The good news is that our ravenous ML colleagues have shown the way." Machine Learning systems taking advantage of GPU-based parallelism run at least 200 times as fast as their CPU-based peers. As a result, ML researchers at the forefront have been increasing their appetite for training computation by 10x per year.

The same will be true for other types of Declarative Programming. In fact, with Patterson's challenge for the future clearly in mind, the key is that Declarative Systems have two unique abilities:

1. taking advantage of parallelism,

2. working across multiple instruction sets.

“High-level, domain-specific languages and architectures and freeing architects from the chains of proprietary instruction sets will usher in a new golden age. David Patterson explains why, despite the end of Moore’s law, he expects an outpouring of codesigned ML-specific chips and supercomputers that will improve even faster than Moore’s original 1965 prediction. Like the 1980s, the next decade will be exciting for computer architects in academia and industry alike.”

This is exciting to say the least.

We see many companies, from Apple to Alibaba to Google, creating their own machine learning chips. Companies that care about software care about hardware. This time, however, there is more to it: across the industry, hardware and software teams are working together to deliver machine learning solutions. We see the same opportunity in other areas of Declarative Development: domain-specific hardware tailored to the needs of particularly broad classes of applications, leading to massive performance improvements and remarkable solutions to hard problems.

Programming is too hard. It is so hard that many problems simply cannot be solved with conventional, manually written imperative code. In fact, if we consider all the problems human brains solve, the diagram below suggests that most of them cannot be solved with conventional programming approaches at all.

What Machine Learning has demonstrated is that some of those impossible problems (winning at Go, recognizing faces, translating human languages) can be solved with no manual programming at all. As the diagram below shows, though, Machine Learning is best suited to a small set of problems centered on perception: object, image, and voice recognition. The question this paper addresses is: are there at least a few other areas that can be attacked with Declarative Development?

Impact: New Classes of Declarative Development

Coming back to David Patterson's talk, one of the points he made was that the opportunity lies in software and hardware working together on domain-specific problems to get 10X and greater performance. In his presentation, Patterson showed an example of Python code sped up 63,000X, based on the paper titled "There's Plenty of Room at the Top".

David Patterson — A New Golden Age for Computer Architecture

The first 47X improvement comes from translating Python into C, a statically compiled language. Then, by extracting parallelism, one can reach a 366X improvement. The problem with parallelism is that it is so hard to program that most programmers can't do it. The beauty of Declarative Systems is that parallelism becomes automatic.

By optimizing parallelism and memory usage, we can get to a 6,727X speedup. With a domain-specific hardware architecture, we can achieve a 63,000X improvement. Hardware and software working together is the key to the future. The hardware is the easy part; getting the software to take advantage of it is easy to say and very hard to do.
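To make the bottom of that ladder concrete, here is a hypothetical Python sketch in the spirit of the paper, whose running example is, as we recall, a dense matrix multiplication: the naive loop is the slow baseline, while the one-line declarative form lets the library pick a blocked, vectorized, and often multi-threaded implementation.

    import numpy as np

    n = 64
    A = np.random.rand(n, n)
    B = np.random.rand(n, n)

    # Baseline: a naive triple loop in pure Python, the kind of code the
    # optimization ladder starts from.
    def matmul_naive(A, B):
        n = A.shape[0]
        C = np.zeros((n, n))
        for i in range(n):
            for j in range(n):
                for k in range(n):
                    C[i, j] += A[i, k] * B[k, j]
        return C

    # Declarative alternative: state *what* we want (a matrix product) and
    # let the library choose a blocked, vectorized, multi-threaded routine.
    C_fast = A @ B
    assert np.allclose(matmul_naive(A, B), C_fast)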

Customized hardware takes many forms. Different instruction sets are the norm, but it often also makes sense to use lower-precision floating point or 8- to 16-bit integers instead of the 32- and 64-bit numbers found in the conventional CPU world. Memory hierarchies and I/O paths can be oddly different too, all in the name of rapid execution across large sets of operations. Adapting to all of this with conventional imperative languages is an exercise in futility. That is where Declarative Programming comes in. And, best of all, ML has paved the way here.
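As a rough, hypothetical illustration of the precision point (not tied to any particular accelerator), the same values can be held as 8-bit integers in a quarter of the memory, which is exactly the kind of representation many ML chips execute fastest:

    import numpy as np

    weights = np.random.rand(1024).astype(np.float32)

    # Quantize to 8-bit integers: a quarter of the memory, and a format many
    # accelerators execute far faster than 32- or 64-bit arithmetic.
    scale = weights.max() / 127.0
    quantized = np.round(weights / scale).astype(np.int8)

    print(weights.nbytes, quantized.nbytes)            # 4096 vs 1024 bytes
    print(np.abs(weights - quantized * scale).max())   # small quantization error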

The graph below shows the result of putting all of this together. The challenge is: how can we gain the same performance improvements in other domains?

If Machine Learning is one form of Declarative Programming, what are the other two? Query Processors (QPs) have been around since the 1970s. As we deal with petabytes of data, the ability to frame a problem as a query and have the QP figure out how to get the data becomes more important every year. As with Machine Learning, the Query Processor automatically takes advantage of massive parallelism to deal with more data than we ever thought possible.
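A minimal sketch of that contract, using SQLite from Python; the engines that actually run at petabyte scale parallelize the same kind of statement across many machines, but the declarative interface is identical:

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE events (user TEXT, kind TEXT, amount REAL)")
    db.executemany(
        "INSERT INTO events VALUES (?, ?, ?)",
        [("alice", "buy", 30.0), ("bob", "buy", 20.0), ("alice", "refund", -5.0)],
    )

    # We declare *what* we want; the query processor chooses the access paths,
    # the join order, and, in distributed engines, the degree of parallelism.
    rows = db.execute(
        "SELECT user, SUM(amount) AS total FROM events "
        "WHERE kind = 'buy' GROUP BY user ORDER BY total DESC"
    ).fetchall()
    print(rows)  # [('alice', 30.0), ('bob', 20.0)]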

Machine Learning and Query Processors have a common characteristic: each deals with a particular problem set or domain. Machine Learning is about pattern recognition; Query Processing is about information retrieval. Declarative Systems, then, are Domain Specific. Not only does this allow software problem solving; it also paves the way for Domain Specific Hardware.

If Machine Learning and Query Processors are the first two domains, what is the third? The third is the Decision Engine. A Decision Engine executes sets of policies and rules. In a conventional program, programmers write huge amounts of code to decide if and when particular actions should be taken. With a Decision Engine, all the decisions about when and in what order to take actions are made by the engine (see the sketch after the list below). This has three large implications:

1. Eliminating 50–75% of the code in conventional applications,

2. Continuous optimization to meet business goals becomes possible,

3. Since actions and rules are fired by the engine, they can be run in parallel.
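Here is the deliberately tiny, hypothetical sketch promised above: the application only declares policies as condition/action pairs, and the engine, not the programmer, decides which rules fire; nothing in the declarations prevents the engine from evaluating them in parallel.

    # Each rule is a named (condition, action) pair over a shared set of facts.
    RULES = [
        ("flag_large_order", lambda f: f["amount"] > 1000,
         lambda f: f.setdefault("actions", []).append("manual_review")),
        ("free_shipping", lambda f: f["amount"] > 100,
         lambda f: f.setdefault("actions", []).append("free_shipping")),
        ("loyalty_discount", lambda f: f["customer_years"] >= 3,
         lambda f: f.setdefault("actions", []).append("apply_discount")),
    ]

    def decide(facts):
        """The engine, not the application, decides which rules fire."""
        for name, condition, action in RULES:
            if condition(facts):
                action(facts)
        return facts.get("actions", [])

    print(decide({"amount": 1500, "customer_years": 4}))
    # ['manual_review', 'free_shipping', 'apply_discount']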

Having a Decision Engine based on sets of Policies and Rules actually completes a cycle. With Machine Learning, we can recognize and perceive the world around us. These perceptions are fed into a database. The Query Processor then allows us to ask questions and understand what we are perceiving. Now add in a Decision Engine that can act on the world around it, and we have a complete system.
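A purely illustrative Python sketch of that cycle, with the perception step stubbed out where a real system would call a trained model:

    import sqlite3

    def perceive(image):
        """Stand-in for an ML model: report what the system 'sees'."""
        return {"label": "person", "confidence": 0.97}

    # 1. Perception: results from the (stubbed) model land in a database.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE sightings (label TEXT, confidence REAL)")
    obs = perceive(image=None)
    db.execute("INSERT INTO sightings VALUES (?, ?)",
               (obs["label"], obs["confidence"]))

    # 2. Understanding: a declarative query summarizes what was perceived.
    count = db.execute(
        "SELECT COUNT(*) FROM sightings WHERE label = 'person' AND confidence > 0.9"
    ).fetchone()[0]

    # 3. Behavior: a rule decides what action to take.
    if count > 0:
        print("unlock_door")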

Just as Machine Learning and Query Processors allow experts, not only programmers, to specify problems for the computer to solve, so now does the Decision Engine. Perception, Understanding, and now Behavior: all three are data driven and share data with each other. Together they form a system that is not only complete but nicely integrated.

Three Declarative Engines Working Together in an Integrated System

By itself, Machine Learning is already exciting enough. The challenge is that it addresses a very limited set of problems. At the same time, we have recently had to come to terms with the fact that individual processors are no longer getting faster. Parallelism seems promising, but conventional programming simply cannot make it work. In this paper we have shown an exciting path to the future, not only of hardware and software, but, even more, to a future where the two really work together.

Programming has been too hard for too long. ML has offered us a glimpse, if only we would see it, of how to cut programming down to size. No longer will highly technical programmers have to tell computers how to do their work in tedious detail. Instead, Declarative Development paves the way to a better future: describe the problem and let the computer figure out how to solve it. That is how Machine Learning already works. That is how Query Processors have worked for fifty years now. And, as we move ahead, Decision Engines will allow us to specify the behavior of our systems in the same way.

Working together, these three classes of Declarative Development engines tackle three important domains: perception, understanding, and decisions. All three engines are Declarative, all three are built to take advantage of parallelism, and, working together, they allow us to build many classes of complete applications. Best of all, programming finally becomes approachable by more than just programmers.