As we’re starting to build a course for learning practical AI, I’m seeing a conceptual path from thinking about software as something that runs an explicit set of steps, to something that tries a bunch of stuff, to understanding the mechanics of more sophisticated machine learning.
I’m finding myself drawn into generative systems — they seem to be a good stepping stone.
Jeff Bezos explains machine learning this way:
Over the past decades computers have broadly automated tasks that programmers could describe with clear rules and algorithms. Modern machine learning techniques now allow us to do the same for tasks where describing the precise rules is much harder.
Have you ever used the Goal Seek function in Excel? You create a spreadsheet with a bunch of interconnected formulae, then ask Excel to figure out what the input values need to be to hit a certain output value. For example, if you have a spreadsheet that calculates the budget for an event, with certain costs like extra rooms being dependent on the number of tickets sold, you could ask Goal Seek to tell you how many tickets you need to sell to make $10,000 in profit.
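Under the hood, Goal Seek is just nudging an input and checking whether the output has reached the target. Here's a rough sketch of that idea in Python — all the numbers (ticket price, room costs) are made up for illustration, and the search is much cruder than Excel's actual numeric method:

```python
def profit(tickets):
    """Hypothetical event budget: revenue minus fixed and room costs."""
    revenue = tickets * 50            # $50 per ticket (assumed)
    rooms = -(-tickets // 100)        # one extra room per 100 attendees
    costs = 2000 + rooms * 800        # $2,000 fixed + $800 per room (assumed)
    return revenue - costs

def goal_seek(f, target, start=0, stop=10_000):
    """Try inputs one by one until the output reaches the target."""
    for x in range(start, stop):
        if f(x) >= target:
            return x
    return None

tickets_needed = goal_seek(profit, 10_000)
print(tickets_needed)   # → 288
```

One formula, one knob to turn, one goal — that's the whole trick.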
That fits the goal-directed part of Daniel’s definition of intelligence, though it’s not very adaptive. But from there, we can look at generative systems.
Jon Bruner at O’Reilly writes:
Alastair Dant, Riffyn’s lead interactive developer, showed me a tongue-in-cheek experiment he cooked up that searches for the best coffee, as measured in lines of code written by his engineers. Riffyn presents its user with a flow chart that reads left to right; coffee beans and engineers go in, and lines of code come out. Each step in the process has its own node — grind coffee, brew coffee, serve coffee — and its own inputs and outputs that make up a “genealogy” linking the steps together. The experimenter switches out the variety of beans and sees whether engineer output changes.
That’s all normal science: isolate individual variables that might have an effect on some measurement, and test variations one by one.
Now, it’s not just Excel tweaking a few cells up and down to see if it’s getting closer to the answer you want, but a system that tries lots and lots of little variations.
This kind of genetic algorithm was used by NASA in 2006 to design a specialized antenna about the size of a paperclip. The challenge was to optimize its shape for a set of unusual operating parameters: what's the perfect way to bend a paperclip-sized antenna so that three satellites on a unique trajectory, operating with a particularly wide beam width, can talk to ground stations on Earth? Well, let's write a program that can try different shapes and simulate those conditions. It's not just about trying permutations but creating them: the program generated mutations, simulated them, kept the better ones, and kept mutating. The result is now called an evolved antenna.
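The mutate–simulate–select loop is simple enough to sketch in a few lines. Here a candidate "design" is just a list of bend angles, and a made-up fitness function (how close the angles sum to 360°) stands in for the electromagnetic simulator NASA actually used:

```python
import random

random.seed(0)

def fitness(angles):
    """Toy objective standing in for a real antenna simulation: 0 is perfect."""
    return -abs(sum(angles) - 360)

def mutate(angles, scale=10.0):
    """Randomly nudge each bend angle a little."""
    return [a + random.uniform(-scale, scale) for a in angles]

def evolve(generations=200, pop_size=20, n_bends=5):
    # start from random designs
    population = [[random.uniform(0, 90) for _ in range(n_bends)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[:pop_size // 2]                    # keep the better ones
        population = survivors + [mutate(s) for s in survivors]   # and keep mutating
    return max(population, key=fitness)

best = evolve()
print(round(sum(best), 1))   # very close to 360
```

Nobody tells the program what a good design looks like in advance; the shapes emerge from the loop.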
This adds adaptation to our idea of a computer program, and from there we can step into concepts like neural networks, which can be thought of as millions of small programs that interact and update themselves.
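To make that "small program that updates itself" concrete, here's a single artificial neuron learning the logical AND function with the classic perceptron rule — every time it gets an example wrong, it nudges its own weights. A neural network is, loosely, many of these wired together:

```python
def step(x):
    """Fire (1) if the weighted input is positive, otherwise stay quiet (0)."""
    return 1 if x > 0 else 0

# inputs -> expected output for logical AND
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.1

for _ in range(20):                      # a few passes over the data
    for (x1, x2), target in examples:
        prediction = step(weights[0] * x1 + weights[1] * x2 + bias)
        error = target - prediction      # the neuron "updates itself"
        weights[0] += learning_rate * error * x1
        weights[1] += learning_rate * error * x2
        bias += learning_rate * error

print([step(weights[0] * x1 + weights[1] * x2 + bias)
       for (x1, x2), _ in examples])     # → [0, 0, 0, 1]
```

No one hand-tuned those weights; the update rule found them, which is the same adaptive idea as the antenna, just running inside the program's own parameters.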
Is this way of explaining AI helpful to you? AI experts — is it accurate?