Can a neural network learn the rules to Conway’s Game of Life? Why, and how?

It’s pretty clear now that deep learning is not, by itself, the answer to artificial general intelligence (AGI). Neural nets often work well, but only in the domain they were trained in, without true understanding or abstraction. Reinforcement learning has turned out to be unstable, data-hungry, and basically a brute-force method that doesn’t transfer to real-world tasks.

As an old-school AI practitioner, I think we need to remember where we came from, so let me take you back to a time when people were a little bit crazy, gradient descent wasn’t the only optimization method, and we had to code simulated annealing by hand in C, or Matlab if we were lucky. …
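In that spirit, here’s a minimal sketch of simulated annealing, in Python rather than the hand-rolled C of the era: minimize a bumpy function by occasionally accepting uphill moves. The objective and cooling schedule are toy choices, nothing from a real project.

# Toy simulated annealing: minimize f by sometimes accepting worse moves.
import math
import random

def f(x):
    return x ** 2 + 10 * math.sin(x)   # a bumpy objective with local minima

x = random.uniform(-10, 10)
temperature = 10.0
while temperature > 1e-3:
    candidate = x + random.gauss(0, 1)
    delta = f(candidate) - f(x)
    # Always accept improvements; accept worse moves with probability e^(-delta/T).
    if delta < 0 or random.random() < math.exp(-delta / temperature):
        x = candidate
    temperature *= 0.999   # cool down slowly
print(x, f(x))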


This time it gets even more intriguing: I used attention to predict stock prices. It worked, and there were some weird discoveries along the way.
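For anyone new to the term, this is roughly what “attention” means mechanically; a few lines of numpy, not the actual stock model from the article.

# Scaled dot-product attention: each query takes a softmax-weighted mix of values.
import numpy as np

def attention(Q, K, V):
    scores = Q @ K.T / np.sqrt(K.shape[-1])           # query-key similarity
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)                # softmax over time steps
    return w @ V                                      # weighted mix of values

x = np.random.randn(30, 8)      # e.g. 30 days of 8 features each
out = attention(x, x, x)        # self-attention over the window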

Back in January, I gave over control of my Robinhood stock trading account to an AI that I’d made. Many people read, and responded to, the article I wrote about it — offering expertise, pointing out the idiocy involved, pondering the philosophical implications of it all. It even led to me being interviewed on a Christian podcast.

The gist was that as someone who builds AI and knows little about trading, I wanted to make an AI to do the hard work. The context was that I was competing against my wife — and (probably) more importantly, trying to bootstrap our meager retirement fund from a couple weeks’ SF rent into a Cancun beach villa. …


… and how I built the AI.

Robinhood, if you don’t know it, is a commission-free stock trading platform that Bay Area techies like me love. I can buy and sell shares very conveniently from my phone. I’m fairly new to it, and started trading against my wife a few months ago as a game (we are quite competitive).

This is FOX’s Masked Singer (image from The Wrap). Very little about it is relevant to an article like this, but it’s a fun show with amazing costumes and competition.

Despite thinking I know everything there is to know about hardware and software companies (“intuitively”), I have found that actively trading shares is quite difficult, and have been consistently losing week-on-week to my non-techie wife. First, you have to find the time to be on top of the news about your portfolio companies, their industry segments, and market fundamentals. Second, having friends at those companies would probably help. …


If an ML model makes a prediction in Jupyter, is anyone around to hear it?

Probably not. Deploying models is the key to making them useful.

This isn’t only the case if you’re building a product, where deployment is a necessity; it also applies if you’re generating reports for management. Ten years ago it was unthinkable that execs wouldn’t question assumptions and plug their own numbers into an Excel sheet to see what changed. Today, a PDF of impenetrable matplotlib figures might impress junior VPs, but it could well fuel ML skepticism in the eyes of experienced C-suite execs.
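To be concrete, “deploying” can be as simple as putting the model behind an HTTP endpoint. A minimal sketch with Flask, where predict is a placeholder for whatever you actually trained in Jupyter:

# Serve a model over HTTP so others can poke at it interactively.
from flask import Flask, jsonify, request

app = Flask(__name__)

def predict(features):
    # Stand-in model: swap in e.g. a pickled scikit-learn estimator.
    return sum(features) / len(features)

@app.route("/predict", methods=["POST"])
def serve():
    features = request.get_json()["features"]
    return jsonify({"prediction": predict(features)})

if __name__ == "__main__":
    app.run(port=5000)

Now anyone, exec or otherwise, can plug in their own numbers: curl -X POST -H "Content-Type: application/json" -d '{"features": [1, 2, 3]}' localhost:5000/predict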

Don’t help bring about the end of the AI hype cycle! …


A little while ago I was teaching a Berkeley class on data analytics, and one of the exercises had students go through the Python stdlib to find interesting modules. I went through the docs too and was delighted to find that Python has a turtle! Do you remember?

FORWARD 10
LEFT 90
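Those two Logo commands translate almost one-to-one into the stdlib turtle module:

# The same two commands, using Python's built-in turtle graphics.
import turtle

t = turtle.Turtle()
t.forward(10)    # FORWARD 10
t.left(90)       # LEFT 90
turtle.done()    # keep the window open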

My high school in the 90s was not rich enough to have a robotic turtle with a pen built in (tiny violins start playing), but I remember being entranced by the on-screen movements of the magical creature, which I think at the time was running on an Acorn Archimedes, one of the first machines built around the processor from Acorn that grew into today’s chip giant ARM. …


Technical analysis lies somewhere on the scale from wishful thinking to crazily complex math. If there’s a real trend in the numbers, irrespective of the fundamentals of a particular stock, then given a sufficient function approximator (… like a deep neural network), reinforcement learning should be able to figure it out.
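As a flavor of what that means, here’s a toy sketch (not the project’s actual agent): tabular Q-learning on a synthetic drifting price series, with states reduced to “did the price just go up?” and actions reduced to in-or-out of the stock.

# Tabular Q-learning on a synthetic price series.
import numpy as np

rng = np.random.default_rng(0)
prices = np.cumsum(rng.normal(0.1, 1.0, 5000)) + 100   # drifting random walk

n_states, n_actions = 2, 2        # state: last move up?; action: 0=out, 1=hold
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.1

for t in range(1, len(prices) - 1):
    state = int(prices[t] > prices[t - 1])
    action = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(Q[state]))
    reward = (prices[t + 1] - prices[t]) if action == 1 else 0.0
    next_state = int(prices[t + 1] > prices[t])
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])

print(Q)   # with upward drift, "hold" should come out ahead in both states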

Here’s a fun and maybe profitable project in which I try to do that. I’ve been working with RL for less than six months and am still figuring things out, but after making AI learners for a few basic games, time-series stock market data was top of mind.

It wasn’t easy, and instead of presenting myself as an expert with a solved problem, this article documents my process and the blunders I made along the way. There were many. …


It took me a long time to get started with my DeepLens: it’s a confusing, buggy piece of kit. You’re supposed to use it with various AWS services called things like Greengrass and SageMaker, but as a hacker I needed to figure it out at a low level by SSHing in, poking around the hardware, and generally owning the device before touching the enterprise-grade services. I finish up this article by running real-time image classification on the device with TensorFlow.
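For flavor, here’s roughly what such a classification loop looks like with plain TensorFlow and OpenCV; it assumes a generic camera at index 0 and an off-the-shelf ImageNet model, not the DeepLens’s own registered model.

# Real-time classification: grab frames, classify, overlay the top label.
import cv2
import tensorflow as tf

model = tf.keras.applications.MobileNetV2(weights="imagenet")
decode = tf.keras.applications.mobilenet_v2.decode_predictions
preprocess = tf.keras.applications.mobilenet_v2.preprocess_input

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    x = cv2.resize(frame, (224, 224))[:, :, ::-1]      # BGR -> RGB
    x = preprocess(x.astype("float32")[None, ...])
    label = decode(model.predict(x, verbose=0), top=1)[0][0][1]
    cv2.putText(frame, label, (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
    cv2.imshow("classify", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()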

Let me make clear that my DeepLens was free from last year’s AWS conference. It’s overpriced at $250 and underspec’d, and I think there are better options if you haven’t yet bought one (i.e. …


You never know how a combination of events is going to affect you.

For me, bingeing an old episode of Brooklyn Nine-Nine, combined with TensorFlow’s recent announcement that they’ve officially incorporated the Edward probabilistic programming library, set me to thinking about Bayesian probability for the first time in a while.

In the episode, Captain Holt and his husband are arguing about the Monty Hall problem.

One of TV’s best characters, Captain Holt in Fox’s (now NBC’s!) Brooklyn Nine-Nine. A rare show of emotion for AI here.

I’m not familiar with the old gameshow, but the formulation of the problem is something like this:

There are three doors: behind one is a car, and behind the other two are goats. Pick one. Once you have picked, Monty, the gameshow host, will open one of the two doors you didn’t pick, one that definitely does not have the car behind it. …
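If the answer feels counterintuitive, a quick Monte Carlo simulation (my own sketch, not from the episode) settles it: switching wins about two-thirds of the time.

# Simulate the Monty Hall game many times, with and without switching.
import random

def play(switch, trials=100_000):
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)
        pick = random.randrange(3)
        # Monty opens a door that is neither your pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print("stay:  ", play(switch=False))   # ~0.33
print("switch:", play(switch=True))    # ~0.67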


It’d be really interesting to model the success and failure over time of US zip codes. I’ve been thinking about how to use multi-agent simulation to model how population groups evolve over time.

This weekend, instead of doing that, I decided to make something pretty: if each zip code could be ‘alive’ or ‘dead’, how would the US evolve? There are about 33,000 zip codes in the United States, my majestic (and really, ridiculously large) adoptive home. That makes an interesting grid for Conway’s Game of Life.
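For reference, one step of Life’s rules fits in a few lines using a 2D convolution to count live neighbors; this sketch uses a square array, whereas the real zip-code grid’s adjacency is messier.

# One Game of Life step: count neighbors, then apply the survival/birth rules.
import numpy as np
from scipy.signal import convolve2d

def life_step(grid):
    kernel = np.array([[1, 1, 1],
                       [1, 0, 1],
                       [1, 1, 1]])
    neighbors = convolve2d(grid, kernel, mode="same", boundary="fill")
    # Live cells survive with 2-3 neighbors; dead cells are born with exactly 3.
    return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(int)

rng = np.random.default_rng(1)
grid = rng.integers(0, 2, (50, 50))   # a stand-in for ~33,000 zip codes
for _ in range(10):
    grid = life_step(grid)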

Here’s how the grid looks:

Hawaii, Alaska and Puerto Rico are indeed simulated, but hard to screen capture. Sorry!

And here’s a bit of how it evolves when it’s…


Generative AI is magical. Faced with a large, personalized text dataset, the average data-inclined engineer has but one choice: train a neural network on it, to predict new text.

Names redacted to protect the innocent people whose jobs are about to be automated by AI

The idea behind the network is simple: given a character and some historical context, make a prediction about what the next character should be. So by priming the network with a character, we should be able to make a prediction about the next character — then feed that back as input and predict the next character, and so on for an arbitrary length of generated text.

In training the network, we try to maximize the accuracy of the prediction (minimize the error) across our sample set. Take the word ‘training’ in the last sentence as an example: if the context was ‘tra’ and the network’s current input was ‘i’, you’d want it to predict an ‘n’ for the next character. …
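Put together, the whole train-then-sample loop fits in a page. A toy sketch with Keras, where the corpus and the tiny model are placeholders for the real dataset and network:

# Train a character model on (context -> next char) pairs, then sample from it.
import numpy as np
import tensorflow as tf

corpus = "train the network to predict the next character. "
chars = sorted(set(corpus))
char_to_ix = {c: i for i, c in enumerate(chars)}
ix_to_char = {i: c for c, i in char_to_ix.items()}
seq_len = 8

X = np.array([[char_to_ix[c] for c in corpus[i:i + seq_len]]
              for i in range(len(corpus) - seq_len)])
y = np.array([char_to_ix[corpus[i + seq_len]]
              for i in range(len(corpus) - seq_len)])

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(len(chars), 16),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(len(chars), activation="softmax"),
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")
model.fit(X, y, epochs=50, verbose=0)

# Prime with a seed, then repeatedly sample a next character and feed it back.
seed = "train th"
out = list(seed)
for _ in range(40):
    x = np.array([[char_to_ix[c] for c in out[-seq_len:]]])
    probs = model.predict(x, verbose=0)[0].astype("float64")
    probs /= probs.sum()   # renormalize float32 softmax output for sampling
    out.append(ix_to_char[int(np.random.choice(len(chars), p=probs))])
print("".join(out))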

About

Tom Grek

Engineer in the NLP/AI space. Maker, hacker, MS EE. AGI enthusiast. Traveler and nature lover. Opinions are my own.
