[Image source: Visor]

Gaming is a huge industry, and will be a huge market for AI

Derrick Harris
Published in ARCHITECHT
Aug 16, 2018

This is a reprint (more or less) of the ARCHITECHT newsletter from Aug. 14, 2018. Sign up here to get new issues delivered to your inbox.

Talk about stating the obvious, right? But I do think video games get overlooked as a lucrative use case for artificial intelligence, despite their common usage as training and proving grounds for deep reinforcement learning techniques.

Mostly, gaming is overlooked (IMHO) because it’s not really a great example of the world-changing impacts people like to talk about when they talk about AI. Not only do claims like “You’re gonna see sick graphics and adaptive bots” lack the ring of “AI is going to add trillions to the world economy,” they actually seem to be in conflict with such grandiose claims. Anything that keeps people playing for longer is great for game developers, but seemingly less great for productivity as a whole.

But nonetheless, gaming is only getting bigger — a fact of which I’m reminded every time I drive past the Luxor in Las Vegas, which, last time I looked, had an ad for its new e-sports arena taking up the entire strip-facing side of the pyramid. And any business this big (every report I’ve come across puts it at north of $100 billion a year and growing) is going to be a magnet for AI developers, regardless of the esteem in which serious people hold it.

Today was a great reminder of this, because I read that:

  • Nvidia claims its new Quadro processor, which comes complete with tensor cores for deep learning, will significantly improve the graphics capabilities available to game developers and, presumably, gaming systems (there’s a toy sketch of the general idea after this list). According to a press release quote from CEO Jensen Huang:

“At some point you can use AI or some heuristics to figure out what are the missing dots and how should we fill it all in, and it allows us to complete the frame a lot faster than we otherwise could,” Huang said, describing the new deep learning-powered technology stack that enables developers to integrate accelerated, enhanced graphics, photo imaging and video processing into applications with pre-trained networks.

“Nothing is more powerful than using deep learning to do that.”

  • Accel led a $4.7 million round in a startup, Visor, that uses deep learning to give gamers real-time advice as they’re learning to play complex, high-speed games like Fortnite. I am admittedly getting old and mostly keep my gaming to Breath of the Wild, Mario Kart and Intelligent Qube (yes, the original), but can see the appeal of what Visor is doing every time I try new games (even non-online, non-competitive ones) and am immediately overwhelmed by everything going on and everything I need to keep track of.
  • Most video game “AI” still doesn’t use actual AI, at least not by the current neural-network-centric definition of the term. This seems almost guaranteed to change as the next generation of gaming systems gets AI-capable chips embedded in the consoles themselves. There’s definitely a balancing act in finding the right level of AI so that it’s challenging and realistic, but not impossible (you don’t want to play against AlphaGo, because you will lose). You also can’t pre-train a game on a specific player’s patterns, because, well, that data doesn’t exist until someone plays (a toy sketch of that online-adaptation idea also follows after this list). But soon enough, that’s going to happen, and games will be even more compelling because of it. (Although, don’t get me wrong, I loved besting CPU players in NES games with the exact same move every time as much as the next guy.)
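To ground Huang’s “missing dots” comment a bit, here is a minimal, purely hypothetical PyTorch sketch of the general idea: train a small network to reconstruct a full frame from a sparsely sampled one. This is not Nvidia’s actual technology stack; the model, shapes and training data are all made up for illustration.

```python
# Hypothetical illustration of "filling in the missing dots": train a small
# CNN to predict the pixels a renderer didn't compute. Not Nvidia's stack.
import torch
import torch.nn as nn

class FrameCompleter(nn.Module):
    def __init__(self):
        super().__init__()
        # Input: sparsely sampled RGB frame plus a mask of which pixels exist.
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, kernel_size=3, padding=1),
        )

    def forward(self, sparse_rgb, mask):
        return self.net(torch.cat([sparse_rgb, mask], dim=1))

model = FrameCompleter()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Toy training loop on random "frames"; a real system would use rendered data.
for step in range(100):
    full = torch.rand(8, 3, 64, 64)                   # ground-truth frames
    mask = (torch.rand(8, 1, 64, 64) < 0.25).float()  # ~25% of pixels sampled
    sparse = full * mask                              # the partially rendered frame
    loss = nn.functional.mse_loss(model(sparse, mask), full)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

A real pipeline would feed actual rendered samples (plus things like motion and depth buffers) rather than random noise, but the shape of the problem, predicting the pixels you didn’t compute, is the same.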
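And on the can’t-pre-train-on-a-specific-player point, here is an equally hypothetical sketch of the simplest possible version of learning from a player as they play: a rolling win-rate tracker that nudges bot difficulty toward a competitive-but-winnable target. No console or game necessarily works this way; it’s just to make the idea concrete.

```python
# Toy illustration of adapting to a specific player online, since no
# per-player data exists before they start playing. Purely hypothetical.
from collections import deque
import random

class AdaptiveDifficulty:
    def __init__(self, target_win_rate=0.5, window=20):
        self.target = target_win_rate        # aim for competitive but winnable
        self.results = deque(maxlen=window)  # rolling record of recent matches
        self.difficulty = 0.5                # 0.0 = trivial bot, 1.0 = max strength

    def record(self, player_won: bool):
        self.results.append(1.0 if player_won else 0.0)
        win_rate = sum(self.results) / len(self.results)
        # Nudge difficulty up if the player wins too often, down if they lose too often.
        self.difficulty += 0.05 * (win_rate - self.target)
        self.difficulty = min(1.0, max(0.0, self.difficulty))

# Simulate a player who wins more often when the bot is weaker.
tuner = AdaptiveDifficulty()
for match in range(50):
    player_won = random.random() > tuner.difficulty
    tuner.record(player_won)

print(f"settled difficulty: {tuner.difficulty:.2f}")
```

A neural-network opponent would be doing something far richer than this, of course, but the constraint is the same: whatever the game learns about a specific player, it has to learn online.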

So, yeah, gaming might not be your thing, but if AI can juice industry revenue — and it can — then we should expect investment to ramp up pretty significantly. Better yet for game developers: The consequences of getting it wrong or relying on black-box models are not life-or-death, which means a lot more freedom to experiment, roll out new models and choose whatever approach works best.

AI and machine learning

Finding the Goldilocks zone for applied AI

There’s some good advice in here, all based around finding the right time, action and performance minimums for an application. I think the part about prediction horizons is underappreciated both technologically (can you provide the model with enough data to learn?) and practically (how do you act on a prediction that something will happen in 20 years?).

techcrunch.com

Cybersecurity startup Exabeam raises $50 million Series D

I’m reminded of the post I linked to yesterday about the dangers of relying too much on AI for cybersecurity. Exabeam’s claimed success against Splunk (it operates in the SIEM space) shouldn’t be too surprising, as Splunk wasn’t created with security as a focus.

zdnet.com

3 promising areas for AI skills development

There are some good insights in here, although I suspect some of them are unique to O’Reilly’s audience. Explainability and transparency are going to be very important, but I’m not sure they’d top the list of model-building concerns for most people right now.

oreilly.com

Intel and Philips use Xeon chips to speed up AI medical scan analysis

When you think about Xeon for AI inferencing, think about this partial quote from the article: “Our customers can use their existing hardware to its maximum potential.” We’re still a long way from GPUs and/or specialized AI processors inside every device.

venturebeat.com

JAIC: Pentagon debuts artificial intelligence hub

This analysis of the Pentagon’s AI plan, from the Bulletin of the Atomic Scientists, is quite thorough and well-reasoned. One of the bigger hurdles mentioned is the seeming gap between Silicon Valley (where the tech/talent is) and the military (where the tech is needed). Something tells me they’ll be able to find common ground on at least some meaningful issues.

thebulletin.org

Cloud and infrastructure

Performing VM mass migrations to Google Cloud with Velostrata

Google announced this acquisition back in May, but details have been few since then. Here’s a more detailed explanation of how it all works.

google.com

Q&A Scalyr’s Steve Newman: Faster log queries, scalable app monitoring

Come for the discussion of log management, but stay for the earlier discussion of Newman’s time at Writely, which became Google Docs.

thenewstack.io

Why use an FPGA instead of a CPU or GPU?

Seems like a level-headed approach to this question. If the application can truly benefit from an FPGA’s latency and potential efficiency advantages, then the engineering effort will be worth it. Even cloud providers use them for pretty specific workloads, from what I understand.

esciencecenter.nl
