Teasers for new works of digital fiction

Detail from quilt by Adeline Harris Sears (1839–1931)

Writing fiction is hard in any medium, but interactive fiction introduces additional mechanical constraints: the technical challenge inherent in writing software, and the narrative complexity of branching or procedural content. There’s a third, human factor, too: this is a small community and that brings an uncertain effort-to-reward ratio. An interactive fiction author may spend hundreds of hours on a game only to receive limited feedback from a tiny audience.

To gauge interest and get early recognition before the hard work begins, the community runs Introcomp, a competition for unfinished pieces. …


Reviews of the 2018 entrants

From Les fleurs animées (1867)

Last year’s Interactive Fiction Competition had the largest number of entrants ever, and with nearly 80 games and stories most people had no hope of playing them all. IF Comp’s sibling festival, Spring Thing, is less intimidating for both players and authors: it has looser guidelines, lower stakes, and more experimentation.

I played all fourteen entrants in the Main Festival (completed works, eligible for prizes) and most of the Back Garden entries (experimental or unfinished works). …


Via UCLA’s Young Research Library collection

My favorites from 2017

Every November, a hundred or so people write computer programs that attempt to generate “novels.” NaNoGenMo is a tongue-in-cheek competition that, like its inspiration, National Novel Writing Month, has no prizes or rankings. There are only two rules: your program must output at least 50,000 words, and you must publish its source code.
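Those two rules set a very low bar on purpose. A minimal entry can be sketched in a few lines; the vocabulary below is a stand-in, since a real entry would draw on a corpus, a grammar, or some other generative idea.

```python
import random

# A deliberately trivial NaNoGenMo-style generator: sample words at
# random until the output crosses the 50,000-word threshold. The
# vocabulary is illustrative only.
VOCAB = ["meanwhile", "the", "river", "whispered", "softly", "and",
         "nobody", "listened", "until", "morning"]

def generate_novel(min_words=50_000, seed=0):
    rng = random.Random(seed)
    words = [rng.choice(VOCAB) for _ in range(min_words)]
    return " ".join(words)

novel = generate_novel()
print(len(novel.split()))  # 50000
```

Publishing this source file alongside the output would satisfy both rules, which is exactly why the interesting entries are the ones that aim far higher.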

NaNoGenMo inspires news stories in the vein of “Can AIs Replace Authors?” and while I appreciate the attention those pieces give to individual entries, they tend to miss the spirit of the whole endeavor.

These are tributes to human creativity, not attempts to subvert or replace it…


Some asides about asides

For this year’s annual Interactive Fiction Competition, I entered a hypertext story called Harmonia. (It came in third place, hooray.) The story, in part, pertains to the ways in which narratives are re-read and re-interpreted over time, as expressed by annotations and scholarly commentary. I’ve always been fascinated by the scribbles found in old or scanned books; I once wrote a program to deface books for me. Harmonia is a story about the dialogue that annotation represents between readers separated by years, decades, or even centuries.

Competition authors often write postmortems about their works, and because reviewers expressed interest in…


Part 3: Generation and task learning

George Arents Collection, The New York Public Library. “Automatic telephone exchange.” The New York Public Library Digital Collections.

(This is a continuation of “What artificial intelligence can do today: Prediction, classification, and regression,” and is part of a longer series on AI Literacy.)

Generation

When we talk about “training a network,” often what we’re really saying is that we’re teaching the machine about probabilities:

  • If an image is mostly blue with round blobs of white, it’s probable that it’s a picture of the sky; but if it’s mostly blue with triangular blobs of white, it suddenly becomes more probable that it’s a picture of sailboats.
  • If a sentence begins with The, it’s more probable that the next word is…
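That second bullet can be made concrete with nothing more than bigram counts: estimate P(next word | previous word) from how often each pair occurs in training text. The tiny corpus here is illustrative, not a real training set.

```python
from collections import Counter, defaultdict

# Estimate next-word probabilities from raw bigram counts.
corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat ran .").split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word_probs(prev):
    """P(next word | prev), estimated from bigram frequencies."""
    counts = following[prev]
    total = sum(counts.values())
    return {word: count / total for word, count in counts.items()}

print(next_word_probs("the"))
# {'cat': 0.4, 'mat': 0.2, 'dog': 0.2, 'rug': 0.2}
```

A trained neural network does the same job at vastly larger scale, with probabilities conditioned on much richer context than a single preceding word, but the underlying idea is this table of conditional probabilities.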

Part 2: Better data science with prediction, classification, and regression

George Arents Collection, The New York Public Library. “TIM.” The New York Public Library Digital Collections.

In the first post in this series on artificial intelligence aimed at a non-technical audience, I provided definitions for the major AI buzzwords: machine learning, neural networks, and deep learning. Given the fundamental principles of modern AI — that ML systems learn from lots of examples, and that deep learning enables richer representations of those examples — what can we reliably do with these systems today? What’s practical, what’s still emerging from research, and what remains unsolved?

AI capabilities can be assessed along two broad axes:

Better data science

Tasks that we’ve long been able to do with traditional statistics or software engineering…


Simple answers to common questions about AI and machine learning (part 1 of a series)

George Arents Collection, The New York Public Library. “The machine compared with a human brain.” The New York Public Library Digital Collections.

Though I’ve been following machine learning for a long time, only recently have I tried to become a practitioner. Last year I threw myself into learning the fundamentals of natural language processing, and wrote a five-part series on NLP aimed at other programmers. This year I’m tackling machine learning more broadly, with a focus on text comprehension and production.

Friends and colleagues are naturally curious about artificial intelligence, and tend to ask me the same reasonable questions. Here, I’ll answer them to the best of my ability. While the answers won’t be too technical, I will try to make…


Adapted from Playful Generative Art: Computer-Mediated Creativity and Ephemeral Expressions, presented at Simon Fraser University in February 2017

Detail from “I Saw the Figure 5 in Gold” by Charles Demuth via The Metropolitan Museum of Art, licensed under CC0 1.0

Any technology as powerful as artificial intelligence provokes ethical questions. Does AI make killing by remote control too consequence-free? Do AI models systematize existing biases? Are we coding ourselves out of jobs?

Lately I’ve been thinking about ethical guidelines that should apply to generative media: narratives or images that are created mainly or entirely by computers rather than people. I’ve drawn much of my thinking from the Twitter bot community; these artists sit at the intersection of “stuff made by computers” and…


2016 edition

When I first started as an independent ebook consultant, I went to big publishing conferences like Book Expo America to try to drum up work. I had no clients and no contacts, which meant I had a lot of time to wander around feeling like a failure and consoling myself with free books from the expo hall. I’d get too many to carry home so I’d ship them, at a fairly absurd expense given that most of them weren’t very good.

But there were lots of great finds too. Some I probably would’ve read eventually, like Anathem, The Magicians, and…


Applying order to the procedural universe

from Our Young Folks (1865)

NLP is deeply concerned with language models: systems that attempt to represent human language through statistical relationships and/or formal logical grammars. Language models usually don’t attempt to represent semantics or otherwise relate language to the real world. Programmers can get pretty far treating language as an abstract logic puzzle, but if you spend any time doing procedural text generation or comprehension, the limitations become obvious.

One way to address this is to layer on a world model: an additional representation of what words mean.
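A minimal sketch of that layering, under the assumption that the world model is just a lexicon of semantic attributes (a real system would be far richer):

```python
# A toy "world model": each noun carries semantic attributes, and the
# generator consults them so its output stays consistent with the world
# rather than with word statistics alone. The entries are hypothetical.
WORLD = {
    "lantern": {"animate": False, "portable": True},
    "horse":   {"animate": True,  "portable": False},
    "letter":  {"animate": False, "portable": True},
}

def things_that(attribute):
    """Nouns whose world-model entry has the given attribute set."""
    return sorted(w for w, attrs in WORLD.items() if attrs.get(attribute))

def describe_carry(noun):
    # A pure language model might happily emit "she carried the horse";
    # the world model vetoes combinations the attributes rule out.
    if WORLD[noun]["portable"]:
        return f"She carried the {noun}."
    return f"The {noun} was too large to carry."

print(things_that("portable"))   # ['lantern', 'letter']
print(describe_carry("horse"))   # The horse was too large to carry.
```

The point isn’t the data structure; it’s that word choice gets filtered through a representation of what the words denote, which is exactly what a bare language model lacks.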

Liza Daly

Engineering Director at @TheDemocrats. I try to make nice things with games, books, and bots. https://lizadaly.com/
