Bot capitalism will fail
There’s been a lot of hype surrounding chat bots lately. I love bots, and usually I’m all for getting excited about the things I love, but I think the recent hype is badly misguided. The people inflating the current bubble are not natural persons but corporate persons: they are excited about bots as products. Whenever the spectre of money enters a domain that was previously commercially nonviable, a lot of people get very excited about making money and miss the point spectacularly (pun kind of intended), and this case is no different. It’s still worth talking about this specific case, though, because with bots, the commercial focus of yuppies and suits is fairly likely to convince everyone outside the core art-bot community that bots are something worse than useless: that they are supremely uninteresting.
The first thing I’m going to discuss is conversational interfaces, because the sudden surge of interest in bots stems from the fact that several companies have been shipping speech-based conversational interfaces, and many current commercial bots are intended to be the equivalent of these speech-based interfaces run over existing text-based communications protocols.
I am largely unimpressed with conversational interfaces. They have a long history: many early and influential AI systems would be classified as conversational interfaces, and very simple ones have been a staple of learn-to-program books aimed at the BASIC set since the introduction of the first 8-bit home computers.
Ultimately, conversational interfaces fall into two categories: interfaces that keep up an illusion of intelligence and personality by ignoring most of the input and searching for particular keywords, and interfaces that are effectively special-purpose command lines with snarky or otherwise unprofessional error messages. The former type is epitomized by the search engine “Ask Jeeves”, which achieved its “flexibility” by implicitly inserting an OR between each term in the input (thus causing the term with the greatest TF-IDF score to rise to the top of the results and become a de facto keyword); other examples include ALICE and ELIZA. The latter type is epitomized by the adventure game “Zork”, which understood a somewhat English-like and very limited programming language and would mock you if you attempted an invalid operation. Siri, Cortana, Echo/Alexa, and Google Now are all combinations of these two forms.
The trouble with conversational interfaces of the former type is that they are inflexible and difficult to predict. A given input will produce some output, based on pattern-matching, but to avoid seeming unintelligent, the machine will be inclined to always respond with something, ideally something somewhat randomized. A human being, without access to the source code, will eventually build up a folk model of which patterns do or do not produce the desired result, but such a model has no guarantee of accuracy or completeness. A user may want some operation that is in fact built in, but the complete list of available functions (necessarily small, since each one has to be hand-written by a human being and given invocation rules that don’t conflict with other functions) will never be distributed to prospective users, because that would ruin the “magic” of a somewhat human-like interface. The ideal end-game for such an interface is for a user to memorize a handful of commonly used patterns in their most concise form and otherwise use it for its novelty value as a conversational partner: a world of people barking “MOVIE SHOWINGS BROOKLYN DEADPOOL” at their phones instead of typing the same query into Google.
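To make the first type concrete, here is a minimal sketch of how such a keyword-driven interface works. This is my own illustration, not any shipping product's code; the keywords, replies, and fallbacks are invented for the example. Note how it never admits it didn't understand: it always answers with something, lightly randomized.

```python
import random

# Keyword rules: ignore most of the input, scan for known trigger words.
# (All keywords and canned replies here are invented for illustration.)
RULES = {
    "weather": ["It looks sunny to me!", "Bring an umbrella, maybe?"],
    "movie": ["Looking up showtimes near you...", "There's a 7pm showing nearby."],
    "hello": ["Hi there!", "Hello! How can I help?"],
}

# Vague, randomized non-answers that preserve the illusion of understanding.
FALLBACKS = ["Interesting! Tell me more.", "Why do you say that?", "Hmm, go on."]

def respond(utterance: str) -> str:
    """Return a reply for the first matching keyword, or a vague fallback."""
    words = utterance.lower().split()
    for keyword, replies in RULES.items():
        if keyword in words:
            return random.choice(replies)
    # Never admit defeat: always respond with *something*.
    return random.choice(FALLBACKS)
```

The folk-model problem described above falls straight out of this structure: nothing about the interaction reveals which words in `RULES` actually trigger behavior, so the user is left guessing at the pattern table.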
The trouble with conversational interfaces of the latter type is that, by being English-like, the language understood by the interface will never be sufficiently minimal for a non-casual user, and by being “entertaining”, the error messages will never be specific enough for a casual user to trivially determine what he or she is doing wrong. The ideal end-game for such an interface is for a user to memorize a needlessly verbose and limited programming language and be able to type “FEED TROLL TO TROLL” with the expectation that doing so will cause the troll to eat himself and be defeated.
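The second type can be sketched just as briefly: under the English-like surface there is a rigid verb-noun grammar, and anything that doesn't fit it gets a jokey, non-specific error. The verbs, objects, and error message below are invented for illustration, in the spirit of a text adventure rather than copied from one.

```python
# A toy verb-noun command parser of the "snarky command line" variety.
# (Vocabulary and error text are invented for illustration.)
KNOWN_VERBS = {"take", "drop", "feed", "open"}
KNOWN_OBJECTS = {"lamp", "sword", "troll", "door"}

def parse(command: str) -> str:
    """Accept rigid VERB OBJECT [TO OBJECT] commands; mock everything else."""
    # Filler words like "to" are discarded, which is what makes the
    # grammar feel English-like without actually being English.
    words = [w for w in command.lower().split() if w != "to"]
    if len(words) >= 2 and words[0] in KNOWN_VERBS and words[1] in KNOWN_OBJECTS:
        return f"You {words[0]} the {words[1]}."
    # An "entertaining" error: funny once, useless the tenth time,
    # and it never says which word was the problem.
    return "A hollow voice says: what a concept!"
```

The casual user's frustration lives in that last line: the error is the same no matter whether the verb was unknown, the object was unknown, or the word order was wrong.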
Taken to infinity, the ideal form of the first type is a search engine. Taken to infinity, the ideal form of the second type is a Unix shell.
Commercial bots, to the extent that they are expected to reliably perform potentially dangerous operations like making purchases, editing calendar entries, and controlling home automation systems, are going to remain quite close to the second form. This is a shame, because there is absolutely nothing revolutionary about a shitty command line, and it doesn’t do justice to bots in general to imply that all of them are like that.
Consider the non-commercial bot: the art-bot. The art bot is varied in its form. The art bot, because it is a bot, is able to tirelessly perform intellectual tasks. The art bot, because it is art, focuses on tasks relating to recontextualizing ideas, words, images, and perceptions. The art bot is a meaning factory, producing brand new thoughts out of the interference pattern between a PRNG and an audience.
The art bot doesn’t buy anything, or if it does, the fact that you can’t reliably tell it what to buy is part of the point.
The art bot can write music, or poetry, or paint pretty pictures. If you don’t like the art that the art bot produces, too bad. The art bot doesn’t care. The art bot will produce a thousand other pieces while you are deciding whether or not you like that one.
Some art bots tell you to do things. Do you want to take commands from a robot? Maybe. Sometimes the things it tells you to do can’t be done.
Some art bots are funny. Some are even intentionally funny. Any image macro, snowclone, or formula joke is the potential domain of an art bot, which will scour a dictionary to produce millions of variations.
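The formula-joke art bot is almost embarrassingly simple to sketch: a template with slots, word lists to fill them, and a loop. The template and word lists below are invented for illustration; a real art bot would draw its slot fillers from a whole dictionary or corpus, which is where the "millions of variations" come from.

```python
import itertools
import random

# A snowclone template with two slots. (Template and word lists are
# invented for illustration; a real bot would use a full dictionary.)
TEMPLATE = "In Soviet Russia, {noun} {verb} YOU!"
NOUNS = ["television", "sandwich", "algorithm", "bot"]
VERBS = ["watches", "eats", "computes", "follows"]

def all_variations():
    """Yield every filled-in variant of the template, exhaustively."""
    for noun, verb in itertools.product(NOUNS, VERBS):
        yield TEMPLATE.format(noun=noun, verb=verb)

def random_joke() -> str:
    """Pick one variant at random, the way a posting bot would."""
    return TEMPLATE.format(noun=random.choice(NOUNS), verb=random.choice(VERBS))
```

With word lists of realistic dictionary size, the cross product in `all_variations` is exactly the tireless million-variation output described above; the bot doesn't care which ones land.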
An art bot is not a shitty command line. Or, if it is, it’s intentionally shitty. An art bot might mock you for everything you tell it to do. Or, it might systematically break down your hopes and dreams. Or support them.
Freed from the shackles of needing to please a core user base and be immediately useful without ever screwing up, art bots are allowed to be interesting.