The original page from Le Corbusier’s 1950 book The Modulor, and the Twitter app that does the same exercise

It took Twitter four hours to shut down my modernist architecture bot. Why don’t they shut down the anti-semites?

Tom Whitwell
3 min read · Jan 12, 2017

Over a couple of evenings this week I wrote a simple Twitter bot. It was based on page 93 of Le Corbusier’s book The Modulor. The page shows what he calls a panel exercise: take a square and divide it into smaller shapes according to various rules, to see what looks aesthetically pleasing.

My script automates the panel exercise, creating 48 differently divided squares each time it runs.
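An automated panel exercise might look something like this. This is a minimal sketch under my own assumptions (recursive cuts at roughly golden-section ratios); Le Corbusier’s original exercise divides the square by hand according to Modulor proportions, and the function names are mine, not from the bot’s actual source:

```python
import random

def divide(x, y, w, h, depth=0, min_size=10):
    """Recursively split a rectangle into smaller panels.

    Returns a list of (x, y, w, h) tuples that tile the original
    rectangle. Splitting stops at a depth limit, a minimum panel
    size, or at random, so each run looks different.
    """
    if depth >= 4 or w < min_size or h < min_size or random.random() < 0.3:
        return [(x, y, w, h)]
    ratio = random.choice([0.382, 0.5, 0.618])  # golden-section-ish cuts
    if random.random() < 0.5:
        # Vertical cut: left panel plus right panel
        cut = w * ratio
        return (divide(x, y, cut, h, depth + 1)
                + divide(x + cut, y, w - cut, h, depth + 1))
    else:
        # Horizontal cut: top panel plus bottom panel
        cut = h * ratio
        return (divide(x, y, w, cut, depth + 1)
                + divide(x, y + cut, w, h - cut, depth + 1))

# One run of the exercise: 48 differently divided squares
panels = [divide(0, 0, 100, 100) for _ in range(48)]
```

Drawing the resulting rectangles (with any plotting or image library) gives you a grid of 48 divided squares, like the page from the book.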

I turned it into a bot that replies to anyone mentioning ‘Corbusier’ on Twitter (there are a few every hour) with a freshly-generated image and a plausible Corbusier quote generated by a Markov chain.
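A word-level Markov chain for generating plausible quotes works along these lines. This is a sketch, not the bot’s actual code: the toy corpus below stands in for a real collection of Le Corbusier’s writings, and the helper names are my own:

```python
import random
from collections import defaultdict

def build_chain(text, order=2):
    """Map each pair of consecutive words to the words that follow it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain[key].append(words[i + order])
    return chain

def generate(chain, length=12):
    """Walk the chain from a random starting pair to build a 'quote'."""
    key = random.choice(list(chain))
    out = list(key)
    for _ in range(length):
        followers = chain.get(tuple(out[-2:]))
        if not followers:
            break  # dead end: no word ever followed this pair
        out.append(random.choice(followers))
    return " ".join(out)

# A toy corpus standing in for Le Corbusier's collected writings
corpus = ("A house is a machine for living in. "
          "Architecture is the masterly correct and magnificent play "
          "of masses brought together in light.")
quote = generate(build_chain(corpus))
```

Because every transition is learned from the corpus, the output reads like the source author while wandering between sentences in ways they never wrote.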

I turned it on at about 6pm on Wednesday evening.

It got a few likes and follows. People seemed to enjoy playing with it.

It had a brief conversation with a professor of urban planning about the Bauhaus and Indian urban planning in the 1950s.

Then at around 10pm it stopped working after sending 49 messages.

I was confused, but the error message quickly made things clear: Twitter’s systems had noticed it infringed this line in their Automation Rules and Best Practices: “Sending automated replies based on keyword searches is not permitted.”

With hindsight, this rule makes complete sense; a Twitter roamed by countless reply-bots would quickly become unbearable.

Twitter were responsive and helpful — I got an automated reply very quickly that made sense. I’ve changed how the bot works, so it will only tweet out and reply to @ messages. Hopefully they’ll restore my API access sometime.

Obviously my freedom to speak to random modernist architecture enthusiasts has been infringed, but that’s understandable.

So, Twitter have reasonably efficient and effective systems for dealing with straightforward infringements of their Terms of Service.

Not all infringements are equal

Here is another section from The Twitter Rules:

Hateful conduct: You may not promote violence against or directly attack or threaten other people on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or disease. We also do not allow accounts whose primary purpose is inciting harm towards others on the basis of these categories.

And here is David Duke’s twitter account. He joined in September 2009.

Why does this happen?

Any business has to look after its customers. Twitter’s customers are advertisers; they’re the people paying for the service, so it’s important that the advertising system works for them.

Automated replies based on keyword searches subvert the advertising system.

If I was a media agency promoting a Le Corbusier exhibition, @corbusierbot might be a nice way to engage interested people. With a few lines of Python, I could bypass Twitter’s ad system and cost them revenue.

Meanwhile, David Duke’s hate stream is popular, with 27k followers (don’t go and look), so those eyeballs can be sold to advertisers like… Disney.

(This is a screen grab that I took while researching this article. Ads on specific streams are very inconsistent — obviously Disney are targeting me as an individual user, not David Duke as a content creator — so you’ll see different ads, or often no ads at all.)

Hopefully, one day Twitter will find a way to block hate speech as effectively as they block modernist architecture bots.

Tom Whitwell

Consultant at Magnetic (formerly Fluxx), reformed journalist, hardware designer.