The most interesting AWS service…in the world!

Dave North
Signiant Engineering
3 min read · Oct 15, 2018
Mechanical Turk

Edit: As was pointed out in the comments, Turk isn’t strictly an AWS service but rather an Amazon service. Since it’s exposed as a service in the AWS SDK, though, it is related to AWS. I have not edited the text of this article but offer this correction here.

Anyone up on their AWS Jeopardy knows that the first AWS service was SQS. But what’s the most interesting AWS service? DynamoDB? Lex? Lambda would have to be right up there (I vividly recall the gasp from the audience at re:Invent in 2014 when it launched). I was recently reminded, though, of what I think is the most interesting and unique service I’ve used, and thought I’d write about how we used it. The service is Mechanical Turk.

I was reminded about this service recently while listening to a lunch and learn from a colleague talking about all the great information he’d learnt at the AWS Summit in Toronto. One area of interest for my colleague is machine learning, and in the AWS machine learning stack there’s a small box for something called Mechanical Turk alongside the more widely known SageMaker.

What is Mechanical Turk?

The most interesting service…in the world!

Turk is a scalable “army” of human workers available on demand. The basic idea is that there are two types of users: requesters and workers. Requesters post small units of work for…well, workers to work on. These are generally simple tasks that typically pay out pennies each. A common example of a work item is image tagging.
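To make the requester side a bit more concrete, here’s a minimal sketch of posting a HIT with the MTurk client in boto3. The title, reward, URL and timeouts are placeholder assumptions for illustration only, not anything we actually ran:

```python
# Minimal sketch of the requester side of Mechanical Turk using boto3.
# All values below (title, reward, URL, timeouts) are illustrative placeholders.
import boto3

# The sandbox endpoint lets you try the whole flow without paying real workers
mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

# A HIT (Human Intelligence Task) is one small unit of work for a worker.
# An ExternalQuestion simply points workers at a web page hosting the task.
question_xml = """
<ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
  <ExternalURL>https://example.com/my-task</ExternalURL>
  <FrameHeight>600</FrameHeight>
</ExternalQuestion>
"""

hit = mturk.create_hit(
    Title="Tag the objects in an image",      # placeholder task
    Description="Look at an image and list what you see",
    Reward="0.05",                            # USD, paid per completed assignment
    MaxAssignments=1,                         # how many workers you want
    LifetimeInSeconds=24 * 60 * 60,           # how long the HIT stays visible
    AssignmentDurationInSeconds=10 * 60,      # time a worker has to finish
    Question=question_xml,
)
print("Posted HIT:", hit["HIT"]["HITId"])
```

Workers then browse the marketplace, accept the HIT, do the task, and submit; the requester reviews the submissions and pays out.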

How did Signiant use it?

One of the perennial problems in software is getting enough beta and test feedback on a new release or feature before it’s unleashed into the wild. Lots of people show interest in a beta release, but few actually provide meaningful (or any!) feedback.

Some years back (circa 2012), we were working on a new feature for our SaaS product, and before we generally released it, we wanted to get feedback and real-world experience from as many people as possible: around 1,000 if we could manage it. After looking around for a good solution, I came across Amazon Mechanical Turk, which sounded promising. While it wasn’t typically used for the case we were looking at, it did provide access to a wide variety of workers using various end-user platforms and browsers.

In our case, we needed someone to use a web portal to upload a file and answer some basic questions:

  • What OS are you running?
  • What browser are you using?
  • What available bandwidth do you have?
  • etc.

We posted a work item (called a HIT, or Human Intelligence Task, in Turk speak) and waited to see if anyone would pick it up and work on it. I asked for 10 workers for the first test at a price of 50¢ per HIT, with the restriction that all workers had to be unique (i.e. we didn’t want all the work completed by the same worker). We got all 10: 9 of them could complete the task, 1 could not. The feedback was fantastic.
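We did this back in 2012 with the tooling of the day, but with today’s SDK, pulling back and approving the submitted work looks roughly like the sketch below. The HIT ID and feedback text are placeholders, not what we actually used:

```python
# Sketch of collecting and approving the results for a HIT like ours.
# hit_id is hypothetical; in practice it comes back from create_hit.
import boto3

mturk = boto3.client("mturk", region_name="us-east-1")
hit_id = "EXAMPLE_HIT_ID"

# MTurk normally limits each worker to a single assignment per HIT, so asking
# for MaxAssignments=10 is what gives you 10 distinct workers.
resp = mturk.list_assignments_for_hit(
    HITId=hit_id,
    AssignmentStatuses=["Submitted"],
)

for assignment in resp["Assignments"]:
    # Answer is a QuestionFormAnswers XML document with the worker's responses
    print(assignment["WorkerId"], assignment["Answer"][:200])

    # Approving the assignment is what releases the reward to the worker
    mturk.approve_assignment(
        AssignmentId=assignment["AssignmentId"],
        RequesterFeedback="Thanks for the detailed feedback!",
    )
```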

Based on this small experiment, we were able to scale out and get feedback from many more workers. The HIT attracted some good-quality responses, and because it paid well and fell outside the usual “tag this image” fare, it was rated highly by workers.

I think Mechanical Turk is broadly useful for spot testing and quick usability checks. The one thing that thwarts wider adoption is that HITs cannot require workers to install any software, which may limit some use cases. Web-based tasks are fine though.

Over the years, I’ve used many fantastic AWS services (Lambda still amazes me), but Mechanical Turk is still the most interesting (and out there!) service I’ve come across.
