By Lauren Salig
Since the inception of the internet, online crowdsourcing has produced large libraries of free stock photos, tons of scientific data and mostly accurate Wikipedia articles. Apps such as Uber and Airbnb, which are both premised on the concept of crowdsourcing, have captured public attention in the last few years and taken the technology world by storm. More recently, the practice of outsourcing tasks to large groups of people has seeped into an unlikely realm: art.
In NETS 213, Crowdsourcing and Human Computation, four Penn undergraduate students unleashed their creativity, electronically generating art while exploring the dynamics of crowdsourcing. For their final class project, the students asked each of their crowdsourced workers to complete a miniature drawing task that, only when compiled with the other workers’ drawings, created pieces of art. They recruited workers from their class and from an online crowdsourcing platform, allowing them to compare how different factors could impact the outcome of a crowdsourced project.
“We created a system called ‘Crowd Art,’ which effectively enables thousands of individual, anonymous workers — the ‘crowd’ — to generate a collective piece of art. The art is created entirely by nobody, but at the same time by everybody,” says Max Koffler, one of the students involved.
The student creators of “Crowd Art” — Russell Charnoff, Max Koffler, Palmer Paul and William Schwalbe — took the crowdsourcing class as an elective for their Computer Science majors and minors. Charnoff, Koffler and Schwalbe are enrolled in the School of Arts and Sciences as Economics and Computer Science, Cognitive Science, and Economics majors, respectively. Paul is in the School of Engineering and Applied Science studying Computer Science.
The class was taught by Chris Callison-Burch, associate professor in Computer and Information Science, and is part of the Networked and Social Systems Engineering (NETS) program, an undergraduate degree program that combines interests in technology, sociology and economics.
Callison-Burch’s class culminates in a final project in which students get their hands dirty crowdsourcing data from Amazon’s Mechanical Turk, a site that allows researchers to upload simple tasks that workers can complete on their personal computers for monetary compensation. Callison-Burch employs Mechanical Turk in his own research, using the platform to crowdsource translations of less well-documented languages to help machine learning algorithms understand them, and studying the site itself by investigating workers’ wages.
“Ten years ago, crowdsourcing flipped the way that I approached research on its head,” Callison-Burch says. “I used to start with existing data sets and use machine learning to make progress on the problems defined by those data sets. Now I start by asking, ‘What problems are important to solve, and what data do we need to get started on them?’ Crowdsourcing provides me with the tools that I need to approach a huge range of data science problems.”
The student team decided to ask the masses to create artwork depicting a galloping horse, a subject that Schwalbe says they chose because, “due to the increasing popularity of the viral song ‘Old Town Road,’ we all had horses on our mind.”
Given a reference image of a horse, the workers rendered the horse in three distinct styles: a Pointillism style in color that portrayed an impression of the horse and its background, a Mosaic style outlining distinct sections of the horse, and a HeatMap style consisting of small dots in the shape of the horse.
Participants’ canvases were blank, a fragment of the reference image, or the entire reference image for the Pointillism, Mosaic and HeatMap tasks, respectively. The Pointillism task asked workers to build off past crowd contributions, iteratively adding more colored dots to the canvas to mimic the colors of the reference photo. The Mosaic task required the crowd members to outline the part of the horse shown in their fragment of the image, and the HeatMap task asked participants to simply click on the horse’s body ten or more times.
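The HeatMap task, in particular, is simple to picture in code: each worker contributes a handful of clicks, and overlapping clicks from many workers darken into the shape of the horse. The sketch below is illustrative only, not the students' actual implementation; the function name and the coordinate format are assumptions.

```python
from collections import Counter

# Illustrative sketch (not the "Crowd Art" code): combine each worker's
# clicks into one HeatMap-style image. Each worker submits a list of
# (x, y) canvas coordinates; points clicked by more workers get higher
# intensity, so the horse's outline emerges from crowd consensus.

def combine_heatmap(submissions, width, height):
    """Aggregate per-worker click lists into a 2D intensity grid."""
    counts = Counter()
    for clicks in submissions:
        for x, y in clicks:
            if 0 <= x < width and 0 <= y < height:  # ignore stray clicks
                counts[(x, y)] += 1
    max_count = max(counts.values(), default=1)
    # Normalize to [0, 1] so the grid can be rendered as pixel intensities.
    return [[counts[(x, y)] / max_count for x in range(width)]
            for y in range(height)]

workers = [
    [(1, 1), (2, 1)],  # worker A clicks twice on the horse's body
    [(1, 1), (2, 2)],  # worker B overlaps with A at (1, 1)
]
grid = combine_heatmap(workers, width=4, height=4)
```

Here the pixel both workers clicked ends up at full intensity, while single-worker clicks render at half strength, which is why the classmates' more concentrated clicks produced a fuller horse.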
After processing their data, the students were pleasantly surprised with their compiled results. Each art piece was based on the same reference photo but, after being interpreted by dozens of people with unique perspectives, turned out in unpredictable ways.
“We were able to recreate the image of the horse in three different artistic styles and the results were breathtaking. Additionally, due to the nature of how our tasks were completed, we were able to thoroughly analyze two different incentive schemes in creating crowd art,” says Charnoff.
That is, the team used two different methods to create crowdsourced images: in one, their artists were recruited from Amazon’s Mechanical Turk and, in the other, their participants were their classmates.
The art created only by the group’s classmates, who had previous knowledge about the project, came out clearer. For instance, the HeatMap completed by the classmates created a fuller image of the horse and had fewer dots lying outside of the horse’s body.
At first, the team hypothesized that the Mechanical Turk workers, who use the platform as a source of income, weren’t spending as much time on the task as their classmates, who were required to spend a certain amount of time completing tasks for their course. But a quick analysis revealed that Mechanical Turk workers actually spent longer on the HeatMap task on average, guiding the students to an alternative explanation:
“We theorize that our classmates performing ‘better’ in general is likely due to the fact that they knew the end goal of our project when completing the tasks,” says Schwalbe. “This allowed them to direct their efforts towards a more helpful completion of the task rather than an authentic one. In the future, we would be interested in testing how giving different explanations for crowdsourced tasks can impact performance.”
As crowdsourcing becomes an integral part of professional and personal endeavors, it’s important to explore how factors like the phrasing of instructions or the motivation of workers could affect results.
“Crowdsourcing means that suddenly you can design algorithms with humans in the loop,” Callison-Burch says. “Instead of just returning a deterministic value, your algorithms can have a creative response. One of the first things we go over in class is sorting; a computer is going to be faster at sorting a table of numerical values, but if you want to sort kittens by cuteness, you’re probably better off starting by asking a person.”
Crowdsourced data on subjective qualities, like cuteness, are already informing machine-learning approaches, taking humans back out of the loop for such tasks. But giving the next generation of crowdsourcers a big-picture understanding of that interplay is important for more than just maximizing problem-solving efficiency.
“One of the unexpected things about this class,” Callison-Burch says, “is that it forces us to think about the economics, labor laws, ethics and other social issues surrounding crowdsourcing. We always have to remember that there are human beings working on these problems.”
Through this hands-on experience, the students were able to harness the power of crowdsourcing to create intriguing pieces of art that expand the typical understanding of what crowdsourcing can do.
“It was humbling to see a large, diverse mass of people generate a collective piece of artwork — the classical expression of human potential and creativity — via the modern medium of the internet,” says Koffler. “The finished product exhibited that the sum is truly greater than its parts.”