10/9: We were inspired by an article from The Pudding about the Bechdel test and the limitations it has in exploring female representation in film. We felt that this data set could be expanded and visualized differently.
The Bechdel test originated in Alison Bechdel's comic strip Dykes to Watch Out For in 1985 and has since become a common measure of female representation. It asks two questions: are there at least two named female characters, and do they converse about something other than a man? However, the Bechdel test does not comprehensively indicate how good representation is, as it was never designed to be an analytical tool. So, without naming the many other tests of feminist media analysis explored in The Pudding's article, we wanted to explore female representation in movies in more depth.
We struggled with which data set to use and what the baseline comparison should be. Top 10 movies? How many years? How do you define the 'best' movies, and what source would we use to determine that? How recent or how old should the sources be?
We originally started building a data set of the best critically ranked films from 2018 (as 2019 would not have been available yet), with the intention of comparing the list to 2013 and 2008 (for a total of 30 movies). This data set was limited and did not reflect popular culture well; we had not even heard of some of the films. In order to be more indicative of a broader cultural reaction, we changed the Top 10 best to the Top 10 highest domestic-grossing films, on the assumption that if a movie earned more ticket sales, more people were interested in watching it.
Our data set thus became the top ten highest-grossing movies from each year from 1998 to 2018, and we looked at how those films rated according to the Bechdel test, to see if there had been any improvement in female representation over the years (especially in light of more recent movements, like #MeToo). We manually collected the data and decided to ask two additional questions: Was there at least one woman of color (WOC)? Was the lead female?
As we collected data, it became more apparent how the Bechdel test is flawed: it only demands small modifications to the narrative events in the movies we watch and doesn't ask for any deep, structural changes. (For example, a scene in which two named female characters talk about the weather technically passes the Bechdel test, but shows no growth, depth, or exploration of those characters.) So we decided that we wanted to visualize our data in the form of a story/tool that would expose the test's flaws while still providing valuable insight into how the movie industry has fared in recent years.
The more we looked into the films, the more we had to grapple with further questions and clarify our method of data collection. For example, if a character is female but is played or voiced by a man, how does that count toward the Bechdel test? [In the end, we counted it as a female character, but we know this is problematic.]
We asked if the lead was a female character, but then had to define what a 'lead' was. Ultimately, if the story was not predominantly told through a female perspective, it was not a movie with a female lead. (Ant-Man and the Wasp had very strong female characters with major roles, but we decided that Ant-Man's story was more central to the overall narrative, so the film did not have a female lead. For ensembles like the X-Men, we tried our best to judge what role a female character had overall.) We tried our best, but it is a grey area, and difficult to fully explore within our project scope. We also acknowledge that 'female' is subjective in terms of gender; we based this on what Wikipedia stated for each film, and in the end the majority of the films stayed within a gender binary. We asked if there were any women of color at all in each film, to look beyond a white female character; however, this does not take into account what that character's role is in the film. For anthropomorphic films like Zootopia and Sing, we defaulted to the cast.
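The criteria above boil down to four yes/no answers per film. A minimal sketch of how one row of our manually collected data could be encoded and scored (the field names and the example answers are illustrative, not our actual column headers):

```javascript
// Hypothetical shape of one row in the manually collected CSV.
// Each film gets the two Bechdel questions plus our two added questions.
const films = [
  { title: "Example Film A", year: 2018,
    twoNamedWomen: true,  talkNotAboutMan: true,  hasWOC: true,  femaleLead: false },
  { title: "Example Film B", year: 2013,
    twoNamedWomen: false, talkNotAboutMan: false, hasWOC: false, femaleLead: true },
];

// A film passes the Bechdel test only if both of its questions are "yes";
// the WOC and female-lead answers are tracked separately, not folded in.
function passesBechdel(film) {
  return film.twoNamedWomen && film.talkNotAboutMan;
}
```

Keeping the two extra questions as separate fields is what lets the visualization show films like a Bechdel-failing movie that nonetheless has a female lead, instead of collapsing everything into one pass/fail grade.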
How do we represent all of this information?
Using rawgraphs.io, we imported our CSV file and experimented with different graph types. The one we thought had the most potential was the alluvial diagram, although the site had its own constraints.
Questions for the future were: how do we create our own version of this graph, one where we can manipulate and ID each individual line instead of the collective strands that rawgraphs.io creates, and how do we display it for user interaction?
- Flow top left: A 3D rotating graph that filters movies by our criteria.
- Flow bottom left: A vertical alluvial diagram that allows the website visitor to pick a movie and follow it down the page.
- Flow bottom right: Every movie is represented abstractly as a flower; movies that do better in terms of representation have more petals, leaves, etc., creating a metaphor for "growth."
Tutorial Feedback 10/21
Out of our explorations, people preferred the alluvial graph. “I like that it’s going down. Graphically this could look nice as snapshots of the graph and zoomed out. Allows the user to pan and zoom.”
Finalizing Visual Style and Layout
This was hard, and we don't have the necessary web development skills to fully implement the site, so we tried our best! Most likely, some features will be coded, but the majority will be prototyped.
- Originally, Alice tried using p5.js to create the actual data visualization image. It helped us understand how to filter the external data set and draw the lines, but it was difficult for interaction (too hard to click on an individual line!). We decided to use paper.js instead of p5.js to generate a vector image/SVG for implementation. Learning paper.js was a trying process, but in the end an SVG was made! :-)
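Independent of whether p5.js or paper.js does the drawing, the core of the script is giving every film its own addressable strand through the criteria rows of the vertical alluvial diagram. A rough sketch of that positioning logic, in plain JavaScript (the criterion names, bucket positions, and spacing are placeholder values, not our actual layout):

```javascript
// Rows of the vertical alluvial diagram, top to bottom (placeholder criteria).
const columns = ["bechdel", "hasWOC", "femaleLead"];
const rowGap = 200;   // vertical distance between criteria rows
const slotWidth = 20; // horizontal spacing between strands within a bucket

// For each film, compute one point per row: "yes" answers go to the left
// bucket, "no" to the right. A per-bucket slot counter gives each strand
// its own x position, so every film's path stays individually clickable.
function layoutStrands(films) {
  const paths = films.map(f => ({ title: f.title, points: [] }));
  columns.forEach((criterion, row) => {
    const counts = { yes: 0, no: 0 };
    films.forEach((f, i) => {
      const bucket = f[criterion] ? "yes" : "no";
      const baseX = bucket === "yes" ? 0 : 400; // bucket anchor x
      paths[i].points.push({ x: baseX + counts[bucket]++ * slotWidth,
                             y: row * rowGap });
    });
  });
  return paths;
}
```

Because each path carries its film's title, the points can be handed to paper.js (or any SVG layer) one strand at a time, which is exactly the per-line ID that the collective strands from rawgraphs.io couldn't give us.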
- Audrey explored using Webflow to create a prototype that could then be exported to HTML. In the end, it wasn't used.
We struggled to get the rest of the site workable in time, but we have a clickable Flinto prototype that gives a sense of what the interactions would be like.
In reflection, the topic we explored was difficult to grade on a hard binary, which we discovered as we gathered the data. We tried to integrate some of the interesting outlier points into the site itself (e.g., the callouts to Twilight and Gravity), and looking forward, it would be fun to integrate more of them into the experience. We'd like to keep working toward implementing the site.