#tfw your side hustle becomes your main gig
I found out on August 15th that I won a fellowship with BuzzFeed and Eyebeam as an Open Lab Fellow and an Eyebeam project resident. What does that mean? The founder of BuzzFeed, Jonah Peretti, used to work at Eyebeam and run its R&D arm; he also ran a fellowship there called the “Open Labs.” In 2005 or 2006, Peretti left Eyebeam to start BuzzFeed and the Open Lab disappeared. Two years ago, it was restarted at BuzzFeed, and Eyebeam was brought on to guide its facilitation. Eyebeam gets to pick one fellow from that group, and I am that fellow. On the Eyebeam side, I am considered a Project Resident, i.e. a resident whose work comes out of a collaboration between Eyebeam and an outside organization.
So I did what any person would: I promptly burst into happy tears, which I hadn’t done since I found out I got into my master’s program, ITP, in 2012. In September, I quit my dream job at IBM Watson, took a six-week fellowship with the Mozilla Foundation’s Coral Project to help prototype and design better conversational tools for newspapers’ commenting sections, and prepared to move across the country to San Francisco for the Open Lab fellowship.
The past few months (August to October) have been insane. I attended SXSL, spoke at The Conference and the Open Hardware Summit, got a fellowship with the Coral Project, presented a game at Weird Reality, met so many rad people, and moved across the country.
I’m excited to be a full-time resident at both Eyebeam and BuzzFeed, but incredibly excited and honored to have my side hustle (my research into bots, harassment, language, and machine learning) become my day job. I’ve spent the past year lecturing, teaching, designing, and researching while living two tiring lives: my 9–5, where I worked at IBM Watson as a design researcher focused on AI, bots, and machine learning, and my 6–11, where I focused on design research, interviews, activism, and art around online harassment and feminism. It’s been amazing, but tiring, to try to do both. And it’s even more fantastic to make all of this my main gig.
I’ll be doing a lot of things while I’m at my fellowship: exploring an anti-harassment prototype, testing interfaces for machine learning, building templates for those interfaces, making bots, and doing a series of art projects and interventions around surveillance culture. All of it will be open source.
I had applied months earlier to the residency with a provocation: can machine learning help mitigate harassment? Can we try to identify patterns of conversation and distill them into programming and mathematical parameters to give moderators better data? What the hell are mathematical parameters? Things like: have these users ever interacted before, how old are their accounts, what locations are they from, and (for a newspaper) what verticals do they comment on or which journalists’ articles do they mainly comment on. Can we look at users’ relational data to see whether, or how much, harassment can be predicted by how separated users are in an ecosystem? How do we make machine learning systems transparent and not arbitrators of surveillance? And how do we make these systems ethical? Is it possible? I’m building this entire system with Angelina Fabbro to find out. Again, it’s all open source.
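To make those “mathematical parameters” concrete, here’s a minimal sketch of what turning account metadata and interaction history into features for a moderation model might look like. Everything here (the function, the field names, the signals) is an illustrative assumption, not the actual system we’re building:

```python
# Hypothetical sketch: account metadata and interaction history become
# features a moderation model could score. All names and signals here
# are illustrative assumptions, not the real Open Lab prototype.
from datetime import datetime, timezone

def user_pair_features(author, target, interaction_graph):
    """Features describing one commenter (author) replying to another
    user (target). interaction_graph maps a user id to the set of user
    ids that user has previously interacted with."""
    now = datetime.now(timezone.utc)
    author_circle = interaction_graph.get(author["id"], set())
    target_circle = interaction_graph.get(target["id"], set())
    return {
        # How old is the commenter's account?
        "author_account_age_days": (now - author["created_at"]).days,
        # Have these two users ever interacted before?
        "have_interacted_before": target["id"] in author_circle,
        # Do they comment on the same verticals (sections)?
        "shared_verticals": len(set(author["verticals"]) & set(target["verticals"])),
        # Crude "separation" proxy: how much do their circles overlap?
        "shared_contacts": len(author_circle & target_circle),
    }
```

A feature dict like this is exactly the kind of thing you could hand to moderators as context, or feed into a classifier; the open question from above is how to do that transparently.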
Related to this project, I am creating a series of open source machine learning UX and UI templates, currently named ‘temperamental ui.’ The UI and UX will change when the data does. I’m interested in seeing how it will change and what design will look like when it’s optimized for transparency around data rather than for minimal efficiency. This question has been plaguing me for a year: can I create and test ethical UI interfaces for machine learning applications that are applicable to *all* users, not just technical ones? Can it be informed by nature? Can the UI constantly shift like the data? Can it be ethical? Ethical meaning transparent, meaning making users aware of what data is being used, easy to understand, and open source so everyone can use it. Machine learning is a black box of information and technology that runs on big data (user data, all sorts of data); that has ethical ramifications, AND it’s going to radically change product design. So, how can I create templates, focused on ethical uses of data, for designers and technologists who don’t have a background in machine learning, and can I share them? The wrong data set with a not-great algorithm will create false patterns in data; it will create inaccuracies. If a person doesn’t understand machine learning or data science, they may never pick up on those false patterns. There are a lot of thoughts here to unpack, but let me caveat again: all of my research, programming, and tests will be open source (though some things will be anonymized to protect my users and participants). Will my templates ‘work’? I’m excited to find out.
The third series of related projects is on web surveillance and physical-space surveillance. The web surveillance project is a series of chapters/vignettes in a VR game called Dark Patterns that I am building with Mani Nilchiani (who’s also my best friend, natch). We already made our first chapter. Another project has me going neighborhood by neighborhood in San Francisco, photographing all of the surveillance cameras I see, plotting their locations, makes, and models, and correlating that to the economic and racial demographics of each area. Then I want to run a neural net over the image and geographic-location data set to see what similarities pop up from an algorithmic perspective. Is it placement, is it location, is it the colors and kinds of cameras? I have a lot of questions on this subject: what does it mean to be tracked and watched in public spaces? I started my career off in photography, so the two projects above are close to my heart (our VR game is made almost entirely out of photographs). Additionally, for the surveillance SF project, photography taken from street level is necessary because I want to highlight what people see when walking, what surrounds us at the human level, as opposed to scraping Google Images for a bird’s-eye view of the locations.
And the final project (the one I’m most excited about) is working with Dan Phiffer (also currently at Eyebeam as an IMPACT resident) on data ownership in social media. We are calling the project “Small Data,” and it’s situated around the idea of equal ownership, of a co-op model, inside of social media. We are building a small series of prototypes on Raspberry Pis. So much of my personal research over the past few years has been around social media, design interventions on harassment, and studying human conversation in those spaces. What excites me about Small Data is the idea of building something really, really small: something not designed to be scaled up, that focuses on transparency and data ownership for users, by users. It’s like making a zine instead of trying to make an entire bookstore.
So for the next year, I’m going to be ‘tri-coastal’ (insert obvious joke about trying out all of the coasts) between the East, the West, and the Gulf. So, um, follow me on this weird journey! I have a TinyLetter. Keep up, reach out, I’m always looking for feedback.