I’m sure you’ve been here before: you get an email, and it’s a newsletter you don’t remember signing up for, about a product you don’t remember subscribing to updates about. You think, “hmm, well, I’ll just unsubscribe.” You scroll down to the bottom of the email, find the unsubscribe button, and get sent to a website where you click through a few pages that ask “are you sure you want to unsubscribe? …Are you sure? …Are you really, really sure?” Since you’re really, really sure, you click a button and boom: done. Unsubscribed. But then the next week the newsletter or product update reappears in your inbox. Didn’t you unsubscribe? …
By Caroline Sinders, design researcher and artist
In 2019, ProPublica reported that TurboTax suppressed its free version, tricking people into paying to file their taxes. That same year, Princeton University launched a project that discovered over a thousand instances of similar “tricks” implemented across e-commerce websites. These are what designer Harry Brignull calls “dark patterns,” a term he coined in 2010 to describe design choices that trick users into making decisions they normally wouldn’t make. Dark patterns can be purposeful or accidental, though it’s often hard to determine the intent behind them. For example, is it an intentional, privacy-suppressing dark pattern when social networks bury their security settings, or are designers and engineers simply unsure where to place those buttons so users can find them? Is it intentional that marketplaces like Amazon create price confusion, or is that, again, accidental bad design? …
I’m a design researcher studying trust patterns in design. Building on research around trust and our own findings from user interviews and surveys, my team is studying communications and apps that engender or weaponize trust in technology. Trust is complex — it’s contextual, contractual, and cultural — and everyone has a slightly different definition. Understanding trust is key to creating ethical, well-designed technology that works for the common good. Read on to learn more about trust patterns in design, and if you’re a designer, take our survey and contribute to the research!
“What do you think of when you hear the word ‘trust’ and how do you define ‘trust’?” …
What I have here is a form of open research: it’s in progress and still being developed from an ethnographic standpoint. It’s incomplete, but I wanted to publish it to show progress. Please check back for frequent updates (for example, I am still adding entries for 2017). If you would like to contribute, feel free to submit here (this link will only be open for two weeks).
I think there’s never been a better time to talk about, and recount, the history of online harassment campaigns. …
Honestly, just some links and titles of what Angelina Fabbro and I have been reading and are trying to read in the next few months.
It’s a weird thing to write: when you’ve left your first dream job (IBM Watson) for a dream opportunity (a residency with BuzzFeed and Eyebeam) and now… you’re tacking on another dream job, how many dreams can you dream? Apparently, quite a few. I’m incredibly excited to announce that starting in April I will be a researcher and designer focused on online harassment at Wikimedia. And I’ll be working with a team using machine learning. Does this… sound familiar? It sounds eerily like my fellowship proposal, doesn’t it? ;)
Okay, it’s not a weird thing to write; it’s an extremely amazing, fantastic, and humbling opportunity. What I had proposed to research at BuzzFeed and Eyebeam, using machine learning to study, analyze, and potentially mitigate online harassment, is being put into practice. It’s going to be realized on a large scale, at Wikimedia, for Wikipedia. …
Google’s Jigsaw group today launched Perspective API, their API for creating better conversations. Their first rollout, their first machine learning model, is meant to rate conversations as ‘toxic’ or not. I’m flattening what the API could do, but Google describes it as: “…The API uses machine learning models to score the perceived impact a comment might have on a conversation. Developers and publishers can use this score to give realtime feedback to commenters or help moderators do their job, or allow readers to more easily find relevant information, as illustrated in two experiments below...” Effectively: can you alleviate moderators’ workflow when they look at traumatic content, and can you help surface good content? The first is easier to do, and spaces like the Coral Project are creating products and software to mitigate harassment in commenting sections, and they are doing it well. …
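For a sense of what that scoring looks like in practice, here’s a minimal sketch of calling the Comment Analyzer endpoint Perspective exposes. It assumes you’ve registered for an API key with Google; the v1alpha1 endpoint and the TOXICITY attribute reflect the API around launch and may have changed since.

```python
import requests

API_KEY = "YOUR_API_KEY"  # assumption: you've requested Perspective API access
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       "comments:analyze?key=" + API_KEY)

def toxicity_score(text):
    """Ask Perspective to score one comment's perceived toxicity (0.0 to 1.0)."""
    payload = {
        "comment": {"text": text},
        "languages": ["en"],
        # TOXICITY was the launch-era attribute; others were added later.
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = requests.post(URL, json=payload)
    response.raise_for_status()
    body = response.json()
    return body["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

if __name__ == "__main__":
    print(toxicity_score("You make a fair point, thanks for explaining."))
    print(toxicity_score("Nobody wants to read your garbage."))
```

A moderation tool could then sort or flag comments above some threshold, which is the “alleviate moderators’ workflow” use described above; surfacing good content would mean trusting the low end of the same score, which is the harder claim.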
I found out on August 15th that I won a fellowship with BuzzFeed and Eyebeam as an Open Lab Fellow and an Eyebeam project resident. What does that mean? The founder of BuzzFeed, Jonah Peretti, used to work at Eyebeam and ran their R&D arm; he also ran a fellowship there called the “Open Labs.” In 2005 or 2006, Peretti left Eyebeam to start BuzzFeed, and the Open Lab disappeared. Two years ago, it was restarted at BuzzFeed, and Eyebeam was brought on to guide its facilitation. Eyebeam gets to pick one fellow from that group, and I am that fellow. On the Eyebeam side, I am considered a Project Resident, i.e. …
Last Friday, at NYU’s Skirball Center, the White House hosted a symposium on artificial intelligence, ethics, health, and machine learning, led by Kate Crawford, a principal researcher at Microsoft Research, and Meredith Whittaker, lead of Google’s Open Source Research Group. The daytime events (invitation only) consisted of lightning talks from researchers at IBM Watson and Microsoft, policy makers, lawyers, artists, and data visualizers such as Jer Thorp (blprnt). It was an incredibly diverse crowd, across careers, genders, and races, and that was something the organizers had intended and carefully curated. To create and germinate better discussions around AI, and to make better artificial intelligence, the group had better be diverse, and AI Now more than succeeded at that. The day broke off into whiteboarding and post-it-note sessions and culminated in two later sessions open to the public, featuring everyone from White House tech liaisons to the head of Google DeepMind, well-known academics, and Intel’s Genevieve Bell. But the provocation throughout the day seemed to be: what do ethics in big data and algorithms mean? …
Today Twitter announced they would be removing @s from the character count of Tweets, so mentioning a person will take up less space. If a user wants to share a reply with all of their followers, they simply have to retweet their own tweet. From Twitter’s own blog post: “These changes will allow for richer public conversations that are easier to follow on Twitter, and ensure people can attach extra elements, media, and content to Tweets without sacrificing the characters they have to share their view.” From what I have inferred: @-mentioned names will autopopulate, up to 50 people can be added from those a user is following, and media added to the tweet (URLs, images, gifs, etc.) will not count as additional characters. So tweeting is still just 140 characters, but you can now “do more” with it. …
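To make the new arithmetic concrete, here’s a rough sketch of the counting rule as described in the announcement; the function name and mechanics are my own illustrative assumptions, not Twitter’s actual implementation.

```python
TWEET_LIMIT = 140
MAX_MENTIONS = 50  # per the announcement, replies autopopulate up to 50 people

def remaining_characters(body, reply_mentions=(), media=()):
    """Estimate characters left under the announced rules (a sketch only).

    - Reply @-mentions no longer count against the limit.
    - Attached media (URLs, images, gifs, etc.) no longer count either.
    - Only the message body itself consumes the 140 characters.
    """
    if len(reply_mentions) > MAX_MENTIONS:
        raise ValueError("replies autopopulate at most 50 @-mentions")
    # Mentions and media now live outside the body, so only the body counts.
    return TWEET_LIMIT - len(body)

# The same message costs the same 13 characters no matter how many people
# are @-mentioned or how much media is attached.
print(remaining_characters("Great thread!",
                           reply_mentions=["@alice", "@bob"],
                           media=["one_image.gif"]))  # -> 127
```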