Facebook and the Problem of “Privacy” in the Social Media Age
What Facebook’s F8 rollouts mean for their data and privacy problem.
Privacy, Data and Misinformation
It’s clear that Facebook has a public opinion mountain to climb, and this week we saw them reach the staging ground at F8, their annual developer and entrepreneur conference. Facebook’s “new” focus on privacy and ethical policing of content is a salvo packed with intent, but only time will tell how effective it will actually be.
When it comes to people understanding the larger implications of their personal data and privacy, it’s hard to put the genie back in the bottle. Today’s users know enough to have concerns about their privacy, even if they don’t know what “privacy” really means in the context of a social (or any digital) platform. When it comes to privacy, we generally know that each new approach will work until it doesn’t. Even after the rollout of the new features presented at F8, Facebook will be back to square one when the next privacy scandal or “tainted” election inevitably hits. Still, many outside observers will point to the new visual styling and interaction models and say that these will determine the success of the initiative. They won’t. Facebook’s success will depend on how well they incorporate an iterative design process that allows them to prepare for the next breach. If the spread of harmful misinformation is their biggest problem now, they will need to consistently create “antibodies” for the new super-groups of hate and misinformation that will develop immunities to whatever censoring model they come up with next.
At frog, we grapple with these very questions of privacy and ethics in many of the challenges our clients bring to our door. While I can say that every designer I’ve worked with is well-intentioned, it takes maturity and foresight to ethically chart the arc our design decisions trace beyond the initial concept and delivery phases. For example, when working with the UN Office for the Coordination of Humanitarian Affairs (OCHA) on their data platform, we debated questions not just of data security, but of data ethics: Does the good that comes from collecting certain data points about a vulnerable refugee population outweigh the harm that can occur if that data lands in the wrong hands? How can we keep the good while mitigating the risk? These are hard questions to answer, and even harder when you are a publicly traded company. So while the larger issue of data and privacy remains a challenge for Facebook, it is also an opportunity to take a stand and begin to define the grey area of “public” vs. “private” data for its global users. Is it time for Facebook to intentionally shape the definition of privacy for everyone in the social network space?
With the latest redesign, Facebook is attempting to combat the hijacking of people’s newsfeeds by misinformation, announcing a move away from the town square sharing model toward a more “private,” groups-based model. In theory, this new interaction model will allow Facebook to isolate groups of bad actors rather than censoring grandma for sharing a Russian Trojan horse. But these groups are still made up of humans, and humans, whether intentionally or not, always have the ability to become bad actors. Facebook may try to dampen the wildfire of misinformation that ripped through newsfeeds across the country, but it’s a much harder task to help people break out of their own ideological bubbles. Groups may simply fuel more intense, if contained, fires. And while this new approach frees Facebook from the responsibility of acting as the “content police,” it relies on users to self-regulate, which we know is no easy task. How do you get to the root of the problem when we as humans are the problem? We believe it’s the responsibility of designers to design for better human interactions, not just better product or service reactions.
Integration and Fragmentation
One part of this week’s announcement concerns the user experience of integrating all the messaging services under the Facebook corporate umbrella. While we can expect greater interoperability between these apps in the near future, it raises some interesting product questions. Is Facebook aiming to eventually consolidate platforms such as Instagram and WhatsApp? Or will interoperability let Facebook’s design teams focus further on each platform’s unique value proposition, resulting in a greater divergence between them in both feature set and user interface? My hunch is the former: a consolidated system will allow Facebook to optimize its delivery and overall product brand by reducing fragmentation. That would be more in line with what we see in the broader tech industry today, and with our own clients day to day. From a user experience standpoint, it’s a win: you won’t have to choose a specific product to reach your entire social network. So far, it’s a privacy win as well, given that Facebook is promising end-to-end encryption across all of its messaging platforms. But while this sounds good at the product level, Facebook has yet to define more specifically what “privacy” means, in part because data privacy is hard to define in a world where data is monetized. Encrypting the content of our messages doesn’t mean encrypting the residual data points around them: who we’re messaging, from where, and at what times.
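To make that distinction concrete, here is a minimal, hypothetical sketch of what a “private” message envelope can look like on the wire. The field names and the use of Python’s third-party cryptography package are my own illustration, not Facebook’s actual design: the payload is end-to-end encrypted, but the routing metadata a server needs to deliver the message stays readable.

```python
# A minimal, hypothetical sketch of a "private" message envelope.
# The payload is encrypted, but the routing metadata the platform
# needs to deliver the message stays readable to the server.
# Requires the third-party package: pip install cryptography

import json
import time
from cryptography.fernet import Fernet

# In a real end-to-end scheme, only the recipients hold the key;
# Fernet stands in here purely to make the example runnable.
key = Fernet.generate_key()
cipher = Fernet(key)

envelope = {
    # Encrypted: the platform cannot read what was said...
    "payload": cipher.encrypt(b"See you at the rally at 6pm.").decode(),
    # ...but these plaintext fields are still needed for delivery,
    # and they are exactly the "residual data points" in question.
    "sender_id": "user_1842",
    "recipient_id": "user_9377",
    "timestamp": time.time(),
    "client_geo_hint": "US-NY",
}

print(json.dumps(envelope, indent=2))
```

Even with the payload unreadable, the plaintext fields still reveal the social graph and activity patterns, which is precisely the kind of data an advertising business can monetize.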
As Facebook works to reconcile the privacy concerns of their users with a business model based on monetizing that data for advertisers, it will be imperative for them to define privacy in terms of human experiences and behaviors, rather than simply rolling out band-aid fixes that help the platform react to those experiences and behaviors.
Henry Martes is an Associate Creative Director at frog, focused on digital product design and interaction design.