Behavior and Self-Concept as Mediated by Social Media

The other day I received a curious notification on Facebook:

Someone reacted to the fact that I’m “interested” in some event.

I hadn’t concretely done anything aside from declare my interest. Or has declaring interest become enough of a performative act that it’s up for scrutiny and reaction by my Facebook friends? This puzzled me for quite a while — why should I be rewarded for intent, before even following through on the action? One of the lessons I learned throughout elementary school was that making promises wasn’t sufficient; we had to follow through on them. Here, though, my intention has presumably been made public on my friend Neelima’s news feed, constructing an imagined reality in her world in which I will be going to the event. Does my believing that she believes I will be going to the event change my behavior?

NYU psychologist Peter Gollwitzer has found that when we tell other people our intentions, it gives us a “premature sense of completeness” that lowers the likelihood of our actually following through on the intended activity. The implications of his research bear primarily on motivation: if we want to achieve our goals, we should keep them private. That is, don’t tell your friends you’re going to the gym next time you’re planning to go — tell them afterwards.

It is problematic, then, that a website millions of users log onto every day has systematically put a premature reward mechanism in place, one that no doubt leads to decreased follow-through. In this post, we will consider the ways in which Facebook can mediate our behavior and self-concept in unexpected ways.


In the time that I’ve used the social networking site, Facebook has changed dramatically: from a site on which friends choose to share various pieces of information with their other friends, to one where every action you take (or don’t take) on the site is effectively scrutinized both by Facebook as a product and by your friends. As above, people can now react to your future behavior, and even see the ways in which you have reacted to content that is not your own. On the other side of the coin, announcing via notifications and the news feed that certain people will be attending events in real life triggers a potentially unhealthy FOMO in the viewer.

At every step of this reciprocal process, Facebook and its algorithms have made decisions about what kinds of events to bubble up into my feed and notification list, and these invisible decisions matter more than users may realize.

Suppose the Facebook algorithm started suggesting I attend protests for some movement of which I was initially unaware (perhaps increasing the viewed frequency of said protests by even just 10%). Facebook could, in theory, subtly push whatever agenda it wanted on its users without the users knowing. Even if this weren’t intentional, and the end result of some benign algorithm were that some movement was made more visible, what are the ethical implications for the software engineers who wrote the data processing pipelines for the machine learning algorithms that produced those decisions? There’s huge potential for gaslighting here that needs to be carefully considered, since every user must inherently just trust Facebook and its employees.
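To make the mechanics concrete, here is a minimal sketch of how a small multiplicative boost can reorder a feed. This is not Facebook’s actual ranking code; the story fields, scores, and the 10% figure are all invented for illustration:

```python
def rank_feed(stories, boosted_topic=None, boost=1.10):
    """Sort stories by score, optionally boosting one topic by 10%."""
    def score(story):
        s = story["base_score"]  # stand-in for an engagement-derived score
        if story["topic"] == boosted_topic:
            s *= boost  # the invisible thumb on the scale
        return s
    return sorted(stories, key=score, reverse=True)

stories = [
    {"id": "cat_video", "topic": "pets", "base_score": 0.50},
    {"id": "protest_event", "topic": "movement_x", "base_score": 0.48},
]

print([s["id"] for s in rank_feed(stories)])
# ['cat_video', 'protest_event']
print([s["id"] for s in rank_feed(stories, boosted_topic="movement_x")])
# ['protest_event', 'cat_video']
```

A marginal, invisible re-weighting is enough to change what a user sees first, and nothing on the surface of the feed betrays that the choice was made.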


One distinction in content curation I’ve noticed recently is that once you decide on a content source, the content within that source is either curated entirely for you by the source, or you pick and choose what you’d like to see. For example, on Hacker News, all of the content appears in one feed, and there’s no choice involved. On Reddit, on the other hand, users are able to selectively follow subreddits and curate their own experience of the site.
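The difference is easy to see in code. A toy contrast, with invented post fields and scores, between the two models:

```python
def global_feed(posts):
    # Hacker News model: one ranked list, identical for every visitor.
    return sorted(posts, key=lambda p: p["points"], reverse=True)

def subscribed_feed(posts, subscriptions):
    # Reddit model: the user narrows the pool before ranking happens.
    chosen = [p for p in posts if p["community"] in subscriptions]
    return sorted(chosen, key=lambda p: p["points"], reverse=True)

posts = [
    {"title": "post_a", "community": "programming", "points": 912},
    {"title": "post_b", "community": "baking", "points": 640},
]

print([p["title"] for p in global_feed(posts)])                  # both posts
print([p["title"] for p in subscribed_feed(posts, {"baking"})])  # only post_b
```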

When users are able to selectively choose the kinds of content they will see, they are at risk of shutting themselves into an echo chamber. Perhaps, in some paternalistic way, this should be avoided; but in the purest sense, the users are getting exactly what they asked for.

However, when the control is entirely in the hands of the application or website, it’s important to consider how that might manifest. For example, bing.com presents search terms at the bottom of its main news page, as if declaring that these are news articles ‘worth viewing’ — the danger here is that there’s no promise that they are unbiased or actually ‘worthy,’ aside from their being “Popular now.” In a similar way, Facebook presents information to us via the news feed and notifications list, except here in a highly personalized fashion. Should we assume that, because Facebook has so much information about who we are and what kind of content we like, these factors will be taken into account to create a ‘good’ feed? And how would we define ‘good’ in this context? Based on user happiness? How long the user stays on Facebook?
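Whoever writes the ranking function answers that question implicitly. Here is a hypothetical scorer (every field and number is invented) showing how the platform’s choice of objective silently defines what a ‘good’ feed means:

```python
def feed_score(story, objective="time_on_site"):
    # Every field is an invented stand-in for a signal a platform might model.
    if objective == "time_on_site":
        return story["affinity"] * story["expected_dwell_seconds"]
    if objective == "user_happiness":
        return story["topic_fit"] * story["predicted_mood_lift"]
    raise ValueError(f"unknown objective: {objective!r}")

stories = [
    {"id": "outrage_thread", "affinity": 0.9, "expected_dwell_seconds": 45,
     "topic_fit": 0.2, "predicted_mood_lift": -0.3},
    {"id": "friend_photos", "affinity": 0.6, "expected_dwell_seconds": 10,
     "topic_fit": 0.8, "predicted_mood_lift": 0.7},
]

for objective in ("time_on_site", "user_happiness"):
    best = max(stories, key=lambda s: feed_score(s, objective))
    print(objective, "->", best["id"])
# time_on_site -> outrage_thread
# user_happiness -> friend_photos
```

The same two stories rank differently under the two objectives, and the user never sees which definition of ‘good’ was chosen on their behalf.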

When we relinquish control to computers — what is it that we expect them to do for us?


The sheer amount of information Facebook and other applications can gather on us is unfathomable to the casual user. There’s lots of potential to use this information in meaningful ways that users want. However, there’s also information that users simply don’t need or want to know, and that should remain hidden.

Two concrete examples of companies backtracking on user data sharing come from Facebook Events and Snapchat’s “Best Friends” list. When looking through a Facebook event’s guest list, it used to be plainly visible who invited whom; this feature has since been quietly removed. Snapchat, similarly, used to have a “Best Friends” list, which made it plainly visible whom people were exchanging the most snaps with, just by looking at their profile. Snapchat removed that feature earlier this year.

In both cases, these seem to be reversals based on the companies realizing that people were either abusing, or at the very least overthinking, the information that was presented. Most people don’t really want to know (or at least be told by Facebook), when they are invited to an event, whether they were just one of 100 people their Facebook friend invited, or whether it was a more personal invitation extended to only 10 or so. How might knowing this change my relationship with the inviter, either way? Do I really want to know? Even a user who wasn’t invited to the event at all could map out relationships by tracing the inviter/invitee links in a nearly pathological way. In a similar way, Snapchat’s Best Friends list exposed aspects of people’s personal relationships via their snap frequency that they potentially did not want revealed.
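To see why that visibility was risky, consider how little work the tracing takes. A minimal sketch, with invented names, of reconstructing a social graph from visible invitation pairs:

```python
from collections import defaultdict

# Hypothetical (inviter, invitee) pairs read off public event guest lists,
# back when Facebook displayed who invited whom.
invites = [
    ("alice", "bob"),
    ("alice", "carol"),
    ("dave", "alice"),
    ("dave", "carol"),
]

graph = defaultdict(set)
for inviter, invitee in invites:
    graph[inviter].add(invitee)

# A handful of events is enough to chart who is socially close to whom,
# without anyone having consented to that mapping.
for person, circle in sorted(graph.items()):
    print(person, "->", sorted(circle))
# alice -> ['bob', 'carol']
# dave -> ['alice', 'carol']
```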

When building these features, the people behind them have a responsibility to use the vast amount of information available carefully, in a way that will not reveal more about users than they were expecting or intending. Otherwise, they risk deteriorating real human relationships — putting perceived friendships on the line, for example.

Based on the mountain of information on who you are, what you like, and who you communicate with, Facebook has built up a model of exactly who you are and what kinds of content you engage with most. Presumably, it knows what kinds of advertising are most effective on you, and how it can most effectively profit from your presence on the site. Facebook came under fire a few years ago for experimentally manipulating users’ emotions, and it’s not unreasonable to think it could modify users’ perceptions of self via the content they see and are rewarded for by other users, in addition to the subtle ways discussed above. If Facebook decided to show me my friends posting Medium articles frequently (perhaps they were before, but the algorithms had decided not to display them until now), would that nudge me to do the same, taking on a new behavior to “keep up with the Joneses”? And if Facebook’s algorithms chose to show photos of me engaging in activity A more frequently than photos of me engaging in activity B — leading to more likes and Facebook reactions for A — might I be encouraged to do more of A than B, regardless of my personal taste?
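A back-of-the-envelope simulation makes the worry concrete. Every number here is invented; the point is only that unequal exposure alone, with no difference in how much friends actually like A or B, is enough to tilt the reward signal:

```python
import random

# A toy Skinner-style loop: the feed shows activity A four times as
# often as B, friends like A and B posts at the same rate, and each
# like slightly reinforces the user's future behavior.
pref = {"A": 0.5, "B": 0.5}        # the user's initial, equal taste
show_rate_a = 0.8                  # the algorithm's invisible choice

random.seed(0)
for _ in range(1000):
    shown = "A" if random.random() < show_rate_a else "B"
    if random.random() < 0.5:      # a friend likes the shown post
        pref[shown] += 0.01        # the like nudges the user toward it

print(pref)  # roughly {'A': 4.5, 'B': 1.5}: taste drifts toward A
```

After a thousand rounds, A has accumulated roughly four times the reinforcement of B, purely because the algorithm showed it more.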

Admittedly, these considerations are a bit Skinnerian, but even small effects on a page like Facebook matter because of the breadth of its impact.


These considerations are not limited to Facebook. Any application to which users willingly provide information about themselves in exchange for a curated experience — including YouTube, Twitter, and Reddit — should be scrutinized for the extent and manner in which it uses a user’s information both for and against them. The developers of these features need to keep checks and balances on the ways in which this information is used. And users themselves should keep in mind the potential ways in which websites can subtly influence them to behave or think in ways they would not have otherwise.