Fil Menczer opened the day with his keynote “The Spread of Misinformation in Social Media”.
He showed very interesting network analysis approaches to detect misinformation and bots on Twitter. Not an easy task. “Not all Twitter bots are easy to detect, some bots behave as humans & some humans behave as bots!”
@JisunAn worked on how news consumption differs for people of different ages and genders (pdf). It turns out that male and female, young and senior folks are not simply different: they are predictably different!
@frostaf presented a post-hoc evaluation of a civic technology called e-democracy (pdf). It’s a platform for civic engagement. They studied which design decisions impacted the fraction of women and people of color who joined and participated. They found that, to increase digital inclusion, it was beneficial to design the forum around specific neighbourhoods rather than around the whole city.
@david__jurgens presented a study of Fitocracy, a workout-tracking app with social-networking functionalities (pdf). The main research question was how group memberships impact behaviour. By behaviour, they refer to: changing one’s workout behaviour; getting help from others; and engaging in the given behaviour for a long time. The study found that high-status individuals don’t benefit from group membership, and that new users have a hard time getting their questions answered. Therefore, designers should build new features with which groups could answer the questions of new users.
@tweeting_cris looked at neighborhood crime in NYC (pdf). She found that crime is related to urban volatility (as measured by the fraction of rented households). It would be very cool to measure a family of proxies for volatility from online data…
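A back-of-the-envelope version of that idea, with entirely made-up neighborhood numbers (not from the paper) and a plain Pearson correlation between a rented-households proxy and crime counts, could look like:

```python
import math

# Hypothetical data, for illustration only: fraction of rented households
# (a proxy for urban volatility) and crime counts per neighborhood.
rented_fraction = {"A": 0.72, "B": 0.55, "C": 0.31, "D": 0.64, "E": 0.20}
crime_counts = {"A": 210, "B": 150, "C": 60, "D": 180, "E": 40}

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

neighborhoods = sorted(rented_fraction)
r = pearson([rented_fraction[n] for n in neighborhoods],
            [crime_counts[n] for n in neighborhoods])
print(round(r, 2))
```

With real data one would of course control for population, income, and the like; this only shows the shape of the computation.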
Roberto Interdonato presented a way to model points-of-interest in a multilayer network.
The day opened with @ginasue, who discussed self-tracking technologies. She mentioned an interesting site that anyone can use to make sense of specific types of self-tracking data: http://www.makesenseofdata.com.
During the paper session, I really liked the work of Natalie Carlson (pdf): “Using 122 pitches from the TechCrunch Disrupt Startup Battlefield competition, I find that eventual funding amounts are significantly greater for those entrepreneurs who are perceived as more confident and less likable, and that these traits can be well modeled by features associated with the intensity (loudness) of their speech patterns.”
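The paper’s actual feature pipeline isn’t in my notes; as a toy illustration of the kind of intensity (loudness) feature involved, here is frame-level RMS energy computed over a synthetic signal whose amplitude doubles halfway through, standing in for a speaker getting louder (my own sketch, not Carlson’s method):

```python
import math

def frame_rms(samples, frame_len=400):
    """Root-mean-square energy over non-overlapping frames."""
    rms = []
    for start in range(0, len(samples) - frame_len + 1, frame_len):
        frame = samples[start:start + frame_len]
        rms.append(math.sqrt(sum(s * s for s in frame) / frame_len))
    return rms

# Synthetic 'speech': a 200 Hz tone at amplitude 0.2 for one second,
# then amplitude 0.4 for one second (sample rate 8000 Hz).
sr = 8000
quiet = [0.2 * math.sin(2 * math.pi * 200 * t / sr) for t in range(sr)]
loud = [0.4 * math.sin(2 * math.pi * 200 * t / sr) for t in range(sr)]
energy = frame_rms(quiet + loud)

# Mean RMS of the second half relative to the first half.
first, second = energy[:len(energy) // 2], energy[len(energy) // 2:]
ratio = (sum(second) / len(second)) / (sum(first) / len(first))
print(round(ratio, 2))
```

Doubling the amplitude doubles the RMS, so the printed ratio is close to 2; real loudness features would be computed over recorded audio with a library such as Praat or librosa.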
@mstrohm gave the closing keynote “Computational social science for WWW”.
He showed a simple network growth model that has an interesting predictive power.
After applying that model to the Wikipedia graph, Markus showed that “minorities have a lot to gain by integrating with the larger society, and a lot to lose if they form small, closed (homophilous) circles”.
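Markus’s exact model wasn’t in my notes; a minimal sketch in the same spirit, assuming preferential attachment with a tunable homophily parameter h (my own toy implementation, not the model from the talk), could look like:

```python
import random

def grow_network(n, minority_frac=0.2, h=0.8, seed=1):
    """Grow a network node by node. Each newcomer links to one existing node,
    chosen with probability proportional to its degree (preferential
    attachment) times h for same-group nodes and (1 - h) for cross-group
    nodes. Returns group labels and degrees."""
    rng = random.Random(seed)
    group, degree = [0, 1], [1, 1]  # two seed nodes, one per group
    for new in range(2, n):
        g = 1 if rng.random() < minority_frac else 0
        weights = [degree[v] * (h if group[v] == g else 1 - h)
                   for v in range(new)]
        target = rng.choices(range(new), weights=weights)[0]
        group.append(g)
        degree.append(1)
        degree[target] += 1
    return group, degree

def minority_degree_share(h):
    """Fraction of total degree held by the minority group."""
    group, degree = grow_network(2000, h=h)
    total = sum(degree)
    return sum(d for g, d in zip(group, degree) if g == 1) / total

# Compare a closed (homophilous) setting with an integrated one.
print(round(minority_degree_share(0.9), 2), round(minority_degree_share(0.1), 2))
```

In this toy model the minority’s share of the total degree grows as h drops below 0.5 (cross-group ties become more likely), which qualitatively matches the quote; the talk’s actual model and metric may well differ.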
@jengolbeck opened the floor with her keynote “Watching you: privacy & personalization”. To sum up her talk, there are three core issues:
Problem 1: Loss of control. You can’t tell what your digital traces reveal (e.g., your pix might reveal your sexual orientation, the pages you like might reveal your IQ). Plus, you can’t control what social-networking sites give away. A case in point is what Facebook secretly reveals about you (log in with your Facebook account on http://www.takethislollipop.com).
Problem 2: Legislation. There is a huge power imbalance between companies (those collecting data) and users.
Problem 3: Broken expectations. What people consider private or public does not always match what platforms actually do with their data. For example, the use of “relationship info” by algorithms is perceived to be a huge privacy violation.
Jen’s best quote: “The argument “privacy inhibits innovation” is bullshit!”
@GabrielMagno studied the YouTube channels that are dedicated to kids (pdf). Surprisingly, the ads on them are not only about toys but also about make-up products! Furthermore, a big proportion of commenters are less than 12 years old (age was inferred from YouTube profile pix), which is shocking given the age restriction that should be in place.
@miriam_fs presented her work on cops & social media (pdf). As opposed to the political arena (see Trump’s use of Twitter), cops need to be very, very careful not to break the trust relationship they have with the public. The work is criminal!
They say that, in a talk, you should tell people “what’s in it for them” within the first 90 seconds. @SagarJoglekar (a former intern at the Social Dynamics team at Nokia Bell Labs) showed that the “90-second rule” applies to user-generated videos too (pdf).
@liciacapra closed the day with her keynote on discrimination in sharing-economy platforms. Coordination in these platforms is peer-to-peer and, as such, trust in others is key. The main point is that there is a tension between trust and discrimination, i.e., between (sharing personal information for building) trust and (concealing it to counter) discrimination. Therefore, there is a need for new tools that allow people to: i) easily cope with discrimination (one could engineer some of the coping strategies people already use in the real world); and ii) be aware of their own discriminatory behaviour (“Unbiased self. How to become a better version of yourself.”).