Algorithmic Credibility: The Hidden Attraction of Technology — Full Paper

Anselmo Lucio
Custom Communication
Nov 17, 2021

Photo: Player.One

This is the English version of my article, published at the XII International Conference on Online Journalism (Ciberpebi 2020) on ‘Disinformation and credibility in the digital ecosystem’. Available in Spanish from page 364 of the proceedings: https://www.ehu.eus/documents/3399833/0/CiberpebiXII.pdf

Abstract

Algorithmic credibility would be that perceived in digital environments where algorithms decide behind the scenes what is relevant to the user. The concept of ‘algorithmic credibility’ is very recent: it is mentioned by IBM researcher Christine T. Wolf in a 2016 report on ‘algorithmic life’, a research approach that considers “that encounters with algorithmic systems can be conceptualized as lived.” Wolf proposes taking practices as the central unit of analysis in order to better understand algorithms as they are found in everyday life. Research on algorithmic credibility is framed within the fields of ‘human-computer interaction’ and ‘user experience’. Seven perceptions have been identified in interactions on Facebook that can generate credibility induced by the algorithms of its News Feed, the source that delivers content and advertisements to the user: relevance, trust, transparency, personalization, affinity, verisimilitude and network effect. Unfortunately, algorithm-mediated credibility serves to drive both true and false information, as shown by the abundance of misinformation on social media, platforms and other websites.

1. Introduction

The role of algorithms in the ‘user experience’ (UX) has attracted the attention of researchers in the field of ‘human-computer interaction’ (HCI) over the last two decades. As early as 1999, Brian Jeffrey Fogg and Hsiang Tseng observed how findings on the credibility of human interactions could be applied to human-machine relationships.

Christine T. Wolf, of IBM Research, has coined the concept of ‘algorithmic life’ to express “that encounters with algorithmic systems can be conceptualized as lived.” In turn, ‘algorithmic credibility’ would be framed within the ‘algorithmic encounter’, for whose study Wolf proposes taking practices as the central unit of analysis in order to better understand algorithms as they are found in everyday life.

This practice-based approach makes it possible to identify patterns of credibility originating in algorithms, as well as their characteristics. For example, the Facebook News Feed (the content each user sees) and Google search results are the product of algorithms that perform the same function as journalists: selecting content that is relevant to the user and determining the order in which it is received. Relevance, precisely, is a clear credibility factor.

In online communication, social networks and platforms of all kinds use algorithms and artificial intelligence to attract attention by exploiting the best attributes of credibility, from the recommendations of loved ones to the usefulness and free availability of the many applications that make life easier. But these arts of seduction go unnoticed by most citizens when they go online.

2. Background

2.1. User experience

When people use digital technology, they use devices equipped with interfaces through which they communicate, for example to check an email that has just arrived on their mobile phone or to surf the internet on a computer. “An interface is a contact surface that reflects the physical properties of those who interact, the functions to be performed and the balance of power and control” (Lorés et al., 2002, p. 4).

The discipline that studies interfaces is Human-Computer Interaction (HCI) or Computer-Human Interaction (CHI). Thanks to advances in HCI, the web and applications now work with windows, hypertext, mouse, spreadsheets or gesture recognition, among many other functionalities.

A user interface must be easy to handle and easy to learn, which is technically called ‘usability’. This concept was associated with professional tasks and particular uses of devices, from computers to cameras, but with the development of the web, video games, music applications and video platforms, human-computer interaction has become something much richer, more varied and more fun than mere usability, and this has come to be called ‘user experience’ (UX).

The UX concept, attributed to Donald Norman (1995) (Serrano, 2018), goes beyond usability and also takes into account aspects such as entertainment, conversation on social networks, games and artistic creativity; it also considers users’ subjective reactions to digital systems, their perceptions and interactions, and is particularly concerned with the positive aspects of interaction with machines and how to enhance them, be it participation, curiosity or fun (Petrie et al., 2009, p. 4).

The move from usability to user experience is the result of the evolution from more or less simple algorithms to Artificial Intelligence (AI): complex systems of algorithms capable of offering Internet users greater satisfaction, for example through product recommendations, spam filtering or personalized search results.

2.2. Algorithms and Artificial Intelligence

Research on human-computer interaction (HCI) has run parallel to studies on ‘web credibility’ since the 1990s, especially since the publications of Brian Jeffrey Fogg and Hsiang Tseng on the subject, among them the paper The Elements of Computer Credibility (Fogg et al., 1999), presented at the CHI conference held in Pittsburgh (US). Fogg and Tseng examined how findings on credibility in human interactions could be applied to human-machine relationships.

However, it was not until the last decade that academics focused on the role of algorithms and their underlying effects on Internet users and users of internet-connected devices. Lluïsa Llamero mentioned the effects of “technological mediation” on personal credibility at the V International Congress on Cyberjournalism and Web 2.0 in Bilbao in 2013:

There are authors who warn of the influence of technical factors that go unnoticed by most users. A paradigmatic example would be spam filters which, based on algorithms incomprehensible to end users, prevent many emails from ever being seen (Llamero, 2013, p. 405).

The algorithms that mediate social communication, electronic commerce and mobile applications of all kinds are a new “communication technology”, according to Tarleton Gillespie in his article The Relevance of Algorithms (Gillespie, 2014, p. 169). In essence, an algorithm is software (programming code) with instructions to perform a task or solve certain problems. This software receives input data that it converts into different output data. Normally, users only know the output data, or the result in the form of an action, which is why algorithms are seen as “black boxes”.
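To make this input/output idea concrete, here is a minimal, purely illustrative sketch in Python; the scoring rule is an invented assumption, not any real platform’s formula. A function receives data about a user and a pool of items and returns an ordered list, and the user only ever sees that ordered output, never the rule that produced it.

```python
# Illustrative toy only: a "black box" that turns input data into ordered output.
# The scoring rule below is an assumption for demonstration, not a real platform's formula.

def rank_items(user_interests, items):
    """Return items ordered by a hidden relevance score."""
    def score(item):
        # Count how many of the item's tags overlap with the user's declared interests.
        return len(set(item["tags"]) & set(user_interests))
    return sorted(items, key=score, reverse=True)

items = [
    {"id": 1, "tags": ["football", "madrid"]},
    {"id": 2, "tags": ["cooking", "vegan"]},
    {"id": 3, "tags": ["football", "vegan"]},
]

# The user only sees the result (the order), never the scoring rule itself.
print(rank_items(["vegan", "cooking"], items))  # items 2 and 3 come first
```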

To work, algorithms feed on the data that users themselves leave, more or less unconsciously, on devices and elsewhere in cyberspace while browsing the internet, as well as on other public or private information that often comes from the offline world.

The main characteristic of artificial intelligence is that it makes it possible to make predictions and anticipate users’ wishes, and it provides countless utilities that make life easier, such as knowing in real time which route is least congested by traffic or, in the media, which of several headlines is most popular.

2.3. Computational disinformation

Before continuing, it is important to point out that the terms commonly used to refer to digital communication, such as algorithms, artificial intelligence, the cloud, the network or cyberspace, are metaphors for a reality that is too complex to describe in a paragraph of text, with many peculiarities, and entirely conditioned by the social and cultural context. Likewise, abstract terms such as credibility, reputation or trust are also metaphors that must be confined to specific cases to reveal their true meaning.

Another metaphor is the term ‘fake news’, because it can refer to at least seven different kinds of things: false connection (when headlines or images do not support the content), false context (genuine content with false contextual information), manipulated content (real information manipulated to deceive), satire or parody (not intended to harm), misleading content (deceptive, simulated or fraudulent use of information), impostor content (when the source is impersonated) and fabricated content (totally false, designed to deceive and harm) (Mezei et al., 2020).

Although false news has always existed, and traditional media have at times caused great disasters and even wars by publishing tremendous lies (for example, Hearst’s newspapers in the 1898 war between the United States and Spain over Cuba, or Saddam Hussein’s never-found “weapons of mass destruction” (Hein, 2018) in the 2003 Iraq war), what is happening on the internet today with ‘computational disinformation’ is on an incomparably larger scale.

The global scale of algorithmic technology, coupled with the propagation speed of the internet itself, has multiplied the damage that disinformation does to societies, polarizing and confusing citizens, who often no longer know how to distinguish between true and false information.

Social networks, search engines and mobile applications use information-filtering algorithms to personalize the content users receive, the consumption recommendations they make and the ads they place, including much false news inserted as paid content targeted at segments of the Internet population on Facebook, in Google search results or on Twitter. Although these platforms have made an effort to control political propaganda since the 2016 US presidential election and the Brexit campaign in the UK, the results are still not satisfactory, and there are many states in Asia, Africa and the Americas where algorithm-mediated manipulation continues to run rampant, as in Brazil, for example.

In 2020, Facebook tried to counter electoral disinformation targeting Tunisia, Togo, Ivory Coast and seven other African countries, but accidentally deleted the accounts of dozens of Tunisian journalists and activists, some of whom had already used the platform during the 2011 ‘Arab Spring’. While some of those accounts were restored, others remain closed, according to the Electronic Frontier Foundation (EFF).

One of the fundamental causes of the spread of misinformation online is the algorithms that decide what content users see and when, so platforms “must start by empowering users with more individualized tools that allow them to understand and control the information they see”, argues the EFF, which believes that platforms should open their application programming interfaces (APIs) so that users can create filtering rules for their own algorithms. “The media, educational institutions, community groups and individuals should be able to create their own feed”, says the digital rights organization (McSherry, 2020).

Emily Sharpe, Policy Director at the Web Foundation, called on Facebook in January 2020 to suspend the micro-targeting of political ads, as its direct competitors had already done: Twitter banned all political ads, and Google now only allows basic demographic targeting in political advertising. “It is extremely difficult to challenge or verify claims made in ads that can only be seen by a small handful of people”, the Web Foundation warned, noting that the 2016 Trump campaign served 5.9 million ad variations in just six months (Sharpe, 2020).

2.4. Algorithmic credibility

It has become clear that with the advent of algorithms and artificial intelligence, media technology has ceased to be a constant and has become a variable, “in fact, a set of variables” (Sundar, 2020, p. 6). Those who control the algorithms change them constantly, and the message differs depending on who the recipient is, because of the personalization of content and advertisements. Not only are the algorithms ‘black boxes’; it is also impossible to know what message each recipient receives at any given moment, except for those who control the website or platform in question. Faced with so many unknowns in digital communication, almost the only thing that can be studied independently is the user experience.

IBM Research scientist Christine T. Wolf has coined the concept of ‘algorithmic life’ to express “that encounters with algorithmic systems can be conceptualized as lived.” ‘Algorithmic credibility’ would be framed within these ‘algorithmic encounters’, for whose study Wolf proposes taking practices as the central unit of analysis in order to better understand algorithms as they are found in everyday life.

Pierre Bourdieu’s Theory of Practice, on which Wolf relies, is a sociological approach that sees the creation of meaning as an interactive process, a “habitus”, in such a way that people’s previous experiences with algorithmic systems and their daily encounters with them would provide them with a vision of their own.

Instead of a conceptualization of algorithms as stable objects, delimited within specific platforms, this approach offers insight into how the algorithmic encounter is lived in practice. Such insight helps us interrogate issues of algorithmic trust and reliability, the subject of this workshop, prompting consideration of the ways in which the notions of algorithmic credibility and trust are relational, that is, how trust is influenced by individuals’ technological habits or existing frames of reference (Wolf, 2016, p. 2).

This reference by Wolf to the concept of ‘algorithmic credibility’ is the first I have come across in the academic literature in English and Spanish. Other concepts that have emerged since 2016, in addition to ‘algorithmic life’, are those of ‘algorithmic imaginary’, ‘algorithmic practices’, ‘algorithmic tracing’ and ‘algorithmic experience’.

Taina Bucher defines the ‘algorithmic imaginary’ as “the ways of thinking about what algorithms are, what they should be, and how they work” (Bucher, 2016, p. 30). Studying the case of Facebook, she identified various ways in which users react to its algorithmic behavior. Bucher thus cataloged the feeling of being classified or profiled, as when a middle-aged woman is bombarded with weight-loss advertising (“profiled identity”); the situation in which people feel that the system has “found” them, such as when a user is drinking coffee and an advertisement for a coffee brand appears on Facebook (“amazing moments”); the feeling that the algorithm is wrong (“faulty prediction”); the belief that users do not get enough ‘likes’ or ‘shares’ because of the algorithms (“popularity game”); the feeling that the algorithms are insensitive, for example when reminded of the birthday of a deceased relative (“cruel connections”); and the perception of losing control of relationships with friends because the algorithm shows the posts of some friends and not others (“friendships ruined”).

For her part, Angèle Christin analyzed how algorithms are used in two different fields: web journalism and criminal justice. In journalism, real-time analytics programs like Chartbeat, used by more than 80 percent of web publishers in the United States, provide detailed data on online reader behavior and make recommendations about when to promote articles. In 2017, in US criminal justice, there were more than 60 instruments for predicting the chances of recidivism of defendants or convicts.

This comparative approach aims to study the uses and processes of meaning creation surrounding algorithmic tools in life and work environments. Despite the many differences between the two domains, the work revealed that in both web newsrooms and criminal courts there are discrepancies between what managers promise about algorithms and how workers actually use them (Christin, 2017, p. 2).

In the field of ‘human-computer interaction’, Oscar Alvarado and Annika Waern coined the concept of ‘algorithmic experience’ (AX) in 2018 “as an analytical tool to address a user-centered perspective on algorithms, how users perceive them and how to design better experiences with them” (Alvarado et al., 2018, p. 1).

A recent study has found that a YouTube user’s viewing history substantially affects the algorithm that recommends new videos, and that pseudoscientific content is more likely to appear in search results than in other parts of the platform, such as the recommendation engine or the user’s home page (Papadamou et al., 2020).

On the other hand, Benjamin N. Jacobsen has developed the concept of ‘algorithmic tracing’: the narratives that artificial intelligence can generate about individuals, for example by editing photos and videos to create a personal story. His research focused on the Apple Memories feature of the iPhone to “analyze the ways in which people’s lives are made sequential, orderly, and ultimately meaningful and actionable through algorithmic processes” (Jacobsen, 2020).

It is also possible to study users’ experience with algorithms using automated techniques, for example browser extensions that copy the content or advertisements that a social network or website shows on screen (‘scraping’). The Markup, a site specializing in the social impact of technology, announced the Citizen Browser Project in October 2020 to study how disinformation reaches users through YouTube and the Facebook News Feed. A total of 1,200 US Internet users were hired to install a browser built for the occasion on their computers. The Markup has partnered with The New York Times to analyze the data and report jointly on what content both platforms choose to amplify and to whom they show it (The Markup, 2020).
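As a rough illustration of the ‘scraping’ idea, the sketch below parses a locally saved copy of a feed page and extracts the text of posts. The file name and the CSS class are hypothetical placeholders; projects like Citizen Browser rely on a purpose-built browser rather than a script of this kind.

```python
# Minimal illustration of scraping: extract text from a saved HTML snapshot.
# "feed_snapshot.html" and the class name "post-body" are hypothetical examples.
from bs4 import BeautifulSoup

with open("feed_snapshot.html", encoding="utf-8") as f:
    soup = BeautifulSoup(f.read(), "html.parser")

# Collect the visible text of every element tagged with the (hypothetical) post class.
posts = [div.get_text(strip=True) for div in soup.find_all("div", class_="post-body")]
for text in posts:
    print(text[:80])  # print the first characters of each collected post
```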

3. Methodology

This study is based on a qualitative analysis of the Facebook News Feed, both from the user experience and through the documents that the company itself has published within the framework of its transparency policies, as well as the patents it has registered, which are made public after two years. Starting from concrete cases, the aim is to find patterns that explain the perceived reality.

Once the main algorithmic systems that intervene in an underlying way in the News Feed (data collection, data storage, data processing and advertising segmentation) have been identified, the effects they produce on the user experience are analyzed in search of qualities or elements in common with credibility, understood as a subjective perception related to a series of attributes described in the last 25 years of research on ‘web credibility’ (trust, relevance, affinity, transparency, usefulness, familiarity, etc.).

4. The Facebook News Feed

Facebook is one of the four major platforms that dominate the so-called “social internet”, along with Google, YouTube (both owned by the giant Alphabet) and Amazon. Facebook had 2.603 billion active users worldwide in March 2020 out of a total of 4.57 billion internet users; in other words, 57 percent of Internet users around the globe use the platform chaired by Mark Zuckerberg. Its activity, mainly advertising, generated total worldwide revenues of $70.5 billion in 2019, 26 percent more than in 2018, according to Statista.

It is evident that Facebook has become a decisive means of communication for the democratic health of societies; however, it relies on its status as a social network and a private company to avoid being held accountable for how it filters the content that users upload, how it shows it to each user and at whom each ad is targeted. The platform remains hermetic in the face of political and social pressure while its power of persuasion keeps growing everywhere (Ryan-Mosley, 2020).

The Facebook News Feed is the set of algorithms that determines the content each user sees and the order in which it appears on their screen, including advertisements. In addition, the platform recommends pages and users to follow. By default, content delivery shows featured topics, but the user can choose to view the most recent posts instead. It is also possible to configure which friends’ content to see first.

In addition, since users provide Facebook with its content, the platform has a content moderation policy that filters offensive messages, pornography and, more recently, US political content around the 2020 presidential election. This function is performed by algorithms with human supervision.

Regarding ads, the social network has a Help Center with a section on “Ad preferences” that explains how to manage the reception of advertising and why some advertising messages are shown instead of others. To decide which ads a user sees, the Facebook algorithm takes into account their activity, for example whether they ‘like’ a page or click on an advertisement.

The algorithm also processes the information that the user consciously or unconsciously provides when entering Facebook, from gender and age to location or the devices used to access the platform; the information that third parties (advertisers, their partners and Facebook marketing partners) share with the social network; and the activity that the person carries out outside of Facebook, if authorized (Facebook, 2020).

All these functions are performed by a complex algorithmic system, proprietary software that governs what its more than 2.6 billion users see on the social network. Although it is secret, part of it can be known through the application programming interface (the Facebook Graph API) through which the platform interacts with external developers. In 2016, a team from Share Lab, a research laboratory in the Serbian city of Novi Sad, used the API and an analysis of some 8,000 Facebook patents to draw a rough map of “Facebook’s algorithmic factory” (Joler-1 et al., 2016).
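By way of illustration, a basic Graph API request looks roughly like the sketch below. The /me endpoint with a fields parameter is a standard part of the Graph API, but the API version in the URL and the access token are placeholders, and what can actually be retrieved depends on the permissions granted to the registered app.

```python
# Sketch of a simple Facebook Graph API request (token and API version are placeholders).
import requests

ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"  # hypothetical; obtained by a registered developer app
url = "https://graph.facebook.com/v12.0/me"
params = {"fields": "id,name", "access_token": ACCESS_TOKEN}

response = requests.get(url, params=params)
print(response.json())  # e.g. {"id": "...", "name": "..."} for a valid token
```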

According to the Share Lab researchers, Facebook performs four major functions: data collection, storage (Social Graph), algorithmic processing, and segmentation or targeting. The algorithms analyze the user activity stored in the “Action Store” (‘likes’, locations, friends they make, events they attend) and load it into the “Action Interest Extractor”, which builds a list of the user’s interests. Likewise, they analyze the content and comments published by users, which are stored in the “Content Store”, and classify them according to two categories: topics and keywords or tags (Joler-2 et al., 2016).

With all this information about what each user does and publishes, Facebook creates user groups or “seed clusters”, which are used to target users and show them specific ads or to select the content that the News Feed serves them. For example, a cluster may group people looking for rental housing in a certain locality, and there may then be more specific clusters or subgroups, such as users seeking rental housing there who have higher education, or who are single.
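As a deliberately simplified sketch of the pipeline Share Lab describes, the toy code below treats ‘interests’ as topics counted from stored actions and a ‘seed cluster’ as the set of users who acted on a given topic. The data structures and rules are assumptions for demonstration only; Facebook’s real systems are far more complex and are not public.

```python
# Toy illustration of the pipeline described by Share Lab: actions -> interests -> clusters.
# Data structures and rules here are assumptions for demonstration only.
from collections import Counter

actions = [  # a minimal "Action Store": (user, action type, target topic)
    ("ana", "like", "rental housing"),
    ("ana", "search", "rental housing"),
    ("ana", "like", "cooking"),
    ("ben", "like", "rental housing"),
    ("ben", "event", "concert"),
]

def extract_interests(user):
    """A toy 'Action Interest Extractor': rank topics by how often the user acted on them."""
    counts = Counter(topic for u, _, topic in actions if u == user)
    return [topic for topic, _ in counts.most_common()]

def seed_cluster(topic):
    """Group users who have shown any interest in a topic (a toy 'seed cluster')."""
    return {u for u, _, t in actions if t == topic}

print(extract_interests("ana"))        # ['rental housing', 'cooking']
print(seed_cluster("rental housing"))  # {'ana', 'ben'}
```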

Advertisers have three basic profile-targeting options on Facebook, according to Share Lab: basic information (location, age, gender and language), detailed targeting (based on user demographics, interests and behavior) and connection targeting (depending on the specific type of connection to Facebook pages, applications or events) (Joler-3 et al., 2016).

When advertisers buy space on Google or Facebook, they bid as in an auction and pay for the clicks users make on their ads, so the platforms optimize the bidding process so that ads are shown only to the people most likely to click on them. Therefore, when a Facebook user sees the news her friends publish, the algorithms are showing her the ads paid for by the highest bidder for her user profile (Kayser-Bril, 2020).
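In very rough terms, this pay-per-click optimization can be thought of as ranking each candidate ad by its expected value for the platform: the bid multiplied by the estimated probability that this particular user will click. The figures and the formula in the sketch below are invented to illustrate the principle and are not Facebook’s or Google’s actual auction mechanics.

```python
# Toy illustration of pay-per-click ad selection: rank ads by bid * estimated click probability.
# Bids and click-probability estimates are invented for the example.

ads = [
    {"advertiser": "A", "bid_per_click": 0.50, "estimated_ctr": 0.02},
    {"advertiser": "B", "bid_per_click": 0.30, "estimated_ctr": 0.05},
    {"advertiser": "C", "bid_per_click": 1.00, "estimated_ctr": 0.01},
]

def expected_value(ad):
    # What the platform expects to earn per impression of this ad for this user profile.
    return ad["bid_per_click"] * ad["estimated_ctr"]

winner = max(ads, key=expected_value)
print(winner["advertiser"], round(expected_value(winner), 4))  # B 0.015
```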

With all the aforementioned pieces of the “Facebook algorithmic factory” well oiled, the result of the News Feed is a ‘user experience’ characterized by the following elements:

1. News and messages from friends and followed pages are mixed with ‘targeted advertisements’ aimed at the kind of user reading them.

2. Because the algorithms prioritize the user’s likes, interests and previous activity, the content displayed by the News Feed is ‘relevant’ to them.

3. Race, ideology, place of residence, sex and other personal information held by Facebook allow its algorithms to provide the user with a ‘personalized menu’ of news and ads that reinforces their beliefs and cognitive biases (‘filter bubble’).

4. Personalization in the delivery of content sometimes produces the ‘news finds me’ effect (Taina Bucher’s ‘amazing moments’).

5. The user does not know which posts are hidden or demoted for them, nor which advertising Facebook has decided they are not interested in and will not click on. Algorithms ‘make decisions behind the scenes’, in the background.

6. Facebook constantly ‘recommends’ pages to follow, people the user may be interested in, upcoming events, updates from friends and other content.

7. If a query is made in the search box, ‘the results will depend on the user’s profile’ and their activity on and off Facebook.

5. Results

Having understood the complexity of the algorithmic structure that underpins the News Feed and its importance to the ‘user experience’ on Facebook, it is now time to see how these algorithmic systems can relate to ‘web credibility’.

As Fogg argues in his Prominence-Interpretation Theory (Fogg, 2003), when people are in front of a device connected to the internet they carry out a mental process in two phases:

  1. The user notices an element on the web (prominence).
  2. The user makes a judgment about that element (interpretation), assessing its credibility as positive or negative.

That is, according to the author, the formula for ‘web credibility’ would be: “Prominence × Interpretation = Credibility impact.”

In the case of the Facebook News Feed, since its algorithms arrange what the user sees and when, they condition or mediate what can be prominent, and therefore relevant, to that user. The algorithms manipulate the ‘prominence’ factor of the previous equation, deciding what the user notices as relevant, because they have tagged the user with topics and keywords, grouped them into clusters and quantified them (followers, likes, interactions) according to countless criteria: affinity, family members, interests, hobbies, how they spend their holidays, social status, profession, and so on. When the algorithms match these classifications with the content and recommendations they send to the user, the user receives information that matters to them. The content of the News Feed consequently conveys relevance, but also other related concepts, such as credibility, trust or cognitive authority.

Taking into account the characteristics of the user experience, or algorithmic experience, on Facebook, several qualities or effects can be identified that emerge from the News Feed and are likely to generate credibility, because they coincide with classic elements or attributes of credibility between people and within human-computer interaction on which there is broad scientific consensus:

1. ‘Relevance’: the contents are important to the user.

2. ‘Trust’: the contents come from sources trusted by the user, such as friends and family.

3. ‘Transparency’: the contents link to the original sources.

4. ‘Personalization’: the user feels well served and sees that they benefit from the relationship with the platform (the ‘news finds me’ effect).

5. ‘Affinity’: the contents reinforce the values, tastes, visions and stereotypes of the subject.

6. ‘Verisimilitude’: the contents seem genuine and appear to come from sources of recognized prestige.

7. ‘Network effect’: users benefit from access to viral content, topics and trends that exceed the scope of their small social group.

Regarding the interpretation of what they see, the Internet user can make four types of credibility assessment simultaneously, always in a specific social and moral context: ‘presupposed credibility’ (according to the values and stereotypes of the subject and their social group), ‘reputed’ (external indicators of prestige such as titles or positions), ‘superficial’ (a quick assessment, at first glance) and ‘acquired’ (supported by the subject’s own experience) (Fogg et al., 1999).

Ultimately, ‘algorithmic credibility’ would be that perceived in digital environments where algorithms decide behind the scenes what is relevant to the user.

6. Discussion

On this occasion the case of Facebook has been chosen, but the algorithmic systems of Google, YouTube, Twitter or Amazon follow similar schemes: they all collect data from their users to create the most detailed profiles possible and offer them the best online experience, and then sell advertising space aimed at ‘calculated audiences’ (Gillespie, 2014, p. 168), sell data to third parties or, as in Amazon’s case, sell their own products. This technology, which already raises doubts about users’ privacy and lack of protection in the commercial sphere, generates even greater fears when it is used by governments.

The drive of algorithms to give users what they want, what interests them and what their friends and family also like produces ‘echo chambers’ that amplify the messages confirming an individual’s beliefs while excluding other points of view and different content. This technological tendency to narrow public debate is very dangerous when the networks are full of hoaxes and misinformation, since their reach and speed of propagation make them very difficult to refute.

The Project on Computational Propaganda at the Oxford Internet Institute studied disinformation about COVID on YouTube between October 2019 and June 2020. They found 8,105 videos that the platform had removed for containing false information, representing less than 1 percent of all YouTube videos about the coronavirus.

Surprisingly, they found that COVID-related disinformation videos reached the majority of their audience not on YouTube but on Facebook, where only 55 videos, less than 1 percent of the false videos shared on that platform, carried labels warning that they contained false information.

In those nine months, disinformation videos were shared nearly 20 million times on social media, outpacing the combined shares of YouTube content from the top five English-language news sources: CNN, ABC News, BBC, Fox News and Al Jazeera. And YouTube took an average of 41 days to remove videos with false information, in the cases where these data were available (Knuutila et al., 2020).

Unfortunately, the credibility-building mechanisms of algorithmic systems serve to spread both true and false information. There is abundant scientific evidence of their propagating effect on disinformation, of its contagion to traditional media and of the consequent polarization of public opinion.

7. Conclusions

From the analysis of Facebook algorithms with the help of scientific publications in the field of ‘human-computer interaction’, ‘web credibility’ and other sources, it can be deduced that, just as there is the concept of ‘user experience’, there is also an ‘algorithmic experience’ and, derived from it, an ‘algorithmic credibility’.

Algorithmic credibility would be that perceived in digital environments where algorithms decide behind the scenes what is relevant to the user.

The characteristic elements of algorithmic credibility are the following perceptions of the Internet user, induced by algorithms or artificial intelligence in online digital environments: ‘relevance’ (the contents are important to the user), ‘trust’ (they come from reliable or already known sources), ‘transparency’ (they refer to the sources), ‘personalization’ (the ‘news finds me’ effect), ‘affinity’ (they reinforce the values, tastes and beliefs of the user), ‘verisimilitude’ (the information seems verified) and ‘network effect’ (access to viral content and trends that exceed the scope of the user’s social group).

Algorithmic credibility serves equally to spread true and false information; its hidden weapons of seduction can be used to do good but also to cause great harm, and the misinformation circulating around the world is ample proof of this.

Bibliographic references

Alvarado, O. & Waern, A. (2018). Towards Algorithmic Experience: Initial Efforts for Social Media Contexts. CHI 2018, Montréal, Canada. https://doi.org/10.1145/3173574.3173860

Bucher, T. (2016). The algorithmic imaginary: Exploring the ordinary affects of Facebook algorithms. Information, Communication & Society, 20(1), 30–44. https://doi.org/10.1080/1369118X.2016.1154086

Christin, A. (2017). Algorithms in practice: Comparing web journalism and criminal justice. Big Data & Society, 4(2). https://doi.org/10.1177/2053951717718855

Facebook (2020). ¿Cómo decide Facebook qué anuncios me muestra? [How does Facebook decide which ads to show me?]. Help Center. Accessed October 28, 2020. https://www.facebook.com/help/562973647153813

Fogg, B.J. (2003). Prominence-Interpretation Theory: Explaining How People Assess Credibility Online. CHI 2003: New Horizons, Ft. Lauderdale, Florida, USA, 722–723. http://credibility.stanford.edu/pdf/PITheory.pdf

Fogg, B.J. & Tseng, H. (1999). The elements of computer credibility. SIGCHI Conference on Human Factors in Computing Systems, Pittsburgh, USA, 80–87. https://dl.acm.org/doi/pdf/10.1145/302979.303001

Gillespie, T. (2014). The Relevance of Algorithms. In T. Gillespie, P.J. Boczkowski & K.A. Foot (Eds.), Media Technologies: Essays on Communication, Materiality, and Society. MIT Press. http://culturedigitally.org/2012/11/the-relevance-of-algorithms/

Hein, M. (2018, April 9). La Guerra de Irak: Al principio fue la mentira… [The Iraq War: In the beginning was the lie…]. Deutsche Welle (DW). Accessed October 21, 2020. https://www.dw.com/es/la-guerra-de-irak-al-principio-fue-la-mentira/a-43314279

Jacobsen, B.N. (2020). Algorithms and the narration of past selves. Information, Communication & Society. https://doi.org/10.1080/1369118X.2020.1834603

Joler-1, V. & Petrovski, A. (2016, August 21). Immaterial Labour and Data Harvesting — Facebook Algorithmic Factory (1). Share Lab. https://labs.rs/en/facebook-algorithmic-factory-immaterial-labour-and-data-harvesting/

Joler-2, V. & Petrovski, A. (2016, August 20). Human Data Banks and Algorithmic Labour — Facebook Algorithmic Factory (2). Share Lab. https://labs.rs/en/facebook-algorithmic-factory-human-data-banks-and-algorithmic-labour/

Joler-3, V. & Petrovski, A. (2016, August 19). Quantified Lives on Discount — Facebook Algorithmic Factory (3). Share Lab. https://labs.rs/en/quantified-lives/

Kayser-Bril, N. (2020, October 18). Automated discrimination: Facebook uses gross stereotypes to optimize ad delivery. AlgorithmWatch. https://algorithmwatch.org/en/story/automated-discrimination-facebook-google/

Knuutila, A. et al. (2020). Covid-related misinformation on YouTube: The spread of misinformation videos on social media and the effectiveness of platform policies. Data Memo 2020.6, Project on Computational Propaganda (ComProp), Oxford, UK. https://comprop.oii.ox.ac.uk/research/youtube-platform-policies/

Llamero, L. (2013). Valores de credibilidad del ciberperiodismo, del contenido generado por usuarios y otros divulgadores de información [Credibility values of cyberjournalism, user-generated content and other information disseminators]. V Congreso Internacional de Ciberperiodismo y Web 2.0, Bilbao, España, 399–418. http://hdl.handle.net/10810/15609

Lorés, J., Granollers, T. & Lana, S. (2002). Introducción a la interacción persona-ordenador [Introduction to human-computer interaction]. Universitat de Lleida. http://issuu.com/pcdiabla/docs/introduccion

McSherry, C. (2020, October 26). Content Moderation and the U.S. Election: What to Ask, What to Demand. Electronic Frontier Foundation (EFF). Accessed October 27, 2020. https://www.eff.org/deeplinks/2020/10/content-moderation-and-us-election-what-ask-what-demand

Mezei, P. & Verteș-Olteanu, A. (2020). Trust in the system. Internet Policy Review. https://doi.org/10.14763/2020.4.1511

Papadamou, K. et al. (2020). Pseudoscientific Content on YouTube: Assessing the Effects of Watch History on the Recommendation Algorithm. arXiv. https://arxiv.org/pdf/2010.11638.pdf

Petrie, H. & Bevan, N. (2009). The evaluation of accessibility, usability and user experience. In C. Stephanidis (Ed.), The Universal Access Handbook. CRC Press. https://www.researchgate.net/profile/Helen_Petrie/publication/228538252_The_Evaluation_of_Accessibility_Usability_and_User_Experience/links/09e4150c33fc61f69d000000.pdf

Ryan-Mosley, T. (2020, October 30). Los filtros ultraespecíficos de Trump y Biden para seducir en Facebook [Trump’s and Biden’s ultra-specific filters for winning people over on Facebook]. MIT Technology Review. https://www.technologyreview.es/s/12791/los-filtros-ultraespecificos-de-trump-y-biden-para-seducir-en-facebook

Serrano, S. (2018, January 13). Recursos sobre usabilidad y experiencia del usuario que deberías tener en cuenta [Resources on usability and user experience you should keep in mind]. Hiberus. https://www.hiberus.com/crecemos-contigo/recursos-usabilidad-experiencia-del-usuario/

Sharpe, E. (2020, January 15). For a healthy democracy, Facebook must halt micro-targeted political ads. Web Foundation. https://webfoundation.org/2020/01/for-a-healthy-democracy-facebook-must-halt-micro-targeted-political-ads/

Sundar, S.S. (2020). Rise of Machine Agency: A Framework for Studying the Psychology of Human–AI Interaction (HAII). Journal of Computer-Mediated Communication, 25(1), 74–88. https://doi.org/10.1093/jcmc/zmz026

The Markup (2020, October 16). The Citizen Browser Project — Auditing the Algorithms of Disinformation. The Markup. Accessed October 21, 2020. https://themarkup.org/citizen-browser/

Wolf, C.T. (2016). Algorithmic Living: A Practice-based Approach to Studying Algorithmic Systems. IBM Research, Almaden Research Center. https://bitlab.cas.msu.edu/trustworthy-algorithms/whitepapers/Christine%20Wolf.pdf
