Monitoring & surveillance technologies shift power dynamics in the workplace

In a work context, surveillance and data collection raise issues that go beyond privacy concerns based on individual rights

Aiha Nguyen
Data & Society: Points
Mar 5, 2019


Image via Flickr–Mario Klingemann

This blog post draws on insights from the Data & Society explainer Workplace Monitoring & Surveillance, co-written by Researcher Alexandra Mateescu and Labor Engagement Lead Aiha Nguyen.

Whether it’s the use of closed-circuit television or keycard access to track movement, expectations of privacy are often left at the door when an employee enters the workplace. New technologies are enabling greater and more pervasive forms of monitoring and surveillance, resulting in new challenges for workers. Public debate in both the United States and Europe has led to recent calls for greater consumer rights over the collection and use of personal data. In a work context, however, surveillance and data collection raise issues that go beyond privacy concerns based on individual rights.

Employers surveil workers for various reasons and can use the same technologies for both beneficial and extractive ends. Monitoring tools may serve purposes such as protecting assets and trade secrets, controlling costs, enforcing protocols, increasing work efficiency, or guarding against legal liability. New technologies that couple monitoring tools with granular data collection now allow employers to use these systems to exert greater control over large workforces, rapidly experiment with workflows, detect deviant behavior, evaluate performance, and automate tasks.

New technologies are enabling greater and more pervasive forms of monitoring and surveillance, resulting in new challenges for workers.

These technologies can be broadly grouped into three categories: predicting and flagging, remote monitoring and time tracking, and biometric and health data monitoring.

Management may use predicting and flagging tools to monitor employee behavior, identify employee characteristics, or predict an employee’s future performance. For example, in service industries like retail or food service, employers use flagging systems built on point-of-sale metrics to generate performance reports, which produce both aggregate and individual data. This data includes “exception-based reporting,” which singles out workers whose data exhibits unusual patterns, such as processing a higher-than-average number of customer returns. In effect, workers are treated as suspects based on proxies and metrics that are machine-readable but might not tell the whole story.
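
To make the flagging logic concrete, here is a minimal sketch, with invented data and an arbitrary cutoff, of how an exception-based report might single out a cashier whose return rate sits well above the group average.

```python
# Minimal sketch of exception-based reporting: flag workers whose return
# rate is unusually high relative to their peers. The data, field names,
# and one-standard-deviation cutoff are invented for illustration.
from statistics import mean, stdev

# Per-worker point-of-sale metrics: (worker_id, transactions, returns)
pos_metrics = [
    ("A101", 520, 12),
    ("A102", 480, 9),
    ("A103", 510, 31),   # noticeably more returns than peers
    ("A104", 495, 11),
]

return_rates = {wid: returns / txns for wid, txns, returns in pos_metrics}
cutoff = mean(return_rates.values()) + stdev(return_rates.values())

# Any worker above the cutoff is "flagged"; the report says nothing about
# why returns are high (defective stock, store policy, shift mix).
flagged = [wid for wid, rate in return_rates.items() if rate > cutoff]
print(flagged)  # ['A103']
```

The flag is just a statistical deviation; the judgment that the worker is a risk happens downstream, often without the context the numbers leave out.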

In other instances, the predictions such systems make are tenuous. Predictim, an online service that vets candidates for domestic work, claims to use “advanced artificial intelligence” to analyze a job candidate’s personality by scanning their social media posts. The service then generates a profile that lists identified traits like “bad attitude.” The use of proxies like sentiment analysis can open channels for bias, and data that purports to correlate social media behavior with someone’s ability to do a job is a questionable basis for a hiring decision.
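
As a purely hypothetical sketch (the wordlist, scoring rule, and cutoff below are invented, and real products disclose little about what they actually compute), a sentiment-style proxy can reduce to something as crude as counting negative-sounding words and converting the total into a character label:

```python
# Hypothetical sketch of a sentiment-style screening proxy. The wordlist,
# scoring rule, and cutoff are invented; they are not any vendor's method.
NEGATIVE_WORDS = {"hate", "awful", "stupid", "angry", "worst"}

def naive_sentiment(post: str) -> int:
    """Crude stand-in for sentiment analysis: count negative-sounding words."""
    return -sum(w.strip(".,!?").lower() in NEGATIVE_WORDS for w in post.split())

def screen_candidate(posts: list[str]) -> str:
    score = sum(naive_sentiment(p) for p in posts)
    # A single numeric cutoff turns messy language data into a character judgment.
    return "bad attitude" if score <= -2 else "no flags"

# Sarcasm, jokes, and song lyrics all look the same to a word counter.
posts = ["Ugh, worst traffic ever, I hate Mondays", "That movie was awful lol"]
print(screen_candidate(posts))  # -> "bad attitude"
```

Whatever a commercial system actually computes, the structure is the same: a proxy score stands in for a trait, and the candidate has little way to see or contest how the label was produced.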

Remote monitoring and time tracking through GPS location data, computer monitoring software, app-based activity trackers, and remote sensors allow managers or clients to manage large groups of workers indirectly. Many platform workers are classified as independent contractors even though the company exerts significant control over how they work. Gig platforms like Handy.com and Uber, for example, use apps to direct worker activities at a distance while collecting detailed data about trips, communications, and pay. This information can allow companies to nudge workers in ways that advantage the company but not necessarily the worker (such as directing workers toward a poorly compensated task that they might not accept if given more information). Recently, Instacart came under scrutiny for counting the tips workers receive toward their pay when base pay fell short of the company’s guaranteed minimum. While the company has since changed its policy, others like DoorDash and Amazon Flex continue the practice. Companies can do this because they, and not the workers, hold detailed information about earnings.
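
To illustrate why this practice drew scrutiny, here is a hypothetical payout calculation, with an assumed $10 per-job guarantee and invented amounts, contrasting a model in which tips count toward the guarantee with one in which tips come on top of it.

```python
# Hypothetical payout math: these amounts and the $10 guarantee are invented
# to illustrate the two pay models, not any company's actual formula.
GUARANTEED_MINIMUM = 10.00  # assumed per-job payment floor

def payout_tip_counted(base_pay: float, tip: float) -> float:
    """Tip counts toward the guarantee: the company only tops up the gap."""
    company_portion = max(base_pay, GUARANTEED_MINIMUM - tip)
    return company_portion + tip

def payout_tip_on_top(base_pay: float, tip: float) -> float:
    """Tip is paid on top of the guaranteed minimum."""
    company_portion = max(base_pay, GUARANTEED_MINIMUM)
    return company_portion + tip

# A low-base-pay job with a generous $8 tip:
print(payout_tip_counted(0.80, 8.00))  # 10.0 -> the company adds only $2.00
print(payout_tip_on_top(0.80, 8.00))   # 18.0 -> the tip adds to, not replaces, company pay
```

The worker sees only the final total; without access to the underlying pay data, the difference between the two models is nearly invisible.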

Finally, the collection of biometric and health data through wearables, fitness tracking apps, and biometric timekeeping systems is a newer form of workplace monitoring. These programs are often part of employer-provided health care plans, wellness programs, and digitally tracked work shifts. Employers like BP America are adopting these devices for their employees in a bid to improve employee health habits while persuading insurance companies to reduce rates, at significant savings to the company. Additionally, fitness apps and wearables usually follow employees out of the office, carrying workplace privacy concerns into their private lives.

Facial recognition tools and fingerprint scanners are likewise becoming increasingly common, and they pose their own privacy challenges. In the U.S., more than 50 companies have faced lawsuits over the collection of employee fingerprint data through biometric timekeeping tools. Concerns center on employees’ ability to opt out of such programs when doing so means financial penalties or being deemed riskier by their employer.

As technology expands the scope and scale of what can be done with surveillance tools, workplace protections must also evolve.

Workers and advocates are challenging the power imbalances these tools generate, as well as their accuracy and fairness on a technical level. However, monitoring tools that can be used to make decisions about a worker’s compensation, perceived risk, or even continued employment are hard to contest. In some cases, employees may not know they are being surveilled or that data is being collected. The large amounts of data collected about workers are interpreted, analyzed, and repurposed, yet there is no clear means of ensuring that the data accurately reflects workers’ actual behavior or circumstances. This also raises questions about who retains the data and how they can use it.

These challenges point to a need for a broader framework to balance the economic interests of companies against the personal and economic interests of workers. As technology expands the scope and scale of what can be done with surveillance tools, workplace protections must also evolve to address the collective information asymmetries, biases, questionable accuracy of proxies, and shifts in power that are occurring.

Aiha Nguyen is the labor engagement lead at Data & Society. This blog post is cross-posted on the London School of Economics Business Review.

