Annette Bernhardt, UC Berkeley
Imagine you’re applying for a job via video and, without telling you, the company uses software that analyzes your eye contact, facial expressions, and tone of voice to predict whether you’re a good match for the job. Or imagine that you work in an Amazon warehouse and an algorithm fires you for not meeting productivity quotas. Or say your employer is using a system to predict whether you will quit or become pregnant, and denies you a promotion as a result.
While these scenarios may sound dystopian, each is a real-life example. If they happened to you, what rights would you want? What recourse should you have? And who would you go to for help?
Up until last week, our legal system had very few answers for you. Employers across the economy are using digital technologies to hire, manage, and monitor workers, but with virtually no regulation. That has left the door open for harms such as race and gender discrimination, work speed-up, deskilling and automation, growth in gig work, loss of autonomy and privacy, and suppression of worker organizing.
But on January 1, California took a first step towards worker data rights when new amendments to the California Consumer Privacy Act (CCPA) went into effect, extending the law’s coverage to workers at large businesses.
This marks the first time that workers in the U.S. have the right to know when employers are surveilling them, and for what purpose. They will be able to access their data and ask to correct or delete it. And they will be able to opt out of employers selling their data.
Workers and policymakers will finally get a look inside the black box of obscure workplace technologies, which is essential for figuring out response strategies. For example, truck drivers in Seattle conducted a one-day strike after finding out about highly invasive cameras in their cabs that used facial recognition and tracked their eye movements, winning the right to cover the cameras for privacy.
Transparency and disclosure alone, however, are not enough. Like the broader policy shift to regulate consumer data, workers deserve a full set of rights and protections around new technologies in their workplaces. Here are some key policy principles:
- Guardrails on how employers use digital technologies. Employers should only use electronic monitoring and algorithmic management for narrow business purposes, without harm to workers. In particular, the unreliability of these systems means they shouldn’t be used to automate decisions like hiring, firing, and discipline. Employers also should not use high-risk or unproven technologies like facial or emotion recognition.
- Heightened scrutiny for productivity management systems. There is already mounting evidence that productivity monitoring systems in industries such as warehousing and call centers lead to injuries and other negative health impacts. Use of these systems should be subject to strong health and safety regulation.
- Prohibitions on discrimination. Data-driven technologies such as hiring software should not discriminate against workers based on race, gender, disability, and other protected characteristics. As another guard against bias, employers should be prohibited from using predictions about a worker’s traits and behaviors that are unrelated to their job responsibilities.
- Right to organize. Workers should have the right to bargain over employers’ use of data-driven technologies. And as recently affirmed by the NLRB, employers should not use digital technologies or social media monitoring to identify, monitor, or punish workers for organizing.
- Holding employers responsible for harms. Regulatory agencies should play a central role in enforcing technology standards, and workers should be able to sue employers for violations. But we should not wait for harms to occur: employers should be required to conduct impact assessments prior to using new technologies.
These are not outlandish policies. Many are already law in other countries and have informed California’s first-of-its-kind law regulating warehouse productivity quotas, as well as the proposed Workplace Technology Accountability Act, introduced last year by California Assemblymember Ash Kalra and supported by unions and privacy rights advocates.
The workplace is rapidly becoming a major site for the deployment of AI-based technologies; it is high time that our laws and regulations catch up. Coverage by the CCPA is just the first step to ensure that California workers have the tools necessary to advocate for their rights in the 21st century data-driven workplace.