The Yale Law Journal

Volume 134, 2024-2025
Forum

Data Laws at Work

31 Jan 2025

abstract. In recognition of the material, physical, and psychological harms arising from the growing use of automated monitoring and decision-making systems for labor control, jurisdictions around the world are considering new digital-rights protections for workers. Unsurprisingly, legislatures frequently turn to the European Union (EU) for inspiration. The EU, through the passage of the General Data Protection Regulation in 2016, the Artificial Intelligence Act in 2024, and the Platform Work Directive in 2024, has positioned itself as the leader in digital rights, and, in particular, in providing affirmative digital rights for workers whose labor is mediated by “a platform.” However, little is known about the efficacy of these laws.

This Essay begins to fill this knowledge gap. Through close analyses of the laws and successful strategic litigation by platform workers under these laws, I argue that the current EU framework contains two significant shortcomings. First, the laws primarily position workers as liberal, autonomous subjects, and in doing so, they make a category error: workers, unlike consumers, are subordinated by law and doctrine to the firms for which they labor. As a result, the liberal rights that these laws privilege—such as transparency and consent—are insufficient to mitigate the material harms produced through automated labor management. Second, this Essay argues that by leaning primarily on transparency principles to detect, prevent, and stop violations of labor and employment law, EU data laws do not account for the ways in which workplace algorithmic management systems often create new harms that existing laws of work do not address. These harms, which fundamentally disrupt norms about worker pay, evaluation, and termination, arise from the relational logic of data-processing systems—that is, the way that these systems evaluate workers by dynamically comparing them to others, rather than by evaluating them objectively based on fulfillment of ascribed duties. Based on these analyses, I propose that future data laws should be modeled on older approaches to workplace regulation: rather than merely seeking to elucidate or assess problematic data processes, they should aim to restrict these processes. The normative north star of these laws should be proscribing the digital practices that cause the harms, rather than merely shining a light on their existence.

Introduction

Despite widespread legal concerns about the technology industry’s surveillance of consumers,1 the most intrusive and far-reaching digital technologies for monitoring and controlling human behavior do not target people when they make or contemplate purchases. They target people at work. In many jobs and sectors, particularly low-wage ones, digital workplace technologies execute novel forms of labor control. In some cases, they even replace human managers, whose social and technical knowledge about a job, the workplace, and a particular worker might otherwise be used to make hiring decisions, determine quotas, allocate work, decide pay, evaluate performance, and make disciplinary or termination decisions.2

A growing number of workers, including so-called “gig” and “platform” workers (broadly defined as workers who are completely managed through smartphone applications), are now hired, evaluated, paid, disciplined, and terminated through automated systems, with little to no meaningful human oversight or intervention.3 Because platform companies often treat their workers as self-employed contractors who are not afforded the protection of established employment and labor laws, these firms have been uniquely positioned to experiment with remote algorithmic control and pioneer new forms of digitalized workforce management.4 Platform work, in this sense, has been a canary in the coal mine. Innovative systems of automated worker control, which originated in the platform context, have since been imported to other employment sites—including in the transportation, delivery, warehousing, hospitality, janitorial, healthcare, computer-science, and education sectors.5

These new systems of workforce management can be divided into two broad categories: automated monitoring systems (AMSs) and automated (and augmented) decision-making systems (ADSs).6 AMSs collect a wide array of personal data from workers both on and off the job, including data on speed, movement, and behavior, and then feed that data into ADSs to carry out or support a broad range of tasks, such as determining work allocation, communicating with a worker (via a chatbot), or evaluating workplace performance. ADSs (or offline procedures that heavily rely on ADSs) are also sometimes used to perform the most central functions of the employer: to determine whether to hire a worker, how much to pay them, when to discipline or reward them, and critically, when to terminate them.7

Proponents of the digitalization of labor management—including artificial intelligence (AI) companies, data brokers, employers, and some scholars8—argue that digital labor-management systems bring machine objectivity into the workplace via digital on-the-job surveillance and control, thus bettering the lives of workers by purportedly increasing scheduling flexibility and correcting for longstanding gendered and racial wage differentials.9 They also assert that these systems improve firm accuracy and efficiency while enhancing worker satisfaction.10

To be sure, together with appropriate legal safeguards and prohibitions, digital technology could be designed to help employers and workers achieve more fair, equitable, free, and democratic workplaces. To date, however, findings from sociotechnical research11 and the cultivated expertise of workers cast doubt on the purported positive impacts of existing systems. An emergent body of empirical research on workers who are digitally managed—including research on platform workers in the logistics and transportation industries—raises serious alarms about the social, economic, psychological, and physiological harms imposed by extant forms of AMSs and ADSs.12 Many of these harms can be understood as intensifying familiar problems. For example, research suggests that since datasets embody preexisting biases, the automated systems that rely on such data may replicate historical forms of discrimination in hiring and pay.13 Investigations have also found that, as with human oversight and evaluation, machine errors are not uncommon but are hard to detect and correct, resulting in erroneous, unfair evaluations and terminations with no avenue for redress.14 Other studies observe that algorithmically determined quota systems can push workers to work too hard and too quickly, resulting in serious bodily injury and reversing the last century of occupational health and safety interventions.15

By and large, these researchers suggest that the intensified workplace harms caused by the introduction of AMSs and ADSs are the result of “information asymmetries” between workers and their employers.16 Advanced AMSs invisibly enable employers to collect detailed data about workers, their movements, and their behaviors.17 This data is then fed into ADSs—including machine-learning systems—which generate black-box rules to govern the workplace.18 Scholars tend to assume that if workers had access to the data that is collected on them, along with knowledge of how it is used by ADSs, then they could use traditional legal avenues (for example, litigation, consultation, and collective bargaining) to challenge machine-generated mistakes and biases through the existing laws of work, just as they can challenge human-generated mistakes and biases.19 Likewise, existing scholarship tends to assume that if workers knew and understood the algorithmic rules that govern their workplaces, they could spot and correct violations of prevailing labor and employment laws, which already protect against unsafe workplaces, identity-based discrimination, low pay, and—applicable to the European Union (EU), but not to private, nonunionized workplaces in the United States—“unjust” terminations.20

Building on this research, the first wave of legislation to address the problems arising from digitalized labor control focuses almost exclusively on information transparency rights and mandates, including data access, data-processing explainability, and impact assessments. The undisputed legislative leader has been the EU. In 2016, the EU passed the first omnibus law to accord data rights to natural persons, the General Data Protection Regulation (GDPR), which took effect in 2018 and has since been replicated in many jurisdictions around the globe, including in some U.S. states—most consequentially in California.21

Drafted primarily with consumers in mind, the GDPR also applies to workers, though comparatively few have mobilized to exercise their rights under the law. More recently, in 2024, many of the rights embodied in the GDPR—including data-access rights, data-processing explainability rights, and impact assessments—were specifically mandated for platform work in the EU via the Platform Work Directive (PWD). The PWD also includes novel rights that are intended to directly address ADSs. For instance, the directive forbids platform firms from processing data on workers’ emotional or psychological states and on their personal beliefs, thus granting platform workers greater data-processing protections than any other workers in the EU.22 Also in 2024, the EU passed the Artificial Intelligence Act (AI Act), which labels the workplace a high-risk setting, a designation that triggers predeployment and postmarket safeguards for employment-related AI.23

Together, the GDPR and the AI Act create, for the first time ever, a web of critically important—if experimental—data and data-processing rights for the work context. The PWD then builds on these rights to extend even more data protections to a subset of workers—platform workers—who are almost exclusively managed by digital machinery. As the European Commission considers the possibility of an algorithmic-management directive that would extend the rights created through the PWD to other workforces, and as jurisdictions around the world consider laws and regulations to emulate the EU legislation, determining the efficacy of these first-wave interventions is critical. At the time of writing, however, we still know very little about how adequately these new rights address the significant harms and problems posed by on-the-job use of AMSs and ADSs.24

This Essay begins to fill this gap by offering a close study of these laws, along with an analysis of a recent natural legal experiment: pioneering litigation by platform workers who exercised their data and data-processing rights under the GDPR and won access to information about termination and pay. Ride-hail workers in the EU, supported by the nongovernmental organization (NGO) Worker Info Exchange (WIE), the App Drivers and Couriers Union (ADCU), and privacy advocates, were among the first to successfully challenge platform firms that, in some cases, refused to release any data at all and, in others, released only limited and insufficient data and data-processing information.25 However, in an unexpected twist, the success of this litigation proves the insufficiency of current regulation.26 While the years-long litigation led to monumental and precedent-setting judgments against ride-hail companies Uber and Ola, workers have been unable to leverage the litigation wins—and the data transparency and explanations achieved through these wins—to effect meaningful, systematic harm reduction.27

Through a critical analysis of this strategic litigation and the laws underpinning the litigation, this Essay argues that the first wave of data and data-processing rights for workers does not effectively address the harms arising from algorithmic management because it makes two conceptual errors. First, the laws treat workers as liberal, autonomous subjects. But by law, when people are at work, they are not free to behave autonomously. Rather, the law formally subordinates them to the firms for which they labor.28 Arguably, then, workers’ primary interests lie not in transparency, privacy, and consent, but in job certainty, wage security, and dignity.29 Moreover, given the explicit legal domination afforded to employers in the workplace, laws that place the burden on workers to access and understand data-processing systems, and then to use this knowledge to circumvent present and future harms, are of limited practical utility. Low-wage workers generally lack the resources, power, and technical insight to know when their employers are not adequately complying with their obligations under data laws.

Second, by leaning primarily on transparency principles to detect, prevent, and stop violations of labor and employment laws, the GDPR, the PWD, and the AI Act do not account for the ways in which workplace algorithmic-management systems often create new harms that existing laws of work do not address. These harms, which fundamentally disrupt norms about worker pay, evaluation, and termination, arise from the relational logic of data-processing systems. A worker managed through or with the assistance of ADSs may not be rewarded or disciplined based on an evaluation of their individual rule compliance, productivity, and effort.30 Rather, the behavioral modifications demanded of them may be contextual and iterative, with expectations and outcomes that vary based on how AMSs and ADSs understand and position them in relation to their coworkers, both in general and at any given time.31 As these data-processing laws are amended and expanded in the EU and as they are considered for replication around the world—including in California and other U.S. states—legislators, workers, and worker representatives should attend to the new harms of algorithmic management and address the shortcomings of existing data laws.

This Essay proceeds in three Parts. Part I analyzes the GDPR, the AI Act, and the PWD specifically as laws of work and examines their principal approaches to data and data-processing rights—notice, transparency, and impact assessments—in relation to the pressing problems and precarities produced through automated labor control. Part II then positions these data laws in relation to the broader law and political economy of the workplace and argues that they do not account for workers’ positionality as “illiberal” subjects—forbidden, by legal doctrine, from behaving in ways that are at odds with the business interests of their employers. Finally, Part III analyzes a natural experiment to extract lessons for future regulation of automated labor control. In particular, it examines the case study of Uber and Ola ride-hail workers who mobilized to vindicate their rights as data subjects under the GDPR in an attempt to address problems caused by ADSs related to pay and termination. The Essay concludes by recommending a guiding principle for future data laws, one that reflects older approaches to workplace regulation: regulation must move beyond merely elucidating and assessing data processes and shift more pointedly towards restricting the use of such data and processes where the systems cause harmful workplace outcomes.

I. the first wave of data rights for workers: the eu context

Despite the overarching data-minimization goals embedded in the GDPR,32 digital data collection and data processing in the workplace have grown dramatically in reach and sophistication since the law’s passage in 2016. From 2019 to 2022, coinciding with pandemic stay-at-home orders and new work-from-home policies, global demand for worker-monitoring software reportedly increased by sixty-five percent.33 Across service sites and product supply chains, this intensified digital monitoring was coupled with the development of sophisticated automated decision-making software, which businesses deployed to make management decisions more rapidly, to increase production or service speed and scale, and to lower labor overhead.34

Firms that self-identify as “platforms”35 and use what scholars have called a “platform management model”36 were among the first to experiment with what is now called “algorithmic management”—the automation of work processes and management functions, including coordination and control of a workforce, often via machine-learning systems.37 But the techniques of digitalized workplace surveillance and algorithmic management first observed in “platform work” were quickly adopted by firms with more traditional employment models.38 Accordingly, extant research on platform work is particularly useful for understanding trends in algorithmic management across the labor market.

Two particularly significant forms of algorithmic management, which this Essay uses to ground its analyses of existing data laws, are the uses of ADSs (1) to set wages (sometimes through the allocation of work or wage products) and (2) to evaluate and terminate workers. Through automated wage-setting practices, known in the platform-work literature as algorithmic wage discrimination, firms use social data39—including data extracted from workers’ labor—to “personalize and differentiate wages for workers in ways unknown to them, paying them to behave in ways that the firm desires, perhaps for as little as the system determines that the workers may be willing to accept.”40 While algorithmic wage discrimination—the transference of consumer price discrimination to the work context—was first documented in on-demand work, traditional employers have also commenced using machine-learning software to “tailor each employee’s compensation” in ways that remain opaque to the workforce.41 Similarly, “deactivation,” a euphemism for termination engineered by on-demand firms, has traveled to more traditional employment settings in which automated decision-making software is now used to invisibly and opaquely evaluate and dismiss workers, even in just-cause jurisdictions.42

Both automated wage-setting and automated evaluation/termination systems create novel harms and new logics of labor control, often allowing firms to hew to the letter of existing employment laws while evading their spirit. For example, in low-wage sectors, hourly wages are conventionally transparent to individual workers, certain, and set by individual or collective contracts. Though performance-based variable pay using offline evaluation processes and bonus structures is not uncommon, wage discretion is limited by laws that protect workers from discrimination based on protected identities and those that create minimum-wage and overtime-wage floors.43 Variable pay and discipline practices, even in the at-will employment context, typically operate through norms and logics that associate hard work, rule-following, and worker loyalty with higher pay and work security.44 But the novel logics of some data-processing systems, discussed further in Part II, disrupt these norms and introduce new experiences of uncertainty to the workplace, thereby unsettling the relationship between work and economic security.

Just as concerns about data and data-processing in the consumer context have largely focused on safeguarding individual data privacy and consent, concerns about data and data-processing in the workplace have focused centrally on transparency, to the detriment of other principles like fairness and economic security.45 According to the prevailing view among analysts, from which this Essay departs, the central problem with algorithmic management is that workers governed by such systems lack knowledge about the basic rules they must follow. In contrast to the labor-process customs of nondigital, offline scientific management, under which workers are typically informed of workplace expectations,46 algorithmically managed workers are left to wonder: How are their wages determined? In what ways are they being evaluated and by what metrics? What is the world of behaviors that might lead to discipline or termination? Knowing what data is being extracted and understanding the logic behind the ADSs, observers argue, would enable workers to adjust to the digital labor processes and to address violations of existing labor laws. Following this reasoning, legislative authorities in a few jurisdictions, including in some U.S. states and in the EU, have moved to create transparency rights for workers or to extend existing data-transparency rights to the workplace.

In the following Sections, I examine the most prominent of these data laws in the EU—specifically, laws embodied in the GDPR, the AI Act, and the PWD—and analyze how they attempt to address the problems raised by algorithmic labor control. I focus on these laws because they, and in particular the GDPR, have become global models for workers’ data- and digital-protection laws.47 For example, the California Privacy Rights Act (CPRA), which is the most expansive and developed data-rights law for workers in the United States, is explicitly modeled on the GDPR. The EU, meanwhile, may soon consider adopting another algorithmic-management directive modeled after the PWD but applicable to all workers.

A. The General Data Protection Regulation (2016)

The GDPR, the first broadscale law governing data privacy for “natural persons,” went into effect in May 2018 and imposes “obligations onto organizations anywhere [in the world], so long as they target or collect data related to people in the EU.”48 In practice, the GDPR creates regulations “on the usage, storage and movement of data.”49 While the GDPR’s emphasis on making data usage explainable to natural persons is primarily aimed at allowing consumers to make informed decisions about the data collection and data processing to which they consent,50 these obligations can also be leveraged by workers who, by law, have very few privacy rights in the workplace. Even though “opting out” or refusing to consent to a data-processing system at work is effectively impossible without exiting a job, the GDPR provisions could, observers argue, at least help workers to understand how they are monitored and managed.51

The GDPR is a regulation, not a directive, which means that except in very specific instances, EU member states were required to adopt it into national law without changes.52 However, member states were allowed to modify how the law applied to employment, a formal recognition of the distinctive nature of work.53 Article 88, which governs data-processing rights in employment, gives significant leeway to each member state to adopt its own laws with regard to the “data subject’s human dignity, legitimate interests and fundamental rights, with particular regard to the transparency of processing [and] the transfer of personal data.”54 Member states developed a patchwork of data-processing laws in response to Article 88, with varying degrees of protection for workers,55 though these laws all reflect the GDPR’s general approach to workers’ data rights as articulated in Recital 4, which is to find a balance between an employer’s right to monitor its employees in the workplace and the employees’ right to privacy in the workplace.56 On its face, this approach pits the ideal of worker “consent”—once informed about data collection and data-processing, workers are free to exit the job—against the employers’ “legitimate interests.” It also neglects other worker interests, including economic security, with the unstated assumption that those interests are adequately addressed through the existing laws of work, including minimum-wage and just-cause regulations. However, as developed in Part II, given the legal deference to the managerial or employer prerogative, “consent” to workplace monitoring provides only a facade of privacy protections for workers who must work to live.

To date, the primary rights under the GDPR that have been utilized by workers and their representatives to gain transparency over data collection and automated decision-making systems are outlined in Articles 15, 20, and 22. On their face, these Articles allow workers to obtain their data and to understand the logic of the data-processing rules that algorithmically control them. However, even though personal data collected by employers is essentially valueless to workers in the absence of insight into why it is being collected and how it is being used,57 some employers have taken the position that the release of firm logics undercuts the competitive advantages created through algorithmic labor control.58 Consequently, while employers have been more forthcoming in releasing (at least some) personal data, they have been more reluctant to release the logic of their data-processing systems.59

Nevertheless, the GDPR does mandate this kind of logic transparency.60 Articles 15 and 22, most critically, give workers the right to know the rules of the workplace—to understand the automated systems that are used to evaluate their labor, determine their wages, discipline them, and terminate their employment—and to contest the misapplication of these rules.61 Article 15 guarantees natural persons, including workers, the right to be informed about the existence of automated decision-making and to be provided with meaningful information about the logic by which these systems process their data.62 As a complement to this transparency mandate, Article 22 effectively provides workers with the right to have a “human in the loop” when decisions being made have legal or significant effects.63 The plain text of Article 22 mandates that while firms can rely on evaluations from ADSs to make workplace decisions—like terminations—that have significant effects on workers, they cannot rely solely on those systems.64

Article 20, meanwhile, gives workers the right to receive the personal data concerning themselves and the right to data portability. Article 12 requires such data to be provided in a “concise, transparent, intelligible and easily accessible form, using clear and plain language, in particular for any information addressed specifically to a child.”65 However, though many workers have requested their data under Article 20, the data they receive is often practically meaningless to them without further processing or visualization, and advocates argue that the companies “frequently omit the data categories most conducive and necessary for interrogating the conditions of work.”66 Given the obfuscating nature of digital systems, it is nearly impossible for workers (and regulators) to know whether the information requested has been properly made available. For example, in 2019, Uber provided telematic data in response to data-subject access requests, but it stopped doing so in 2020 and 2021.67 Workers who sought this data were left to wonder whether Uber had stopped collecting this safety data, or whether it simply refused to release it to drivers for inspection.68 Without a full-scale public auditing of Uber’s systems, it is impossible to know.

Beyond the rights enumerated in Articles 15, 20, and 22, Article 35 of the GDPR contains another important safeguard against excessive monitoring of natural persons.69 The Article mandates that firms acting as data controllers carry out Data Protection Impact Assessments (DPIAs) prior to processing personal data, if the processing is “likely to result in a high risk to the rights and freedoms of natural persons.”70 In the case of employment, however, this requirement has had little bite: though ADSs that process personal data often pose such consequential risks to workers, rarely are such impact assessments carried out or made public. One reason may be that firms narrowly interpret “personal data” to exclude “de-personalized” banded or grouped data derived from personal data.71 For example, a firm like Uber might repurpose personal data related to how often a worker rejects a ride to train machine-learning systems on what rides to allocate to that worker and when. But the ADS that allocates the work might be using banded data, in which that worker is included in a subset of similarly behaving workers. Thus, a firm may decide that since only data derived from personal data is used to train the machine-learning system, a DPIA is not required for that system.72 Another limitation of Article 35 is the lack of guidance on what constitutes an adequate assessment. As Jacob Metcalf, Emanuel Moss, Elizabeth Anne Watkins, Ranjit Singh, and Madeleine Clare Elish have written, “What counts as an adequate assessment, when that assessment happens, and how stakeholders are made accountable to each other are contested outcomes shaped by fraught power relationships.”73 This is a particularly salient concern for the workplace.

Since the implementation of the GDPR, many of the rights enumerated by these Articles have been undermined in practice. In some cases, firms have released the data to workers in non-machine-readable formats, making it impossible to analyze even when workers partner with data analysts.74 In other cases, definitional ambiguities have prevented workers from gaining the insights that they need.75 Companies have also frequently argued that releasing the data-processing logic is tantamount to releasing “trade secrets,” or that doing so would harm the security of others.76 In the absence of affirmative litigation—which requires substantial resources that most workers lack and puts workers at risk of retaliation—workers who dare exercise their rights must accept whatever data firms provide to them.

TABLE 1. summary of key data rights afforded to workers under the gdpr

B. The Artificial Intelligence Act (2024)

The AI Act, at the time of writing, is the newest of the European laws to safeguard against the potential impacts of AI systems.77 The Act follows a “risk-based approach,” reinforces GDPR data rights, and creates some new transparency and assessment mandates for the use of AI at work.78 In contrast to the GDPR, which places the burden on the worker to invoke their “right to know”79 when automated decision-making systems are being used, the AI Act directs employers to inform workers and workers’ representatives affirmatively that they are subject to these AI systems.80 But this affirmative duty does not include any requirement to explain the workplace rules or system logics embedded in the AI, thus leaving workers in the dark about how their pay is determined, how they are evaluated, when they might be disciplined or terminated, and other consequential impacts of these systems. Together with the exercise of rights in Articles 15 and 22 of the GDPR, the knowledge that an employer is using AI systems may be useful during collective bargaining, but for the roughly seventy-seven percent of nonunionized workers across the EU member states, the notification by itself does little to curb any subsequent harm.81 Again, the underlying principle of this provision is one of consent: once a worker is informed of the use of the AI system, they are free to exit the job; if they stay, they are acquiescing to being subject to and managed by AI. For many low-wage, economically precarious workers, however, the exit option is illusory, and it becomes ever more limited as workplaces increasingly utilize machine-learning systems for labor management.

More promisingly, the Preamble of the AI Act outright bans the production and use of AI that emotionally manipulates people

to engage in unwanted behaviours, or to deceive them by nudging them into decisions in a way that subverts and impairs their autonomy, decision-making, and free choices . . . whereby significant harms, in particular having sufficiently important adverse impacts on . . . financial interests are likely to occur.82

The application of this prohibition to the employment context remains unclear. This prohibition could be interpreted to ban some of the interactive systems that on-demand algorithmic-management companies use to allocate work and determine pay.83 For example, if firms treat their workforce as self-employed (a problem addressed by the PWD84), then AI systems used to nudge workers to accept work that they would not otherwise accept and to prod them to move to places where they would not otherwise go may be affirmatively prohibited.85 But in the context of legally recognized formal employment, such systems produced by the employer would likely be protected by the managerial prerogative.86 In those contexts, the AI would likely be treated as high-risk but not prohibited entirely.87

Indeed, the AI Act considers the use of most AI in the employment context to be unambiguously high-risk, an implicit recognition of workers’ economic dependence on employment for survival and of the doctrinal implications of the managerial prerogative.88 The Act divides firms into “providers” and “deployers.”89 Employers who purchase AI to use on their workforce—the deployers—have limited obligations under the Act. Most of the regulatory onus falls on the providers of AI. Specifically, in recognition of the iterative and changing nature of machine-learning systems, the AI Act mandates that providers of AI developed for hiring, performance evaluation, management, and monitoring—including software that sets wages, evaluates, and disciplines workers—develop a risk-management system by August 2026, when the Act’s obligations for high-risk systems become applicable.90 This system must include testing mandates91 that follow a product through its life cycle, including in its post-market phase when the product is purchased and used by a deployer (the system is thus reliant on compliance by deployers with monitoring and reporting obligations).92 Providers must specifically examine how the system is “likely to affect the health and safety of persons, have a negative impact on fundamental rights or lead to discrimination prohibited under [EU] law.”93

Responsibility for evaluation, recordkeeping, testing, and risk assessment likewise falls primarily on the provider, not on the deployer or on an unbiased, public third party.94 Instead of directly mandating public assessments of these systems at the deployment level, as would be ideal, the Act requires self-regulation by the firms that create the machine-learning systems, which are required to maintain human oversight and monitoring for specific issues—most relevant here, violations of the EU’s Fundamental Rights and the health and safety of workers.95 But the Act provides no guidelines for evaluating harms related to the workplace. How is a provider to test for “health and safety” impacts? What are the criteria for evaluating a system that creates low and unpredictable wages in relation to worker health and safety? Does the emotional distress caused by an AI system that invisibly evaluates workers make the system “unsafe”? These are questions that remain unanswered. As with the GDPR, the lack of clear guidelines around harm and fairness calls into question the efficacy of these life-cycle assessments, even if they are carefully and inclusively conducted.96

C. The Platform Work Directive (2024)

While the GDPR and the AI Act offer rights to workers of all stripes, the PWD explicitly emphasizes that the rights it enumerates apply only to platform workers, who are granted more expansive data and data-processing rights than any other workers in the EU.97 “Platform work” is defined narrowly as “a form of employment in which organizations or individuals use an online platform to access other organizations or individuals to solve specific problems, or to provide specific services in exchange for payment.”98 At the time of writing, though the PWD has passed the EU Parliament, it has not yet been transposed into national law by member states.99 Thus, the analysis in this Section is speculative; nevertheless, this directive is particularly useful to evaluate because, compared to the GDPR and the AI Act, the PWD provides broader and arguably more effective rights to a specific subset of workers who are subject to ADSs and AMSs.100 Unlike the two previously discussed bodies of legislation, the PWD was written with platform workers in mind and more expansively addresses the problems they face.101

Specifically, the PWD offers “more specific safeguards concerning the processing of personal data by means of automated systems in the context of platform work” and recognizes that “the consent of persons performing platform work to the processing of their personal data cannot be assumed to be freely given.”102 Unlike both the GDPR and the AI Act, the PWD reaches beyond transparency, consent, and impact assessments to affirmatively prohibit the use of certain processing of personal data relating to the individual’s body, mental state, protected identity, or personal beliefs.103 These are not full-scale prohibitions, however. For instance, the PWD may permit automated processing if the data is depersonalized through banding, a loophole that could affect groups of workers exercising their fundamental rights, including their freedom of association.104 Moreover, while it bans the processing of biometric data, it allows “biometric verification” such as the use of facial recognition technologies to identify workers, even though such systems have a higher false-positive rate for people of color and can lead to unfair termination.105

The PWD may also fail to attend to the structural realities of digital control. Critically, the PWD does not affirmatively prohibit automated decision-making in contexts related to hiring, pay determination, work allocation, discipline, and termination.106 Instead, it extends the rights embedded in Article 35 of the GDPR to the context of platform work by mandating that firms carry out impact assessments before new ADSs are deployed.107 Such firms must “carry out a data-protection impact assessment” to evaluate the impact of ADSs’ processing of personal data on the rights and freedoms of persons performing platform work.108 The firms’ assessment must be carried out every two years and shared with workers and workers’ representatives.109 One problem with this approach, however, is that by allocating the responsibility for this evaluation to the firms themselves (as opposed to mandating a public audit), the PWD, like the AI Act, neglects the enforcement problems that arise with black-box systems. Given the competitive incentives for firms to maintain secrecy around these systems, how does a worker or workers’ representative know that the impact assessment includes all the AMSs and ADSs that the firm deploys?

A second and more significant problem is that like the GDPR, the PWD fails to lay out meaningful standards or criteria for the impact evaluations of ADSs, or to specify affirmative steps that must be taken if the ADSs are found to be harmful. The presumption embedded in the PWD is that if the assessment finds that the evaluated systems detrimentally impact workers’ fundamental rights or violate the labor laws of a particular member state, the firm will then refrain from deploying the system. But many of the harms experienced by platform workers—including those that arise from algorithmic wage-discrimination practices and automated termination practices—do not necessarily violate any existing fundamental rights or the labor rights enumerated by member states. For example, if an ADS uses personal data to determine a worker’s wages, then as long as the wages do not fall below the minimum wage and do not differentially impact workers based on protected identities, the practice is not per se unlawful under existing employment laws. Indeed, even though such algorithmic wage discrimination has clearly identified harms to workers—such as increasing income uncertainty110 and workforce division111—an impact assessment by a platform company is not likely to capture these harms or consider them when deploying the systems, in large part because the systems serve the firm’s profit interests.

The PWD also contains transparency obligations in relation to AMSs and ADSs used by the platform company. On their face, these obligations are stronger than those embodied in the GDPR because they place an affirmative obligation upon the platform companies rather than relying on workers to exercise these rights. Per the directive, platform companies must provide information to workers

in relation to automated monitoring systems and automated systems which are used to take or support decisions that affect persons performing platform work, such as . . . their access to . . . work assignments, their earnings, their safety and health, their working time . . . , their promotion or its equivalent, and their contractual status, including the restriction, suspension or termination of their account.112

This may not only force firms to make their algorithmic logics public, but also make the implications of such systems the subject of public debate and contention. Still, the nature of machine-learning systems puts this outcome in question.113

Though the PWD has yet to be adopted by member states, we can make some predictions about its effects. First, because the PWD extends greater digital rights to “platform workers” than to other workers, the directive may invite firms to engage in definitional arbitrage not only with respect to whether their workers are “employees” but also as to whether they themselves are “platform companies,” thus undermining the potential impact of the law’s assessment and transparency obligations. Second, even assuming proper classification, there is reason to be concerned about the directive’s ability to curb harms caused by ADSs. As the case studies discussed in Part III show, transparency and information-sharing on their own are not immediately useful in the context of a workplace in which digital systems are constantly changing and in which firms rely on these systems to create competitive market advantages.

The most promising parts of the PWD are its outright prohibitions, not only because they affirmatively protect workers from technologies currently causing extensive harms across the EU, but also because they gesture toward the possibility of an alternative approach to ADSs and AMSs in which data laws reach beyond transparency to focus on direct harm avoidance. Indeed, an absolute ban on certain data-processing systems may be appropriate when the outcome of deploying such systems is likely to be fundamentally at odds with fair, equitable, and secure work. This idea is further developed in Part III.

TABLE 2. summary of key data rights afforded to workers under the pwd