AI and data privacy

The EU AI Act applies to users of AI systems that affect people in the EU, including employers using AI for decision-making about EU-based employees. The AI Act overlaps, in part, with the EU GDPR. But note: complying with one doesn’t guarantee compliance with the other.

Key takeaways

  • We’re beginning to see legal regulation of AI for the first time, notably through the EU AI Act.
  • Countries such as Italy are introducing additional restrictions on how employers use AI in the workplace. In other regions, particularly in Asia Pacific, governments are issuing guidelines on using AI systems.
  • In other countries, the focus is on data protection more generally. Germany is considering an Employee Data Act.

AI’s much-hyped potential continues to attract scrutiny. The ability of artificial intelligence (AI) to mimic human intelligence – what it’s often most lauded for – poses challenges for employers, especially global businesses. On the one hand, there’s scope for cost reductions, improved efficiency, and data-driven decision-making. On the other hand, these same opportunities create issues, including those related to bias, data privacy, and accountability. Bias is a particular risk if AI tools are trained on datasets that inadvertently favor certain groups, resulting in biased hiring decisions and breaches of anti-discrimination laws.

The EU AI Act takes center stage

The EU Artificial Intelligence Act, the first of its kind globally, came into force in August 2024 and has a tiered implementation timeline. From 2 February 2025, the Act bans emotion recognition AI systems in workplaces (unless used for medical or safety reasons). These systems analyze facial expressions, tone of voice, and other signals to identify a person’s emotional state.

From the same date, companies using AI must make sure their employees are AI literate. Employers must ensure a sufficient level of AI literacy among staff and other persons involved in operating and using AI systems, including how to deploy those systems and how to manage and mitigate the associated risks. From 2 August 2026, strict rules apply to employers that use high-risk AI systems – those used to recruit, select, manage, or evaluate workers, or to determine access to self-employment, even if simply for CV sorting.

Notably, the AI Act has extraterritorial reach. Multinationals that use AI systems outside the EU in their decision-making about workforces inside the EU could find themselves caught by the Act. In other words, global companies with global people management systems need to beware. Breaching the Act could lead to a fine of up to €35 million or 7 percent of global annual turnover, whichever is higher, as well as reputational damage.

   

Country-level laws

As a regulation, the Act doesn’t need national implementing legislation. But countries are still addressing the issue at a local level. On 23 April 2024, Italy proposed a bill introducing country-specific provisions on AI. Three articles in the bill relate to employment. Article 10 requires AI used in the workplace to be safe, reliable, and transparent, and not to infringe the confidentiality of personal data. Article 11 establishes a center to monitor AI in the workplace, with the aim of maximizing the benefits and minimizing the risks. And Article 12 provides that in the intellectual professions, human critical thinking must prevail, with AI used only to support professional activities. We expect to see further local proposals as the Act comes fully into force. Other countries, such as Germany, already protect employees by mandating co-determination rights for works councils when employers introduce AI-driven systems, including employee monitoring.

The United States doesn’t have a federal law governing the use of AI, and cities and states have passed their own laws to control or limit it. New York City, for example, has taken steps to combat inadvertent discrimination stemming from AI tools used in recruitment. Employers must tell applicants if AI tools are used and must perform yearly bias audits to ensure algorithms aren’t discriminating against applicants with certain protected traits. In practice, employers have largely been able to take steps to avoid the application of the New York City law, though states such as Colorado and Illinois have passed alternative and broader AI laws that apply to employment.

    

Employees’ rights to privacy

Data privacy remains an equally significant topic – a close companion of AI, given the volume and type of data used to train AI systems. Employers planning to use AI for decision-making must consider the EU GDPR, which gives job applicants and employees the right not to be subject to decisions based solely on automated processing. In effect, the GDPR bans hiring and firing decisions made without human oversight. Employers must therefore apply human judgment throughout the decision-making process, and certainly before a final decision is made.

Germany has been trying to introduce a dedicated employee data protection act since the 1980s. The latest attempt, a draft Employee Data Act published in October 2024, addresses the processing of employee data before, during, and after employment relationships. The draft bill is wide-ranging. It covers consent, for example, when publishing photos on a company intranet; it clarifies the requirements for employee data protection provisions in collective agreements, such as works council agreements; and it grants co-determination rights when appointing and dismissing internal and external data protection officers.

Passing the new law this year seems unlikely given its early legislative status, the current political conditions, and the upcoming federal election. But the draft bill does paint an intriguing picture of what the future might hold.

   

Guidelines, not legislation

Unlike the EU, few other jurisdictions have formal regulations governing AI in the workplace or AI use in general. Singapore, however, has issued guidelines that clarify how its Personal Data Protection Act applies to the use of personal data in AI systems. The guidelines, which emphasize meaningful consent, cover systems employers use to optimize processes and develop insights from data collected on their employees.

Likewise, the Indonesian government, in December 2023, issued two sets of guidelines on the use of AI: the Financial Services Authority Ethical Guidelines, which apply to the fintech sector, and the Ministry of Communication and Informatics (MOCI) Circular Letter No. 9, which applies to all public and private electronic system operators that take part in AI-based programming. The MOCI circular emphasizes inclusivity, non-discrimination, and transparency in the programming of AI systems. This may signal the direction the country intends to travel on employers’ use of AI in the workplace.

The United Kingdom is another jurisdiction relying on guidelines rather than formal legislation to shape employers’ approaches to AI. On 6 November 2024, the UK Information Commissioner’s Office published “AI tools in recruitment” after auditing organizations that develop or provide AI tools used in recruitment. The guide recommends ways to mitigate privacy and related risks of bias and unfairness, and it explains why seemingly harmless practices, such as scraping personal data from social media or job-networking sites, risk non-compliance with the UK GDPR.

    

Monitoring misconduct

An important principle of data privacy worldwide is transparency about how employee data is collected and used. This creates tension when covert employee monitoring reveals serious misconduct. Different countries apply different standards, and the balance often shifts over time. Germany is restrictive: employers generally can’t rely on this type of information unless their interests in using the data outweigh the interests of the employees. France historically took a similar approach, but that recently changed dramatically, with a Supreme Court decision allowing employers, under strict conditions, to rely on covert evidence to justify dismissals. The position is similar in the United Kingdom, where courts and tribunals are typically more interested in what the evidence shows than in how it was obtained.
