AI in Hiring: A High-Risk Use Case in Practice
The use of AI in recruitment, including CV screening, candidate ranking, and shortlisting, raises several legal considerations under the General Data Protection Regulation (GDPR) and the EU AI Act.
While often presented as "assistive" tools, these systems can in practice have a material impact on employment opportunities, which places them under heightened regulatory scrutiny.
Automated decision-making under GDPR
Under Article 22 GDPR, individuals have the right not to be subject to decisions based solely on automated processing that produce legal or similarly significant effects, a category that includes hiring decisions.
In this context, the European Data Protection Board (EDPB) has clarified that human involvement must be meaningful: it must entail a genuine assessment of the decision and the authority to change the outcome.
In practice, this threshold is not always met. Where human review is limited to validating or confirming system outputs, the decision may, depending on the context, still be considered effectively automated.
DPIA requirements in recruitment contexts
Under Article 35 GDPR, organisations are required to carry out a Data Protection Impact Assessment (DPIA) where processing is likely to result in a high risk to individuals.
AI systems used in recruitment frequently meet this threshold, particularly where they involve:
profiling or scoring candidates
automated filtering or ranking
processing at scale
In practice, the absence of a DPIA is a recurring issue in regulatory reviews.
AI Act: classification as high-risk systems
The EU AI Act introduces a more structured approach: AI systems used in employment, including recruitment, selection, and candidate evaluation, are classified as high-risk under Annex III.
This classification triggers obligations that go beyond data protection, including:
Implementation of a risk management system
Ongoing assessment of risks to individuals
Documentation and traceability
Human oversight mechanisms
Importantly, these obligations apply regardless of whether the system is developed internally or sourced from a vendor.
The role of AI risk assessments
Taken together, these frameworks converge on a single requirement: organisations must be able to demonstrate that the risks associated with AI use have been identified, assessed, and mitigated.
This typically requires:
a DPIA under GDPR, where applicable
a broader AI risk assessment framework aligned with the AI Act
The AI risk assessment should be an ongoing process, tied to how the system performs in real-world conditions, rather than a one-off exercise.
Practical considerations
From an operational perspective, organisations should focus on:
understanding how the system influences decision-making outcomes
ensuring that human oversight is substantive rather than formal
documenting the logic, risks, and limitations of the system
assessing vendor tools independently, rather than relying solely on contractual assurances
These elements are often determinative in how regulators assess compliance.
The use of AI in hiring is not prohibited, and it need not prevent companies from becoming more efficient or pursuing workforce growth strategies; it is, however, treated as a high-risk use case under EU law.
—
This article is provided for general informational purposes only; it does not constitute legal advice tailored to your specific situation.
Every business is different. For personalized consultancy, schedule a consultation call or write to us directly at 📧 anamaria@legallyremote.online.