AI in Recruitment: What Employers Need to Know About Automated Decision-Making
The use of artificial intelligence (AI) in recruitment is increasing rapidly. Many organisations now use AI-enabled tools to screen CVs, score candidates, run assessments, and analyse behaviour.
These tools can improve efficiency, support consistency, and help reduce bias. However, they also raise important legal and practical issues, particularly where decisions are being made with little or no human involvement.
Recent guidance from the Information Commissioner's Office (ICO), the UK data protection regulator, alongside developments in Ireland and the EU, makes it clear that organisations need to take a more structured approach.
Automated Decision-Making in Recruitment
A key issue is that many organisations do not realise they are using automated decision-making (ADM).
Tools described as “supporting decisions” are often, in practice, making decisions about candidates. This is particularly relevant where systems are used to filter applications, rank candidates, or determine progression.
Where there is no meaningful human involvement, this is likely to be classed as solely automated decision-making, even if a person sits somewhere in the process.
What Counts as Meaningful Human Oversight and Involvement?
For human oversight and involvement to be valid, it must be real.
The person involved must understand how the system reached its decision, have the authority to change it, and actively influence the outcome before it takes effect. If these conditions are not met, the process may count as solely automated. In high-volume recruitment, this level of involvement is difficult to achieve consistently unless it is built in deliberately.
Developments in the UK
The Data (Use and Access) Act (DUAA) introduces a more flexible approach to automated decision-making.
Previously, the law in this area (Article 22 UK GDPR) treated solely automated decision-making as largely prohibited, with limited exceptions. The DUAA reframes this position: rather than prohibiting automated decision-making, it permits it subject to safeguards, including a right to challenge automated decisions.
As AI develops in this area, the focus is now on allowing its use with the right safeguards in place. This includes transparency, the ability to challenge decisions, and appropriate protection of individual rights.
This gives organisations more scope to use AI but also increases the expectation that it is properly managed.
Developments in Ireland
In Ireland and across the EU, the position is more clearly defined.
The EU AI Act classifies recruitment systems as high-risk, meaning organisations must have clear governance, human oversight, risk assessments, and documentation in place.
These requirements sit alongside GDPR and apply to organisations recruiting within the EU, including UK-based employers hiring EU candidates.
Key Areas Employers Should Focus On
Where AI is used in recruitment, there are some core areas to get right:
Lawful basis
Ensure there is a valid lawful basis for processing candidate data. Legitimate interests may be appropriate, but stricter rules apply where special category data is involved.
Transparency
Be clear with candidates about how AI is used and how decisions are made. This needs to be upfront and easy to understand. We would suggest this is explained in your Applicant Privacy Notice.
Safeguards
Candidates must be able to request human review, challenge decisions, and provide additional information.
Fairness and bias
Understand how tools are tested, monitor outcomes, and question vendors on how bias is managed. Those using these tools need a full understanding of how they work; ATS providers should be able to supply this information, and you should carry out a fresh DPIA with it in mind.
DPIAs
Carry out meaningful data protection impact assessments that reflect how the system actually works, rather than treating them as a high-level box-ticking exercise.
What This Means in Practice
For many organisations, AI has been introduced through routine software updates, without a full internal review of its impact on data protection compliance and candidate communications.
A practical starting point is to:
- Map where AI is used in recruitment
- Identify where decisions may be automated
- Review the level of human involvement
- Check transparency and, most importantly, documentation – your Data Protection Policy and/or Privacy Notices
This is particularly important where tools directly affect candidate outcomes.
Final Thoughts
AI has clear benefits in recruitment, and it is increasingly needed to manage application volumes, but it must be used with proper oversight.
One issue we are increasingly seeing is that data protection policies and privacy notices state that automated decision-making is not used, when in reality it may be happening.
Organisations should review these documents to ensure they reflect actual practice, and carry out DPIAs so those documents can be updated to clearly explain where AI is used and what safeguards are in place.
Getting this right is a simple but important step in reducing risk and maintaining trust.
At A Human Resource, we support organisations across the UK and Ireland in reviewing and implementing AI, including providing compliance and regulatory guidance and building practical governance frameworks. Get in touch to discuss how we can support you.