Artificial intelligence (AI) is revolutionising the recruitment process, offering efficiency and scalability in sourcing, screening, and selecting candidates. However, as with any powerful technology, AI in recruitment must be deployed responsibly to avoid risks related to privacy, bias, and data security. Recognising these challenges, the UK Information Commissioner’s Office (ICO) recently conducted audits on AI tools used in recruitment, with a focus on compliance with data protection laws and ethical best practices.
Understanding the ICO’s AI audits
The ICO’s audits examined AI tools utilised in recruitment, particularly those used for:
- Sourcing candidates – Matching individuals to job roles and enhancing workforce diversity.
- Screening applicants – Assessing skills and predicting job interest.
- Selection processes – Evaluating candidates using methods like behaviour-based games, psychometric tests, and video interviews.
Notably, biometric technologies and generative AI were excluded from this review. The primary goal of the audits was to assess privacy risks, data protection compliance, and ethical considerations surrounding AI deployment in recruitment. While AI brings undeniable benefits, including speed and efficiency, it also introduces potential issues such as bias, unfair decision-making, and security vulnerabilities.
The ICO emphasised that AI systems must adhere to strong data protection standards aligned with the UK’s AI regulation principles—transparency, fairness, accountability, and security. Ensuring compliance in these areas allows businesses to innovate responsibly while maintaining public trust.
Key areas of focus and ICO’s recommendations
To support businesses in using AI ethically in recruitment, the ICO provided detailed recommendations in six critical areas:
1. Fairness in AI decision-making
Ensuring fairness in AI-driven recruitment is essential to avoid discrimination or unintended bias. The ICO underscored that:
- AI providers and recruiters must monitor systems for fairness, accuracy, and potential bias.
- Being more accurate than random selection does not automatically make AI fair—fairness depends on decision-making processes and human oversight.
- Sensitive data (such as special category data) used to test for bias must be detailed enough to assess that bias effectively, and its processing must comply with data protection laws.
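The ongoing bias monitoring the ICO calls for can be approximated in practice by comparing selection rates across candidate groups. A minimal sketch, using illustrative group labels and the widely cited "four-fifths" rule of thumb as a review threshold (neither the data nor the threshold comes from the ICO's audits):

```python
from collections import defaultdict

def selection_rates(outcomes):
    """outcomes: iterable of (group, selected) pairs, selected a bool."""
    totals = defaultdict(int)
    selected = defaultdict(int)
    for group, picked in outcomes:
        totals[group] += 1
        if picked:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def adverse_impact_ratio(rates):
    """Ratio of the lowest to the highest group selection rate.
    Values below ~0.8 (the 'four-fifths' rule of thumb) flag the
    system for closer human review -- they do not prove unfairness."""
    return min(rates.values()) / max(rates.values())

# Illustrative data only: group A selected 2 of 4, group B 1 of 4.
outcomes = [("A", True), ("A", True), ("A", False), ("A", False),
            ("B", True), ("B", False), ("B", False), ("B", False)]
rates = selection_rates(outcomes)
ratio = adverse_impact_ratio(rates)
```

As the ICO notes, a metric like this is only an input to human oversight: a ratio above the threshold does not make a system fair, and one below it is a prompt to investigate, not an automatic verdict.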
2. Transparency and explainability
Transparency is fundamental in AI usage. Candidates must clearly understand how their personal data is used in recruitment decisions. The ICO recommends:
- Recruiters should provide candidates with clear privacy notices detailing how AI processes their data.
- AI providers must supply technical details on how their systems work to facilitate transparency.
- Contracts between recruiters and AI providers should define responsibility for communicating privacy information to candidates.
- Recruiters must fully understand AI decision-making processes to ensure they can explain them effectively to applicants.
3. Data minimisation and purpose limitation
AI systems should process only the minimum personal data necessary for their intended function. The ICO advised that:
- AI providers should evaluate the minimum data needed for developing, training, testing, and operating their systems.
- Data should be used solely for its intended purpose and should not be retained, shared, or repurposed beyond what that purpose requires.
- Recruiters should complete a data protection impact assessment (DPIA) before processing personal data to identify and mitigate risks. DPIAs must be kept up to date through ongoing monitoring.
4. Data protection impact assessments (DPIAs)
A DPIA is a critical compliance tool for AI in recruitment. The ICO emphasised that:
- AI providers and recruiters must conduct a DPIA before deploying high-risk AI systems.
- The DPIA should assess privacy risks, outline mitigation measures, and balance privacy against other interests.
- Even AI providers acting as processors should conduct DPIAs as best practice to demonstrate accountability in managing privacy risks.
5. Data controller and processor roles
Clearly defining the roles of AI providers and recruiters in data processing is crucial for accountability. The ICO clarified that:
- Recruiters typically act as data controllers, meaning they bear primary responsibility for compliance.
- AI providers can be controllers if they determine how personal data is used (e.g., if they use recruitment data to build a central AI model shared across clients).
- Contracts must specify whether AI providers act as controllers, joint controllers, or processors to delineate responsibility for data protection and compliance.
- Recruiters, as controllers, must provide clear instructions to AI providers on data handling and regularly verify compliance.
6. Lawful basis for processing personal data
AI providers and recruiters must ensure they have a lawful basis for processing candidate data. The ICO recommends:
- Determining the lawful basis for processing personal data before any AI-driven recruitment begins.
- Identifying an additional legal condition if handling special category data.
- Clearly documenting and explaining the lawful basis in privacy notices and contracts.
- If relying on legitimate interests, conducting a legitimate interests assessment (LIA).
- If using consent, ensuring it is specific, opt-in, clear, properly recorded, regularly refreshed, and easy to withdraw.
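The properties the ICO expects of valid consent (specific, opt-in, recorded, refreshed, withdrawable) map naturally onto a stored record. As an illustration only, with field names and the refresh window chosen by us rather than prescribed by the ICO, such a record might be sketched as:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from typing import Optional

@dataclass
class ConsentRecord:
    candidate_id: str
    purpose: str                       # the specific processing consented to
    granted_at: datetime               # when the opt-in was recorded
    withdrawn_at: Optional[datetime] = None

    def withdraw(self) -> None:
        """Withdrawing consent should be as easy as giving it."""
        self.withdrawn_at = datetime.now(timezone.utc)

    def is_valid(self, refresh_after: timedelta = timedelta(days=365)) -> bool:
        """Consent lapses if withdrawn or not refreshed within the window."""
        if self.withdrawn_at is not None:
            return False
        return datetime.now(timezone.utc) - self.granted_at < refresh_after

record = ConsentRecord("c-123", "AI-assisted CV screening",
                       granted_at=datetime.now(timezone.utc))
```

Keeping one record per candidate per purpose, rather than a single blanket flag, is what makes consent "specific" and auditable.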
Balancing AI innovation with ethical responsibility
AI offers unparalleled efficiency in recruitment, but it is prone to built-in biases and privacy risks that require continuous monitoring. To ensure compliance with UK GDPR, recruiters and AI providers must prioritise fairness, transparency, and accountability. This includes:
- Regularly monitoring AI for bias and mitigating potential discrimination.
- Balancing data collection with data subject rights.
- Documenting decision-making processes to demonstrate compliance.
- Conducting routine DPIAs to manage privacy risks effectively.
- Implementing data minimisation strategies to avoid excessive data collection.
By following these principles, companies can harness the power of AI while ensuring ethical, transparent, and legally compliant recruitment processes that foster trust among job candidates and regulators alike. For additional help with AI compliance, contact a member of our team today who can support you and your business further.