Ethical AI Implementation
As artificial intelligence (AI) becomes more embedded in the daily operations of businesses, implementing it is no longer optional but essential. With the recent adoption of the EU AI Act, Europe has taken a global leadership role in shaping responsible AI use, and other countries are set to follow in its footsteps with similar legislation. It is therefore important to ensure AI is implemented ethically.
Europe’s approach to AI is grounded in values like transparency and respect for human rights. The EU AI Act, alongside existing regulations such as the General Data Protection Regulation (GDPR), sets a clear legal and ethical framework.
This means that in the EU, ethical AI is not just a matter of good practice; it is a legal requirement. UK businesses with EU operations may also need to comply or face fines greater than those under the GDPR: EU AI Act penalties run up to €35 million or 7% of global annual turnover, whichever is greater.
Even if UK businesses do not fall within the scope of the EU AI Act, by following its principles and processes they can protect their people and their businesses.
Below are some tips on how organisations can introduce AI systems that are not only effective but also ethical and compliant:
1. Understand the principles of the EU AI Act
The EU AI Act classifies AI systems into four risk levels:
- Unacceptable risk (e.g. social scoring, emotion recognition in the workplace): Banned in the EU.
- High risk (e.g. recruitment systems): Heavily regulated.
- Limited risk (e.g. chatbots): Subject to transparency obligations.
- Minimal risk: Largely unregulated.
Under the Act, all AI systems used in organisations require classification; for high-risk systems, robust documentation, human oversight, and risk management plans are required.
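The classification step above can be sketched as a simple internal inventory helper. The four tier names come from the Act, but the `OBLIGATIONS` mapping and the `obligations_for` function are hypothetical illustrations of how an organisation might record controls per tier, not legal advice:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned in the EU
    HIGH = "high"                  # heavily regulated
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # largely unregulated

# Hypothetical controls per tier -- illustrative only, not legal advice.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["do not deploy"],
    RiskTier.HIGH: ["robust documentation", "human oversight", "risk management plan"],
    RiskTier.LIMITED: ["disclose AI use to users"],
    RiskTier.MINIMAL: [],
}

def obligations_for(system_name: str, tier: RiskTier) -> list[str]:
    """Return the compliance controls recorded for an AI system's tier."""
    return OBLIGATIONS[tier]

# Example: a recruitment screening tool is a high-risk system under the Act.
print(obligations_for("recruitment screening tool", RiskTier.HIGH))
```

Keeping a register like this for every AI system in use makes it straightforward to show a regulator which controls apply to which system.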
2. Establish Ethical AI Governance
Good governance is the backbone of ethical AI.
- Set up an AI Ethics Committee to oversee AI implementation: a cross-functional team to evaluate risks and guide ethical decisions.
- Carry out Ethical Impact Assessments: assess how AI systems might affect individuals and the organisation before deployment.
- Ensure data protection is paramount when handling personal data.
3. Engage Stakeholders, Build Trust & Obtain Buy-in
AI impacts people, so include them in the conversation.
- User Engagement: Involve users and communicate proposals from the outset. Explain the reason(s) for introducing AI, the expected benefits, and any changes that will take place.
- Transparency Reports: Share how your AI is being, or will be, used and its impact.
- Training: Provide teams with effective training on how to use AI.
- Update Job Descriptions to reflect the future roles and responsibilities.
- Update People Strategies to reflect the future needs of the business.
4. Be Transparent
Transparency is key: organisations should be able to explain how their AI systems work.
- User Disclosure: Ensure Policies and Privacy Notices let users know when they’re engaging with an AI system.
- Explainability: Offer clear, understandable explanations of how the AI system operates and makes decisions.
- Documentation: Keep detailed and accurate records, and carry out risk assessments and testing.
5. Ensure Accountability and Oversight
Accountability means being answerable for AI outcomes—good and bad.
- Assign Responsibility: Identify who is responsible for each AI system.
- Human Oversight: Ensure humans can override critical decisions.
- Logging and Auditing: Keep logs to track decision-making and put processes in place to enable investigations if something goes wrong.
The ISO has published a useful framework specifically for AI: ISO/IEC 42001:2023 (AI management systems).
By following the principles of change management and implementing AI ethically, you can build trust with your workforce. At A Human Resource, we combine deep HR expertise with legal and technical knowledge of AI systems and the EU AI Act. We advise and guide our clients to ensure due diligence and the ethical deployment of AI. Find out more here: Artificial Intelligence | A Human Resource | AHR