

Responsible AI: Why ethics matter in artificial intelligence

Nicola Cain, Handley Gill Limited

Posted: Tue 7th May 2024

While artificial intelligence (AI) is an emerging technology and is not currently subject to specific regulation in the UK, that doesn't mean existing laws and regulations don't apply.

The development and use of AI models must comply with existing laws and regulations, from data protection and intellectual property to human rights, including privacy and the right to be free from discrimination.

The mere fact that an AI model is available to use in the UK doesn't mean that using it is lawful or compliant or that it doesn't expose your organisation to liability.

Many AI developers offer free access to their tools, often subject to terms and conditions that allow them to use prompts and uploaded data to train the AI, and that leave users wholly liable for any adverse consequences.

Staff can be tempted to submit confidential, commercially sensitive information and/or personal data to AI models that may be operated overseas, which can unwittingly result in legal liability for their employers.

Ensuring that your organisation has decided whether and how staff may use AI, that staff receive appropriate training, and that controls, governance and oversight mechanisms are in place will empower your organisation to capitalise on the opportunities offered by AI, while doing so safely, responsibly and ethically.

It is sensible to take a risk-based approach to the use of AI models and identify preferred suppliers where staff are permitted to use AI. For low-risk uses of AI, such as those which don't involve any personal data, IP, confidential information or decision-making, an approach limited to staff training and appropriate controls may be sufficient. For higher-risk uses of AI, a more in-depth approach to compliance and governance will be required.

Being transparent about your use of AI and your approach to compliance can help build trust with those affected by it, such as your employees, customers or third parties.

Watch this webinar to learn how to harness the power of AI:


These are steps you can take to ensure that your organisation's approach to AI is safe, responsible, ethical and legally compliant:

  1. Brief the board or other organisational leadership on artificial intelligence (AI) opportunities and risks

  2. Secure a strategic decision as to the organisation’s AI risk appetite

  3. Prepare and publish a policy on the use of AI, detailing what AI tools can be used, for what use cases, upon what conditions and subject to what safeguards

  4. Provide training and wider awareness raising to staff on AI and on the specific AI tools they can access

  5. Block access via your network to unauthorised AI tools

  6. Identify each intended AI use case and conduct an AI risk assessment in relation to each

  7. Only authorise the use of AI tools appropriate to the risk of each relevant use case

  8. Design and procure AI models with built-in safeguards

  9. Establish a gateway process for AI procurement/deployments

  10. Implement safeguards appropriate to the nature and scale of risk

  11. Consult with affected individuals in advance where possible, or with appropriate representatives or stewards to represent their interests

  12. Establish a beta phase of testing AI tools, running them alongside traditional approaches to identify benefits, as well as any divergence which may reveal potential deficiencies

  13. Be transparent about the use of AI, particularly where individuals are affected

  14. Test and monitor the operation of the AI model to confirm its accuracy, reliability and propriety

  15. Establish a reporting mechanism to enable users to report unexpected or inappropriate outcomes and act upon reports

  16. Monitor the way AI is being used in practice and whether it is impacting user behaviour in unexpected ways

  17. Ensure AI is not the sole mechanism for decision-making impacting individuals

  18. Establish a governance and oversight mechanism

  19. Consider the impact of the use of AI by your supply chain, both for your staff and for your organisation’s risk profile

  20. Remember to consider and meet wider legal and regulatory compliance obligations specific to your industry and/or use of AI

  21. Engage an iterative approach, reviewing and revising your safeguards and governance to reflect emerging risks and regulation

  22. Establish a process for individuals to raise concerns and seek redress

