Responsible AI: Why ethics matter in artificial intelligence
Posted: Tue 7th May 2024
While artificial intelligence (AI) is an emerging technology and is not currently subject to specific regulation in the UK, that doesn't mean existing laws and regulations don't apply.
The development and use of AI models must comply with laws and regulations covering data protection, human rights (including privacy and the right to be free from discrimination) and intellectual property.
The mere fact that an AI model is available to use in the UK doesn't mean that using it is lawful or compliant or that it doesn't expose your organisation to liability.
Many AI developers offer free access to their tools, often subject to terms and conditions that allow them to use prompts and uploaded data to train the AI and that leave users wholly liable for any adverse consequences. Staff can be tempted to submit confidential, commercially sensitive information and/or personal data to AI models that may be operated overseas, which can unwittingly result in legal liability for their employers.
Ensuring that your organisation has considered whether and how staff are to be allowed to use AI, that staff receive appropriate training, and that controls, governance and oversight mechanisms are in place will empower your organisation to capitalise on the opportunities offered by AI while doing so safely, responsibly and ethically.
It is sensible to take a risk-based approach to the use of AI models and identify preferred suppliers where staff are permitted to use AI. For low-risk uses of AI, such as those which don't involve any personal data, IP, confidential information or decision-making, an approach limited to staff training and appropriate controls may be sufficient. For higher-risk uses of AI, a more in-depth approach to compliance and governance will be required.
Being transparent about your use of AI and your approach to compliance can help build trust with those affected by it, such as your employees, customers or third parties.
These are steps you can take to ensure that your organisation's approach to AI is safe, responsible, ethical and legally compliant:
Brief the board or other organisational leadership on AI opportunities and risks
Secure a strategic decision as to the organisation’s AI risk appetite
Prepare and publish a policy on the use of AI, detailing what AI tools can be used, for what use cases, upon what conditions and subject to what safeguards
Provide training and wider awareness-raising to staff on AI and on the specific AI tools they can access
Block access via your network to unauthorised AI tools
Identify each intended AI use case and conduct an AI risk assessment in relation to each
Only authorise the use of AI tools appropriate to the risk of each relevant use case
Design and procure AI models with built-in safeguards
Establish a gateway process for AI procurement/deployments
Implement safeguards appropriate to the nature and scale of risk
Consult with affected individuals in advance where possible, or with appropriate representatives or stewards acting on their behalf
Establish a beta phase of testing AI tools, running them alongside traditional approaches to identify benefits, as well as any divergence which may reveal potential deficiencies
Be transparent about the use of AI, particularly where individuals are affected
Test and monitor the operation of the AI model to confirm its accuracy, reliability and propriety
Establish a reporting mechanism to enable users to report unexpected or inappropriate outcomes and act upon reports
Monitor the way AI is being used in practice and whether it is impacting user behaviour in unexpected ways
Ensure AI is not the sole mechanism for decision-making impacting individuals
Establish a governance and oversight mechanism
Consider the impact of the use of AI by your supply chain, both for your staff and for your organisation’s risk profile
Remember to consider and meet wider legal and regulatory compliance obligations specific to your industry and/or use of AI
Adopt an iterative approach, reviewing and revising your safeguards and governance to reflect emerging risks and regulation
Establish a process for individuals to raise concerns and seek redress