Data quality is fundamental to AI
AI systems are only as good as their data; high-quality, complete, accurate and well-managed data is therefore a fundamental requirement of ethical AI.
Governance: Distinguish clear AI roles
Good governance of AI systems within an organisation rests on a clear allocation of roles and responsibilities, together with defined channels for escalating concerns. This ensures accountability and makes it straightforward for people to raise issues.
Adopting a lifecycle approach
Organisations using AI systems should adopt a lifecycle approach to managing risk, with mechanisms to identify and mitigate the risks posed by AI at each stage of the lifecycle.
Inclusive design and diversity
Incorporating people with diverse perspectives and lived experiences – including those of traditionally underrepresented groups and backgrounds – can help to anticipate the needs and concerns of users who may be impacted by AI systems.
Anticipating future contexts of the application
Organisations using and developing AI systems need to consider not only the present-day application of a given AI system but how its use could evolve in the future and how this may impact people.
Policy, controls and compliance
Organisations need clear mechanisms and controls to comply with data and AI ethics guidelines and regulations. It is vital that they track evolving national and global requirements to understand what is expected of them.
Training and raising awareness
It’s important for organisations to establish awareness and training requirements for AI, identifying skill gaps and enabling a consistent approach to upskilling. AI literacy should be built throughout the organisation: helping the entire workforce understand what AI is and how it affects their jobs, the company and wider society builds confidence in AI and drives adoption and usage.
For further recommendations, please download the AI Ethics Playbook: