AI Ethical Principles

To act ethically, organisations require a guiding framework that explains what good ethical behaviour looks like. As AI has developed, and the potential risks of AI have become clearer, many organisations have written AI ethical principles for this purpose. These include:

Fairness: AI systems must not be biased or discriminate against people or groups in ways that lead to adverse decisions or inferences.
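One way to make the fairness principle concrete is a simple bias audit that compares positive decision rates across groups. The sketch below checks demographic parity; the 0.8 threshold is an assumption, loosely inspired by the "four-fifths rule", and the decision data is entirely hypothetical. Real fairness audits use richer metrics and legal and domain context.

```python
# Minimal demographic-parity check: compare approval rates between
# two groups and flag a large gap. Illustrative only.

def approval_rate(decisions):
    """Fraction of positive (approved) decisions."""
    return sum(decisions) / len(decisions)

def parity_ratio(group_a, group_b):
    """Ratio of the lower group's approval rate to the higher one's (0..1)."""
    rate_a, rate_b = approval_rate(group_a), approval_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical loan decisions (1 = approved, 0 = declined) for two groups.
group_a = [1, 1, 1, 0, 1, 1, 0, 1]  # 6/8 approved
group_b = [1, 0, 0, 1, 0, 0, 1, 0]  # 3/8 approved

ratio = parity_ratio(group_a, group_b)
flagged = ratio < 0.8  # assumed audit threshold
print(f"Parity ratio: {ratio:.2f}, flagged: {flagged}")
```

A ratio near 1.0 means similar outcomes across groups; a low ratio is a prompt for investigation, not proof of discrimination on its own.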

Human agency & oversight: It’s important to determine an appropriate level of human oversight and control of an AI system. Organisations must respect human autonomy, particularly as AI increasingly directs decision-making and people may become reliant on a system.

Privacy & Security: AI systems should respect and uphold an individual’s right to privacy and ensure that personal data is protected and secure. Organisations using AI should pay special attention to any additional privacy and security risks arising from AI systems.

Safety & Robustness: AI systems should be safe, robust, and operate reliably in accordance with their intended purpose.

Transparency & Explainability: It’s important to be transparent about when an AI system is being used, what data it relies on, and for what purpose. Explainability is the principle of communicating the reasoning behind a decision in a way that is understandable to a range of people.
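For a simple model, explainability can be as direct as reporting each input's contribution to the decision. The sketch below does this for a linear scoring model; the features and weights are illustrative assumptions, not a real scoring system, and more complex models need dedicated explanation techniques.

```python
# Minimal explainability sketch for a linear scoring model: report each
# feature's contribution (weight * value) so the decision can be
# communicated in plain terms. Features and weights are hypothetical.

weights = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}

def explain(applicant):
    """Return the overall score and per-feature contributions to it."""
    contributions = {f: w * applicant[f] for f, w in weights.items()}
    score = sum(contributions.values())
    return score, contributions

applicant = {"income": 4.0, "debt": 3.0, "years_employed": 2.0}
score, contributions = explain(applicant)
for feature, value in contributions.items():
    direction = "raised" if value >= 0 else "lowered"
    print(f"{feature} {direction} the score by {abs(value):.1f}")
```

Because every contribution is visible, an affected person can be told which factors helped or hurt their outcome, which is the essence of the explainability principle.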

Accountability: Organisations should have a clear governance structure that identifies who is responsible for reporting and decision-making, and who is therefore ultimately accountable.

Environmental Wellbeing: Organisations should be mindful of the environmental impact of AI systems throughout their lifecycle and value chain.