AI ethical principles

To act ethically, organisations require a guiding framework that explains what good ethical behaviour looks like. As AI has developed and its potential risks have become clearer, many organisations have written AI ethical principles for this purpose. The GSMA has established the following seven principles, which aim to bring together most of the AI ethical principles adopted by organisations.

Fairness: AI systems must not be biased or discriminate against people or groups in a way that leads to adverse decisions or inferences.

Orange has been working to ensure its AI systems are free from bias wherever possible. The company recently audited its supply chain, signed the International Charter for Inclusive AI and was the first company to receive a GEEIS-AI label.
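Bias audits of this kind typically rest on quantitative disparity measures. The sketch below is a generic illustration, not Orange's methodology: it computes the demographic parity difference (the gap in positive-decision rates between groups) over hypothetical audit data, and every name in it (`demographic_parity_difference`, `decisions`, `groups`) is invented for the example.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Gap in positive-outcome rates between groups.

    y_pred : array of 0/1 model decisions (e.g. application approved)
    group  : array of group labels, one per decision
    A value near 0 suggests similar treatment across groups; larger
    values flag a disparity worth investigating in a bias audit.
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rates = {g: y_pred[group == g].mean() for g in np.unique(group)}
    return max(rates.values()) - min(rates.values())

# Hypothetical audit data: decisions for two demographic groups.
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
print(demographic_parity_difference(decisions, groups))  # 0.6 - 0.4 = 0.2
```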

Human agency & oversight: It’s important to determine an appropriate level of human oversight and control of an AI system. Organisations must respect human autonomy, particularly as AI increasingly directs decision-making and people may become reliant on a system.

stc developed an AI solution for field surveyors. Using an app embedded with a pipeline of computer vision models, a surveyor can take a photo of objects such as buildings and power meters and automatically populate a description of the photo. The app design follows a ‘human-in-the-loop’ (HITL) approach: the surveyor validates the AI output by accepting the generated description if it matches what they see, or rejecting or editing it if it doesn’t.
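stc's app itself is not shown here; the minimal sketch below only illustrates the human-in-the-loop pattern the case study describes, using hypothetical names (`SurveyRecord`, `review`). The key property is that the AI-generated description never becomes final without an explicit accept, edit or reject action from the surveyor.

```python
from dataclasses import dataclass

@dataclass
class SurveyRecord:
    photo_path: str
    ai_description: str          # produced by the vision pipeline
    final_description: str = ""  # what the surveyor signs off on
    status: str = "pending"      # pending / accepted / edited / rejected

def review(record: SurveyRecord, decision: str, edited_text: str = "") -> SurveyRecord:
    """Apply the surveyor's decision to the AI-generated description."""
    if decision == "accept":
        record.final_description = record.ai_description
        record.status = "accepted"
    elif decision == "edit":
        record.final_description = edited_text
        record.status = "edited"
    else:  # reject: send back for a retake or manual entry
        record.final_description = ""
        record.status = "rejected"
    return record

# Hypothetical usage: the model suggests a label, the surveyor corrects it.
rec = SurveyRecord("site_042.jpg", "three-phase power meter, pole mounted")
rec = review(rec, "edit", "single-phase power meter, wall mounted")
print(rec.status, "->", rec.final_description)
```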

Privacy & Security: AI systems should respect and uphold an individual’s right to privacy and ensure that personal data is protected and secure. Organisations using AI should pay special attention to any additional privacy and security risks arising from AI systems.

Orange is participating in the EU PAPAYA project, which aims to address privacy concerns when data analytics tasks are performed by untrusted third-party data processors. By designing cryptographic modules adapted to the use case, PAPAYA develops dedicated privacy-preserving data analytics solutions that enable data owners to extract valuable information from encrypted data while working with third parties.
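PAPAYA's cryptographic modules are not detailed here, so the sketch below only illustrates the general idea of analytics over encrypted data, using Paillier homomorphic encryption via the python-paillier library as an assumed stand-in rather than the project's actual implementation: a third party can aggregate ciphertexts without ever seeing the underlying values, and only the data owner can decrypt the result.

```python
from phe import paillier  # python-paillier: pip install phe

# The data owner generates a keypair and encrypts its values.
public_key, private_key = paillier.generate_paillier_keypair()
usage_minutes = [120, 340, 95, 410]          # hypothetical per-customer data
encrypted = [public_key.encrypt(v) for v in usage_minutes]

# An untrusted third party aggregates the ciphertexts without seeing the data.
encrypted_total = encrypted[0]
for ciphertext in encrypted[1:]:
    encrypted_total = encrypted_total + ciphertext

# Only the data owner, holding the private key, can read the result.
print(private_key.decrypt(encrypted_total))  # 965
```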

Safety & Robustness: AI systems should be safe and robust, and should operate reliably in accordance with their intended purpose.

Transparency & Explainability: It’s important to be transparent about when an AI system is being used, what data it relies on and for what purpose. Explainability is the principle of communicating the reasoning behind a decision in a way that is understandable to a range of people.

Telefónica is committed to telling its customers what type of data is used for AI systems, how it is used and when they are interacting with an AI system. Telefónica ensures decisions made by AI systems are understandable: first, by ensuring developers understand the logic behind the conclusions the AI draws internally; and second, by taking measures to ensure the appropriate stakeholders have the necessary level of understanding. The same approach is applied to the third-party technologies Telefónica uses.
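One common way to surface the logic behind a model's conclusions, as the Telefónica example calls for, is a feature-importance analysis. The sketch below is a generic illustration rather than Telefónica's tooling: it uses scikit-learn's permutation importance on synthetic data, and the feature names are purely illustrative.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in data: which features drive a churn-style prediction?
X, y = make_classification(n_samples=500, n_features=5, n_informative=3,
                           random_state=0)
feature_names = ["tenure", "monthly_spend", "support_calls",
                 "data_usage", "contract_length"]  # illustrative labels only

model = RandomForestClassifier(random_state=0).fit(X, y)

# Permutation importance: how much does accuracy drop when a feature is shuffled?
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda item: item[1], reverse=True):
    print(f"{name:15s} {score:.3f}")
```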

Accountability: Organisations should have a clear governance structure that sets out who is responsible for reporting and decision-making, and who is therefore ultimately accountable.

Telstra suggests accountability requires identifying who is accountable at different levels of the organisation for: the actions of an AI system; implementing the system components correctly; and setting and balancing the system’s objectives. This applies both to AI developed in-house and to AI purchased from third parties. When Telstra purchases third-party systems, it remains responsible for their performance and takes steps to ensure these purchased AI technologies work in line with its ethical principles.

Environmental Wellbeing: The environmental impact of AI systems should be considered throughout their lifecycle and value chain.

‘Human, social and environmental wellbeing’ is Principle 1 of Telstra’s AI ethical principles. When analysing the potential impacts of an AI system, Telstra looks beyond immediate effects to broader long-term or indirect outcomes in order to understand the system’s social and environmental impact.