Embedding AI ethical principles in everyday activity represents a change from business as usual for most organisations. Some of the key considerations are:
What tools and strategies are essential for establishing and addressing potential risks when initiating a new AI project?
A self-assessment questionnaire (SAQ) is a tool designed to help you bridge the gap between ethical principles and business practice and to assess an AI system against them. The GSMA, together with a group of mobile operators, has developed this tool to help establish and address the risks that an AI system may pose.
The questions in the SAQ are structured around the ethical principles outlined in section 5.2. It asks probing questions to help you explore whether an AI system is compatible with those principles, and suggests further actions based on your answers. You are encouraged to complete the tool at the beginning of an AI project and to review it at any stage, as many times as needed.
The three objectives of conducting this SAQ are:
- Evaluate the overall risk level for a specific use case, classifying it as high, minimal or limited.
- Answer the relevant ethical questions. The SAQ takes you through each of the ethical principles, with the specific questions differentiated by risk level: the lower the risk, the fewer questions you will be asked to complete. In addition, the SAQ suggests further actions when gaps are identified.
- Record information to help you track status, report progress, plan future work and support potential audits.
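The SAQ itself is a questionnaire document, but the three objectives above can be sketched as a small programme: classify the risk level, select a question set sized to that level, and record the answers for tracking and auditing. The risk levels follow the text; the question identifiers, question counts and "gap" rule below are illustrative assumptions, not the actual SAQ content.

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative question banks; the real SAQ's questions follow the
# ethical principles outlined in section 5.2.
QUESTIONS_BY_RISK = {
    "high":    ["Q1", "Q2", "Q3", "Q4", "Q5"],  # full question set
    "limited": ["Q1", "Q2", "Q3"],              # reduced set
    "minimal": ["Q1"],                          # fewest questions
}

@dataclass
class SAQRecord:
    """Keeps answers and review history for tracking, reporting and auditing."""
    use_case: str
    risk_level: str  # "high", "limited" or "minimal"
    answers: dict = field(default_factory=dict)
    reviewed_on: list = field(default_factory=list)

    def questions(self) -> list:
        # The lower the risk, the fewer questions to complete.
        return QUESTIONS_BY_RISK[self.risk_level]

    def record_answer(self, question: str, answer: str) -> None:
        self.answers[question] = answer
        self.reviewed_on.append(date.today().isoformat())

    def gaps(self) -> list:
        # Suggest follow-up wherever an answer flags a gap (here: any "no").
        return [q for q, a in self.answers.items() if a == "no"]

# Example run for a hypothetical use case classified as limited risk.
saq = SAQRecord(use_case="customer churn model", risk_level="limited")
for q in saq.questions():
    saq.record_answer(q, "yes" if q != "Q2" else "no")
```

Because the record keeps every answer and review date, the same object supports the third objective: status tracking, reporting and future auditing across repeated reviews of the project.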
You can find further examples of the technical tools in Chapter 3 of the AI Ethics Playbook.
What processes are essential for integrating responsible AI in business as usual?
Establishing responsible AI as standard business practice demands a multifaceted strategy: cultivating an environment where ethical AI is not just an aspiration but a regular part of daily work. This involves a blend of education, technological innovation, diligent monitoring and external validation. Each element plays a crucial role in moving from traditional methods to an ethically aware, AI-integrated business model, ensuring that the principles of responsible AI are embedded deeply in the organisation's core practices.
Educational initiatives: Recognising the importance of awareness and skill development in ethical AI practices, organisations are increasingly implementing training programmes. These initiatives aim to raise understanding and competency in AI ethics across all organisational levels.
Technological innovation: Technical advances in AI, particularly in addressing bias, enhancing explainability and promoting energy-efficient AI, are at the forefront of innovation. These efforts not only improve AI solutions but also align them with broader societal and environmental concerns.
Monitoring tools: The development of dashboard initiatives to monitor AI systems is an integral part of ensuring compliance with ethical and governance standards. These tools provide real-time insights into AI operations, facilitating proactive management of potential risks and ethical dilemmas.
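Behind such a dashboard usually sits a simple compliance check: compare live metrics against agreed thresholds and raise alerts proactively. The sketch below illustrates that idea; the metric names and threshold values are assumptions for illustration, not any real standard or GSMA requirement.

```python
# Illustrative thresholds an organisation might agree for its dashboard.
# Both metric names and limits are hypothetical examples.
THRESHOLDS = {
    "demographic_parity_gap": 0.10,      # max acceptable gap between groups
    "energy_kwh_per_1k_requests": 5.0,   # energy-efficiency budget
}

def check_metrics(metrics: dict) -> list:
    """Return an alert string for every metric breaching its threshold."""
    alerts = []
    for name, limit in THRESHOLDS.items():
        value = metrics.get(name)
        if value is not None and value > limit:
            alerts.append(f"{name}={value} exceeds limit {limit}")
    return alerts

# Example: one fairness metric breaches its limit, energy use does not.
alerts = check_metrics({"demographic_parity_gap": 0.14,
                        "energy_kwh_per_1k_requests": 3.2})
```

Running such a check on each reporting cycle is what turns a dashboard from a passive display into the proactive risk-management tool described above.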
Accreditation: Pursuing a responsible AI accreditation roadmap motivates organisations to meet higher ethical standards, as accreditation typically requires specific criteria to be met. This not only ensures adherence to best practices but also offers a competitive edge, signalling a commitment to responsible AI to clients, investors and employees alike and enhancing the organisation's market reputation.
In chapter 3 of the AI Ethics Playbook you will find resources designed to help educate and engage organisations about different methods of approaching ethical conversations.
For those seeking deeper insights or with additional questions about approaches to conversations in responsible AI, we warmly invite you to reach out to us. Our team is available to provide further information, answer questions and engage in more detailed discussions to assist your organisation in navigating the complexities of responsible AI.
“Digital leadership is a cornerstone of our T25 strategy and we’re committed to the ethical use of Artificial Intelligence in our operations and in our customer interactions. Our collaboration with GSMA and other top global mobile operators allows us to work together to protect customers and employees, remove any entrenched inequality and ensure that one of our most advanced technologies operates reliably and fairly for all our stakeholders.”
Nikos Katinakis, Telstra Group, Executive Networks & IT