Governance model for ethical AI

AI governance involves a comprehensive set of policies, practices, and structures designed to ensure responsible AI by design. This concept extends beyond legal compliance, aiming to align AI initiatives with ethical principles, societal values, and organisational strategies. The governance framework balances technical efficacy, data integrity, user privacy, and societal impacts, emphasizing the need for AI systems to be transparent, equitable, and accountable. The overarching goal is to integrate AI technology in a way that supports strategic objectives while maintaining a strong commitment to ethical standards and stakeholders’ well-being.

Integration and operationalisation

Stakeholder inclusivity: A diverse range of stakeholders is crucial for comprehensive AI governance. This may include C-Suite executives for strategic oversight, legal, policy and regulatory teams for compliance, research and innovation, product teams and data engineers for technical insights, and “environmental, social, governance” (ESG) professionals for societal impact considerations. Such inclusivity ensures a holistic approach to AI governance, reflective of varied organizational perspectives.

Frameworks and accountability: AI governance should not operate in isolation but rather be an extension of existing organisational processes. Many companies already possess robust data governance systems, which can be adapted to encompass AI-specific ethical considerations. Alongside this adaptation, it is crucial to define clear responsibilities and establish precise procedures within the existing operational framework. This sometimes involves assigning new responsibilities to existing roles, such as RAI Champions, who are specifically tasked with overseeing AI-related concerns and ensuring adherence to ethical standards. This approach ensures consistency and leverages established best practices.

Agile and adaptive governance: The dynamic nature of AI technology necessitates a governance model that is both agile and adaptive. As AI systems evolve, so too should the governance frameworks, accommodating new ethical challenges and technological advancements. This continuous evolution underscores the need for governance structures that are not only robust but also flexible and responsive to change.

Cultural embedment: Through educational initiatives and awareness campaigns, the aim is to enhance AI ethics understanding across the organization. Top management's involvement in AI governance emphasises its critical importance, ensuring the integration and prioritization of responsible AI practices company-wide.

Multilevel governance model

An effective AI governance model is defined by a three-level approach, with each level playing a distinct yet interconnected role. Together, the levels define the process for escalating higher-risk or more complex use cases.

First Level - operational implementation: The governance process starts at the operational level within business units or product management teams. These units undertake the initial kick-off of AI projects and conduct primary risk assessments. Responsible AI ‘champions' within these units are vital, tasked with supporting product managers, embedding governance practices, ensuring AI projects comply with ethical standards, and promoting a responsible AI culture.

Second Level - ethical decision-making and escalation: At this level, the AI ethics committee advises and arbitrates, primarily when support is needed for more nuanced evaluation and resolution of identified risks or ethical concerns escalated from the operational level. Operating as a pivotal platform for ethical oversight and decision-making, this committee evaluates ethical risks, offers resolutions to ethical dilemmas, and ensures that AI applications align with both organizational values and societal norms. This escalation process is integral to the governance model, ensuring thorough consideration of complex ethical issues and more efficient decision-making with fewer bottlenecks.

Third Level - executive oversight and final decision-making: In cases where an AI system poses significant risks to people, society, or the environment, and consequently the company's reputation, the issue is escalated to an executive board. This board, comprising senior executives, holds ultimate responsibility for the final decision. Escalation to this level is reserved for instances that pose substantial risks, ensuring that decisions made at this stage are treated with the utmost seriousness and consideration.
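As a purely illustrative sketch, the three-level escalation flow described above might be modelled as a simple routing rule. The risk tiers, role names, and mapping below are assumptions for illustration only, not part of any prescribed framework; a real organisation would define its own risk taxonomy and thresholds.

```python
from enum import Enum


class RiskLevel(Enum):
    """Hypothetical risk tiers for an AI use case (illustrative only)."""
    LOW = 1        # routine use case, handled operationally
    ELEVATED = 2   # nuanced ethical concerns, needs committee review
    SEVERE = 3     # significant risk to people, society, or environment


# Maps each risk tier to the governance level that owns the decision,
# mirroring the three-level model described above.
ESCALATION_PATH = {
    RiskLevel.LOW: "Level 1: business unit / RAI champion",
    RiskLevel.ELEVATED: "Level 2: AI ethics committee",
    RiskLevel.SEVERE: "Level 3: executive board",
}


def escalate(risk: RiskLevel) -> str:
    """Route an AI use case to the appropriate governance level."""
    return ESCALATION_PATH[risk]
```

In practice, the risk assessment itself (the input to such a rule) is the hard part; the value of making the routing explicit is that every use case has a clearly accountable decision-maker.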

In some instances, AI governance might be further bolstered by critical supporting functions:

  • Assurance and compliance teams play a vital role in some organisations. They are responsible for establishing governance guidelines, conducting regular monitoring, and ensuring adherence to ethical standards, thus keeping AI applications within the bounds of both organisational and regulatory norms.
  • Equally important is the role of independent oversight, which includes internal and external audit systems and acts as the third and last line of defence. These audits provide an objective assessment of AI practices, ensuring the integrity and effectiveness of the governance framework.

Together, these supporting functions form a comprehensive network that upholds the standards of responsible AI governance.

Charting the future of responsible AI governance

The journey towards responsible AI governance is an ongoing endeavour that demands a robust, dynamic, and adaptable model. This model, characterized by multi-level stakeholder engagement, operational integration, ethical decision-making, and cultural embedment, offers a comprehensive framework for responsible AI by design. As AI technology continues to advance, governance models must evolve concurrently, ensuring the responsible and beneficial use of AI while mitigating and managing potential risks effectively.

Further detail on the governance model designed for the escalation of ethical issues, together with examples from mobile operators, can be found in the AI Ethics Playbook, Chapter 2.

“At Orange, we are convinced that AI ethics is not negotiable; it is the foundation of our AI strategy. We are now organizing ourselves with the support of our group Data and AI Ethics council and per country local AI ethics referent to adapt methodologies and tools.” Steve Jarrett, Senior Vice President, Data and AI, Orange Innovation