Frequently Asked Questions

1. About AI

What is the relationship between mobile connectivity and artificial intelligence (AI)?

The vast expansion in connectivity – with the rollout of 5G and the IoT – is enabling organisations and individuals to collect far more data in real time. This mobile big data (MBD) can be analysed using conventional techniques and/or employed to enable machine learning to support both general-purpose and targeted AI solutions. In effect, these technologies can create a powerful, virtuous circle that can generate immense socio-economic benefits.

How big an impact will AI have on the global economy?

The European Parliament has noted that: “AI can increase the efficiency with which things are done and vastly improve the decision-making process by analysing large amounts of data. It can also spawn the creation of new products and services, markets and industries, thereby boosting consumer demand and generating new revenue streams.”

Consultancy PwC has estimated[1] that “AI could add 14% to global GDP – the equivalent of up to US$15.7 trillion by 2030.” It says the economic impact of AI will be driven by productivity gains, as businesses automate processes and augment their existing labour force with AI technologies, and by increased consumer demand resulting from the availability of more personalised, higher-quality products and services. McKinsey, another consultancy, has forecast that AI could add between US$17.1 trillion and US$25.6 trillion of economic value[2] to the global economy.

How is AI being used within the mobile industry?

In telecoms, as with other industries, AI and big data can be used to optimise the core business and the usage of energy and other key resources, while providing better and more personalised experiences for customers[3]. Mobile operators are using AI in many different ways, including support for network planning and upgrades, to optimise network capacity and quality, enable predictive maintenance and bolster network security.

How can the mobile industry support the use of AI more broadly?

Mobile operators and their partners can use AI and big data to enable better city planning, curb and mitigate climate change, improve disaster response and humanitarian aid, increase social and financial inclusion, and support sustainable growth, thereby helping to achieve the UN’s Sustainable Development Goals (SDGs)[4].

Read more about how mobile operators and their partners use AI and MBD analytics in the Use Case section below.

[1] Source:

[2] Source:

[3] Source:

[4] Source: the-next-productivity-frontier#business-value

2. The Ecosystem

What is the role of the GSMA’s AI for Impact initiative?

The GSMA founded the AI for Impact initiative in 2017 to drive collaboration with partners across the public and private sectors to commercially scale AI, while protecting privacy and implementing ethics by design. The initiative works with AI leads from 27 mobile operators accounting for more than two billion connections in 100+ countries, and with a network of global thought leaders from UN agencies, governments, international organisations, academia and the private sector. The GSMA, which facilitates collaboration and knowledge sharing to build a sound policy and regulatory environment, has been invited to support various regional and global AI initiatives and coalitions (OECD, UNESCO, AI Verify Foundation).

Who do I involve when setting up an AI for Impact project?

To successfully implement an AI for Impact project, it’s crucial to identify the relevant stakeholders at the outset, both on the demand-side and the supply-side, and involve them throughout. Each use case will likely have specific internal and external stakeholders and will need to be implemented by a cross-functional team, rather than in isolation by a single department. It will be necessary to draw on the expertise of network engineers, architects and technicians, as well as privacy and security experts.

You can find baseline descriptions of the main stakeholders and their functions in the ‘Ecosystem’ section of the toolkit.

Are mobile operators open to public-private partnerships?

Mobile operators are looking to build economically sustainable public-private partnerships with national and local governments that can tap the potential of AI and other forms of data analytics, while raising awareness and understanding of these technologies.

Mobile operators are also working with AI researchers, academia and start-ups to drive innovation while encouraging policymakers to facilitate this kind of collaboration by making their country an attractive place for AI talent.

What do you mean by sustainable? What is an economically viable business model?

‘Sustainable’ means that an initiative is economically viable; each party involved in delivering a solution has a means of funding the resources needed to support the work, and an incentive to invest in the long-term future of the solution.

3. Technical Considerations

What are the key lifecycle stages of an AI system?

Typically, the lifecycle of an AI system has three key phases: design (including product conception and data selection); development (engineering and validation); and deployment (ongoing use and monitoring). Organisations using AI systems should have mechanisms to identify and to mitigate the risks posed by AI at each of these stages, supplemented by escalation channels through which stakeholders can raise any concerns.

As with AI definitions, there is no universal agreement on the key stages of an AI system. Different examples can be found within the Singaporean Model AI Governance Framework, the US NIST AI Risk Management Framework and the OECD’s AI in Society, for example.

How do you recommend approaching a mobile big data/AI project?

The GSMA has developed a six-phase engagement process as a guide for anyone wishing to set up an AI for Impact project. This process is designed to drive collaboration amongst the local stakeholders and to successfully deliver scalable, sustainable and replicable MBD analytics products and services. For detailed information about this engagement process and practical experience gained from applying it during the Covid-19 pandemic, read the report Utilising mobile big data and AI to benefit society.

Another useful resource is the baseline description for an AI4I implementation, which can be found here. This provides a checklist of activities to include within an implementation project.

How important is data to the quality of AI systems?

AI systems are only as good as their data. A fundamental requirement for responsible and effective AI is therefore high-quality, complete, accurate and well-managed data. Organisations should ensure they know where data comes from, and that they can explain this provenance if needed, as AI systems based on incomplete or biased data can produce inaccurate outcomes that can lead to discrimination and other infringements of people’s fundamental rights[1].

There are several different mobile operators in my country. Do I need all of their data in order to make the mobile data representative?

In most countries, mobile penetration is very high and many operators serve a significant proportion of the population. This market share generally translates into a larger and richer sample than you would obtain from using traditional means of data gathering. When developing your solution, it is important to work with the mobile operator to understand their customer base and ensure that the most accurate insights are generated.

How do I know that the data is not biased?

Bias may be present to some degree in a mobile data set, for several reasons, including the fact that mobile network coverage and customers may not be uniformly distributed across a country. Bias may also be present in non-mobile data sets, for a variety of reasons, such as the sampling method, or the difficulty in capturing data for certain demographics and geographies. It is important in the design and planning stages to quantify the potential biases, understand their impacts, and correct for them, where possible. The best way to understand any biases in mobile data is to ensure that you work closely with the relevant mobile operator and benefit from its knowledge of its customer base and how this will influence the subsequent insights derived from the mobile data.
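One common correction technique alluded to above is post-stratification: reweighting the mobile sample so that each group's share matches known population figures (for example, from a census). The sketch below illustrates the idea in Python; the group names, counts and population shares are invented for illustration, and real projects would stratify on the operator's actual coverage and subscriber profile.

```python
# Post-stratification sketch: compute a weight per group so that the
# weighted mobile sample matches known population shares.
# All figures below are invented for illustration.

def poststratify(sample_counts, population_shares):
    """Return a weight per group: population share divided by sample share."""
    total = sum(sample_counts.values())
    return {
        group: population_shares[group] / (count / total)
        for group, count in sample_counts.items()
    }

# Suppose the mobile sample over-represents urban subscribers:
sample_counts = {"urban": 700, "rural": 300}        # observed in mobile data
population_shares = {"urban": 0.55, "rural": 0.45}  # known from a census

weights = poststratify(sample_counts, population_shares)
# Urban records are down-weighted (weight < 1) and rural records
# up-weighted (weight > 1), so weighted totals match the population.
```

Applying each group's weight to its records before computing an indicator then yields estimates that reflect the population rather than the subscriber base.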

Do mobile phone companies have the expertise to understand how their data can be used for development purposes?

A key element of building a sustainable AI and big data solution is ensuring that all parties in the ecosystem are involved at the right time, in the right capacity. Mobile operators can process and configure mobile data using their expertise in network operations and their knowledge of their network and customer base. These crucial steps enable big data to be translated into actionable insights that will enable you to make well-informed decisions. Operators may also have knowledge of previous, successful applications of mobile data in other cases. Working collaboratively makes effective use of the available expertise on both the supply-side and the demand-side to find the development applications with the most potential. Collaborative working was a key success factor in the examples in the Covid-19 Case Studies.

Can mobile big data (MBD) replace other forms of data?

The value of MBD analytics is in its ability to give us a rich and dynamic understanding of the world – mobile networks are continually capturing and generating data that can be used to track trends over time. Mobile big data complements existing data sources and can bring to light otherwise invisible phenomena, enabling you to make more robust, evidence-based decisions.

Do you need a data specialist to undertake a big data project with a mobile operator?

You don’t necessarily require a data scientist with implementation expertise, as the process can be managed by the mobile operator. However, a foundational understanding of how data works will enable you to make the most of a big data implementation. It is, however, critical to have statistical knowledge of your domain and of any relevant data your own organisation holds. It is also important to work with the operator to make sure any specific considerations are taken into account.

[1] Source:

4. Policy and Regulation

How should policymakers and regulators define AI?

Any legal or regulatory definition of AI needs to be clear and technically accurate. As the technology and the environment evolve, this definition can be revisited and adapted. At the same time, an overly broad or open-ended definition risks extending the scope of any AI-specific regulation beyond what is needed, creating a disproportionate burden for developers of the technology.

Globally, there are multiple AI definitions, including those from the OECD, the EU AI Act and the Singaporean Model AI Governance Framework. However, there is some convergence in these regional approaches, which are increasingly using the OECD definition as their foundation. As AI evolves, these definitions are also likely to evolve.

How can policymakers and regulators optimise the value of AI and mobile big data (MBD) analytics?

As AI and MBD analytics become key enablers of sustainable economic growth and the delivery of vital services, policymakers and regulators need to invest in capacity building and encourage further innovation and investment in both the public and private sectors. They can do this by establishing clear principles and safeguards, and supporting the development of interoperable standards, while deploying anonymised big data analytics and AI to enhance key public services, such as healthcare, transport and the provision of energy.

Realising the potential of AI and MBD analytics to achieve public policy goals depends on the removal of any regulatory roadblocks and reducing regulatory uncertainty and friction. In particular, regulators should ensure that regulations and licence obligations allow for aggregated and anonymised insights to be leveraged by mobile operators to develop products and services, while protecting and respecting privacy.

What do evolving AI policies and regulations mean for implementation projects?

As AI policies are evolving and regulations are being drafted, it is vital that organisations monitor the regulatory space in which they operate. Concrete guidance is not possible while regulations are still in flux. However, it is worth considering the role of risk in determining the level of compliance with regulations. In general, higher risk uses of AI can expect to be more heavily regulated.

Globally, there are different approaches to AI policy and regulations. While some regions are proposing recommendations and a principles-based approach, other regions are considering a risk-based approach.

Does the mobile industry support AI-specific regulation?

The mobile industry supports a recommendations-based approach. Any AI-specific regulation should be underpinned by a clear and technically solid definition of AI. An overly broad or open-ended definition risks creating a disproportionate burden for developers of technology that is not actually AI. Importantly, regulatory requirements should be tiered based on the risks associated with the AI use case, rather than the sector. While telco infrastructure is clearly critical and is already regulated as such, telecoms-specific AI use cases are not high risk.

In cases where new AI-specific policies and regulations are under development, providers and users of AI need legal certainty and predictability on the techniques and definitions in scope. The division of responsibility should be balanced and maintained, while obligations should be clear, proportionate and consistent. Such regulation should also account for existing sectorial laws that also apply to AI, to avoid a conflict of requirements. Ideally, regulations and definitions will be harmonised across international borders to enable businesses developing AI and their customers to benefit from economies of scale.

Can mobile operators with multinational footprints apply AI across borders?

As digital applications, AI systems are deployed and harnessed across international borders. Therefore, governments should seek to standardise regulation internationally through a multi-stakeholder approach, on the basis of shared principles.

Does the mobile industry support regulatory sandboxes?

Yes. Time-limited regulatory sandboxes can enable innovators to trial new products, services and business models in a real-world environment without being constrained by the normal regulatory rules. The GSMA believes that industry would benefit from national AI strategies that allow for the establishment of regulatory sandboxes.

In a broader sense, data privacy sandboxes exist in Singapore, through the Personal Data Protection Commission’s Data Collaborative Programme, for example. The Global Financial Innovation Network (in which Hong Kong and Singapore financial authorities participate) has announced the launch of a (cross-border) innovation sandbox for the financial services industry. The ASEAN-GSMA Regulatory Pilot Space for Cross-Border Data Flows can help inform a model for cross-border regulatory sandboxes for data. These sandboxes, which can be applied horizontally, are not specific to the telecoms sector.

What steps are mobile operators taking to ensure responsible data practices?

The GSMA and mobile industry are committed to advancing responsible data practices. In addition to the GSMA’s Mobile Privacy Principles, the GSMA has developed a Mobile Privacy and Big Data Analytics paper.

Mobile operators are well placed to understand the potential risks to individuals and groups from big data analytics and can implement measures to avoid or mitigate those risks. While aggregated and anonymised data can be safely shared with governments without compromising individuals’ privacy, mobile network operators can add value by leveraging their expertise to provide final insights. When this data is enriched with third-party data sources, it can help public agencies make evidence-based decisions.
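A common safeguard when sharing the aggregated statistics described above is small-cell suppression: any group containing fewer than some minimum number of individuals is withheld, so that no published figure can single anyone out. The sketch below illustrates the idea; the area names, counts and the threshold are invented for illustration, and real deployments typically combine this with other anonymisation measures.

```python
# Small-cell suppression sketch: aggregate per-area counts and withhold
# any cell below a minimum threshold before sharing the results.
# Area names, record counts and the threshold are invented for illustration.

K_MIN = 15  # minimum count below which a cell is suppressed

def aggregate_with_suppression(records, k=K_MIN):
    """Count records per area, then drop areas with fewer than k individuals."""
    counts = {}
    for area in records:
        counts[area] = counts.get(area, 0) + 1
    return {area: n for area, n in counts.items() if n >= k}

records = ["area_a"] * 120 + ["area_b"] * 40 + ["area_c"] * 7
shared = aggregate_with_suppression(records)
# area_c (7 individuals) is suppressed; only area_a and area_b are shared.
```

Only the suppressed, aggregated table would leave the operator's environment; the underlying records never do.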

5. Security, privacy and transparency

What are the implications of AI for privacy and security?

The harvesting of significant volumes of data for AI purposes can raise concerns about individuals’ privacy and the potential for misuse or unauthorised access to the AI platform.

Mobile devices generate personal data such as location information, user browsing history, and user communication records. Some generated data is not personal but can become so if it is associated with a particular individual. As with any planned analysis or large-scale use of personally identifiable data, organisations should undertake privacy impact assessments and apply privacy-enhancing controls or technologies to mitigate identified privacy risks.

To build trust in AI activities and reduce the risk of cyber security threats and data breaches, organisations can introduce risk assessments and baseline security controls that limit access to authorised users. Authentication methods and encryption can be used to secure data at rest and in transit, and limiting storage times also helps to reduce exposure.

What steps should be taken to ensure transparency for users?

The use of AI technology, in whatever context, needs to be both responsible and transparent, supported by information notices, such as privacy statements, setting out the intended purposes for processing personally identifiable data and providing detail on where more information can be found. While precise requirements might differ depending on the regulation in place, solution developers should follow a privacy-by-design approach.

Can my agency/organisation have access to the raw mobile data?

No. In order to protect the privacy of individuals, access to raw data should be limited and controlled. Operators are under legal and regulatory obligations to protect their data and should ensure that data used for MBD projects is non-identifiable. Limiting access and using privacy-enhancing controls, such as pseudonymisation, also reduces the risk of personal data breaches. Furthermore, operators possess the expertise and understanding of their own systems to aggregate and analyse data accurately, while safeguarding the rights of their users.
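As one illustration of such a privacy-enhancing control, identifiers can be pseudonymised with a keyed hash before any analysis, so analysts see only stable pseudonyms rather than raw phone numbers, while the secret key remains with the operator. The sketch below uses Python's standard library; the key and the phone number are invented for illustration, and a real deployment would manage and rotate keys under strict controls.

```python
import hashlib
import hmac

# Pseudonymisation sketch: replace a raw identifier (e.g. a phone number)
# with a keyed hash (HMAC-SHA256). Without the operator-held key, the
# pseudonym cannot be linked back to the identifier.
# The key and phone number below are invented for illustration.

SECRET_KEY = b"operator-held-secret"  # in practice: securely managed and rotated

def pseudonymise(identifier: str, key: bytes = SECRET_KEY) -> str:
    """Return a stable pseudonym for an identifier using HMAC-SHA256."""
    return hmac.new(key, identifier.encode(), hashlib.sha256).hexdigest()

p1 = pseudonymise("+441234567890")
p2 = pseudonymise("+441234567890")
# Same input and same key give the same pseudonym, so records can still be
# linked for analysis without exposing the underlying number.
```

Because the mapping is stable under a given key, longitudinal analysis remains possible; rotating the key breaks linkability when a project ends.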

Do I need multiple agreements when I include third parties?

Contractual agreements are required to provide governance mechanisms for the data processing and handling in MBD projects. When collaborating with several organisations, you must ensure that any privacy obligations are set out clearly and that each party knows what it is responsible for and what it can expect of the others. You may be accountable for ensuring that privacy obligations also apply to third parties.

Do I need to involve telecoms regulators?

Laws and regulations will vary by country and mobile phone operators may be subject to licensing conditions and sector specific regulations. All stakeholders in the ecosystem should have a good understanding of the legal and regulatory landscape of the country where they are implementing their AI and mobile big data solution. Once you have that understanding, the next step is to assess whether regulators or other government officials in specific departments need to be notified on a case-by-case basis.

How should organisations manage security and prevent the malicious use of AI technology?

Whether internally developed or sourced from outside the organisation, AI systems can be susceptible to known and unknown security vulnerabilities. Malicious actors may attempt to exploit AI algorithms or manipulate the underlying data to deceive or compromise AI-based security measures within the control of mobile operators and service providers. In addition, adversarial attacks can trick AI systems, such as facial recognition validation processes, by presenting altered or synthetic data, to gain entry into restricted systems.

An important part of preventing a sophisticated malicious attack is to implement comprehensive security measures to protect the mobile network infrastructure from unauthorised access, malware, and other cyber threats. This includes firewalls, intrusion detection systems, and conducting regular security audits to identify and address vulnerabilities. As with all security matters, it is important not to forget the human factor: staff at all levels of the organisation should receive ongoing regular training on security controls and how to identify and report suspicious activity.

AI itself can be used to bolster security measures in the mobile industry. For example, AI systems can identify and mitigate potential threats, detect security anomalies in user behaviour and enhance real-time monitoring for suspicious activities. This can help prevent security breaches and protect user data from malicious attacks.

6. Responsible AI

What are the fundamental requirements for ethical AI?

AI needs to be designed, developed and deployed in a responsible and ethical way that is human-centric and rights-oriented. As an increasingly essential element of the infrastructure on which our society is built, AI needs to be fair, open, transparent and explainable. The mobile industry is committed to the ethical use of AI in its operations and customer interactions: protecting customers and employees, removing entrenched inequality and ensuring that AI operates reliably and fairly for all stakeholders.

For further reading on AI ethics principles, and recommendations on ethical AI implementation see: The AI Ethics Playbook[1] and a related self-assessment questionnaire[2].

What are the key ethical AI principles?

To act ethically, organisations require a guiding framework that explains what good ethical behaviour looks like. As AI has developed, and the potential risks of AI have become clearer, many organisations have written AI ethical principles for this purpose. The GSMA’s AI Ethics Playbook explores some of these principles – fairness, human agency and oversight, privacy and security, safety and robustness, transparency and explainability, accountability – along with full consideration of the potential environmental impact.

Find more about AI ethics principles here.

What measures are mobile operators implementing to ensure their use of AI is ethical?

Orange has established a Data and Artificial Intelligence Ethical Charter that enshrines six key principles, including respect for human autonomy and needs, and equity, diversity and non-discrimination. The implementation of the Charter is monitored by Orange’s AI Ethics Council. Orange has also established in-country local AI ethics referents to adapt methodologies and tools and support implementation.

stc takes an ethical approach to implementing AI use cases. For example, its start-up incubator employs ‘explainable AI’ to make data-driven decisions when scoring and shortlisting start-ups and applications. As well as making the model (including its workflow and the variables considered) transparent, the incubator explains why the model predicts that a particular start-up will be successful. This approach provides accountability in the decision-making process and allows the incubator to make informed decisions based on data, while also providing the start-ups with insights into how they were evaluated and what factors were considered.

Telefónica’s ‘responsible use of AI by design’ methodology encompasses AI principles, awareness and training for employees, a questionnaire, technical tools and a governance model that defines roles and responsibilities. It has identified a new role called the ‘responsible AI champion’ who is the go-to person for questions related to the ethical use of AI. As part of its governance model, Telefónica has created an AI ethics committee consisting of multidisciplinary experts. It has also agreed to develop joint initiatives to promote, foster and implement UNESCO’s Recommendation on the Ethics of Artificial Intelligence (AI)[3].

Telstra has established a Risk Council for AI and Data, which considers the broader human, societal and environmental impacts of AI systems and the decisions they make, along with reviews to check that these systems comply with the law in every jurisdiction in which Telstra operates. This applies to AI developed in-house and AI purchased from third parties. Telstra also ensures these purchased AI technologies are working in line with its ethical principles.

Vodafone has adopted an ‘AI ethics-by-design’ approach, which employs internal controls to govern the end-to-end use of AI. For example, anyone developing an AI-related service needs to carry out a risk assessment to identify use cases that require additional supervision to ensure fairness and avoid unfair preferential treatment. They can draw on a use case library for transparency and best practice templates to ensure their documentation and logging adheres to the required structure and contains the relevant content for auditing.

[1] Source:

[2] Source:

[3] Source:

7. Use Cases

How can AI be used to improve business operations within the telco industry?

In the telecoms industry, AI is having a profound impact. It is enabling mobile operators to improve both connectivity and their customers’ experience. By using AI to optimise and automate networks, mobile operators can increase efficiency, lower energy usage, provide better services and enable more people to become connected. For example, AI can be used for real-time network monitoring, predictive maintenance and to bolster network security, thereby providing customers with better connectivity.

AI systems can also strengthen and enable personalised and meaningful interactions with customers. For example, they can be used to improve automated communications, virtual assistance, customised pricing and technical support. In the security sphere, AI systems can help to detect and prevent fraud, fend off cyber attacks and counter illegal robocalling.

Find further examples here.

How can AI support the UN Sustainable Development Goals (SDGs)?

Mobile operators can provide governments and public agencies with the AI solutions and big data analytics they need to tackle a wide range of problems. Operators can deliver valuable insights that can help to address pressing policy challenges, such as climate change and pollution, the need for better healthcare and transport, and sustainable development, while also responding effectively to extreme weather, natural disasters and epidemics.

Some mobile operators already provide AI capabilities to third parties on a commercial basis. They may deliver AI as a platform capability or they may employ AI to process mobile network data for analytics for third-party organisations, such as governments, traffic planning authorities, energy providers and other commercial organisations.

Find further examples here and here.

What role did mobile operators play in combatting Covid-19?

In response to the pandemic, mobile operators worked with governments and international agencies in at least 40 countries to use mobile big data (MBD) to better understand and respond to the virus. With the support of the UK Foreign, Commonwealth and Development Office, the GSMA responded to requests from 14 low- and middle-income countries (LMICs), and supported the development of MBD products in the Democratic Republic of Congo, Benin, Rwanda and Burkina Faso.

For further reading, please see 'Utilising mobile big data and AI to benefit society: Insights from the Covid-19 response.'

You might also be interested in a Cambridge University Press Special Collection on mobile data analytics to inform the Covid-19 response.

Has the GSMA’s AI4I initiative worked on Covid-19 contact tracing apps?

No. The AI4I initiative supported a wide variety of projects during the Covid-19 pandemic. These used aggregated, anonymised data to create population-level insights whilst upholding the standards laid out in the Covid-19 Privacy Guidelines.