Navigating AI Integration: Addressing Unique Risks and Governance in the Telecommunications Industry
Introduction
Organizations across industries are grappling with the integration of artificial intelligence (AI) into their operations. The telecommunications industry in particular is looking to scale beyond experimentation, leveraging AI to enhance customer service, optimize network operations and drive innovation. However, the proliferation of AI introduces new risks beyond traditional data privacy and security concerns that demand a reevaluation of risk management strategies and governance structures.
AI risks, including questions of bias and transparency as well as the potential for adversarial attacks that manipulate AI systems into performing malicious actions, are multifaceted and can affect a company’s security, reputation, competitiveness and compliance with emerging regulations.
In this white paper, we will explore some unique risks associated with AI, the regulatory landscape that is beginning to take shape in response to these risks, and the governance and capabilities that organizations must develop to navigate this new terrain effectively. Our focus will be on providing actionable insights that can help companies, particularly those in the telecommunications sector, to safely and effectively harness the power of AI.
Context: The Ubiquity of AI Across the Organization
The typical focus for AI governance teams has been proprietary models and, increasingly, the incorporation of as-a-service AI models (e.g., ChatGPT, IBM Watson) into in-house systems. However, AI is embedded in many other areas with implications for security, privacy and compliance that need to be considered, including cloud platforms, software, third-party vendors, devices, and emerging productivity platforms. Understanding how AI has been embedded within these technologies, how it affects your organization, and what policies and protections are needed to manage it is critical to effectively incorporating this tool into your business. It is imperative to address these areas head-on to ensure that AI serves as a value driver rather than a vector for vulnerability.
Beyond Data Privacy: Novel Risks Presented by AI
Although AI’s consumption of data at scale amplifies existing data privacy and security concerns, it also presents new considerations for organizations that are inherent in the way AI technology functions.
Security Risks
- Devices embedded with AI expand the attack surface for cyber threats, necessitating advanced security protocols to prevent unauthorized access.
- Evolving capabilities such as 5G APIs and AI agents also expand the attack surface and could introduce new risks.
- AI systems can be vulnerable to adversarial attacks, such as prompt injection, where malicious inputs manipulate AI behavior (a simplified screening sketch follows this list).
- Example: Hackers manipulate AI-powered network traffic analysis tools to obfuscate malicious activity, enabling data breaches or service disruptions.
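To make the prompt-injection risk concrete, here is a minimal sketch of one illustrative control: screening untrusted input against a short pattern list and keeping it clearly separated from operator instructions before it reaches a language model. The function names and patterns are assumptions for illustration only.

```python
import re

# Illustrative patterns that often appear in prompt-injection attempts;
# a real deployment would layer additional controls beyond keyword matching.
SUSPICIOUS_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"disregard .* system prompt",
    r"you are now",
]

def screen_user_input(text: str) -> bool:
    """Return True if the input resembles a prompt-injection attempt."""
    lowered = text.lower()
    return any(re.search(pattern, lowered) for pattern in SUSPICIOUS_PATTERNS)

def build_prompt(system_instructions: str, user_text: str) -> str:
    """Keep operator instructions and untrusted input clearly separated."""
    if screen_user_input(user_text):
        raise ValueError("Potential prompt injection detected; route to human review.")
    return f"{system_instructions}\n\n[UNTRUSTED USER INPUT]\n{user_text}"

# Example usage with a benign ticket description.
prompt = build_prompt("Summarize the trouble ticket.", "Router resets nightly at 2 a.m.")
```

Pattern matching alone is easily bypassed; the sketch only shows where an input-screening control sits in the prompt-assembly path.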
Competitiveness Risks
- AI requires regular, ongoing investment across all layers of the AI architecture to remain a competitive advantage; organizations that underinvest risk falling behind competitors who harness AI more effectively and securely.
- AI models trained primarily on historical data may struggle with rapidly changing conditions, potentially leading to suboptimal outcomes (a drift-monitoring sketch follows this list).
- Example: AI systems for network traffic prediction and capacity planning might fail to account for sudden, unprecedented events (like natural disasters), leading to network congestion and service quality issues.
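As an illustration of the historical-data limitation above, this sketch flags possible model drift when recent prediction error rises well above a baseline established during validation. The error values, threshold and function names are hypothetical.

```python
from statistics import mean

def detect_drift(recent_errors, baseline_mae, tolerance=1.5):
    """Flag possible drift when recent mean absolute error exceeds
    the validation baseline by the given tolerance factor."""
    return mean(recent_errors) > tolerance * baseline_mae

# Illustrative values: baseline error from validation, recent errors from
# live capacity-planning predictions during an unusual demand spike.
if detect_drift(recent_errors=[12.0, 15.5, 18.2], baseline_mae=6.0):
    print("Capacity-planning model may be stale; trigger a retraining review.")
```

A check like this does not fix the underlying model; it simply surfaces when conditions have moved away from the data the model was trained on.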
Regulatory and Compliance Risks
- Rapidly evolving AI regulations strain existing governance and compliance programs, which must keep pace to avoid potential penalties and minimize reputational risk.
- Further, global organizations must navigate a complex landscape of compliance requirements, which can vary by region and industry. Noncompliance risks legal penalties and reputational damage.
- Example: AI-powered call monitoring systems improperly make customer proprietary network information (CPNI) available for marketing purposes, contrary to customers’ elections (a consent-check sketch follows this list).
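The CPNI example above can be illustrated with a minimal consent gate: before call-derived insights flow into a marketing pipeline, records are filtered on the customer’s recorded election. The record structure and field names below are illustrative assumptions, not a reference implementation of CPNI compliance.

```python
from dataclasses import dataclass

@dataclass
class CustomerRecord:
    customer_id: str
    cpni_marketing_opt_in: bool  # hypothetical field capturing the customer's CPNI election

def filter_marketing_eligible(records):
    """Pass call-derived insights downstream only for customers who opted in."""
    return [r for r in records if r.cpni_marketing_opt_in]

customers = [
    CustomerRecord("C-1001", cpni_marketing_opt_in=True),
    CustomerRecord("C-1002", cpni_marketing_opt_in=False),
]
print([r.customer_id for r in filter_marketing_eligible(customers)])  # ['C-1001']
```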
Interoperability and Integration Risks
- Legacy telco system implementations have created large, often incompatible silos of information, requiring a rethinking of the enterprise’s data architecture for effective AI integration.
- Adoption of an agentic architecture introduces new challenges, including the need to rethink “connectors” for seamless operation.
- Example: An AI-based fraud detection system fails to integrate smoothly with billing and network traffic systems, delaying fraud identification (a minimal connector sketch follows this list).
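One common way to address this integration gap is a connector (adapter) layer that normalizes events from incompatible systems into a shared schema before the AI system consumes them. The interfaces and field names below are illustrative assumptions, not a prescribed design.

```python
from abc import ABC, abstractmethod

class EventConnector(ABC):
    """Common interface the fraud-detection system consumes."""
    @abstractmethod
    def fetch_events(self) -> list[dict]:
        ...

class BillingConnector(EventConnector):
    def fetch_events(self) -> list[dict]:
        # Hypothetical mapping from a legacy billing export into a shared schema.
        return [{"source": "billing", "account": "A-42", "amount": 129.95}]

class NetworkTrafficConnector(EventConnector):
    def fetch_events(self) -> list[dict]:
        return [{"source": "network", "account": "A-42", "bytes": 10_485_760}]

def collect_events(connectors: list[EventConnector]) -> list[dict]:
    """Aggregate normalized events so fraud rules see one consistent feed."""
    return [event for connector in connectors for event in connector.fetch_events()]

print(collect_events([BillingConnector(), NetworkTrafficConnector()]))
```

Each new legacy system then requires only a new connector rather than bespoke changes to the fraud-detection logic.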
Customer Trust Risks
- AI in customer-facing services can raise concerns about data privacy, transparency and the perceived loss of human touch.
- Poorly implemented AI systems can degrade user experience with irrelevant or incorrect recommendations.
- Example: AI-powered personalized pricing models perceived as unfair or discriminatory could erode trust and invite regulatory scrutiny.
How You Can Manage AI Risks
This section outlines some potential, near-term strategies to manage AI risk based on recognized frameworks and market best practices:
- Know Where AI Exists in Your Enterprise: Develop a clear map and inventory of where AI is deployed across the organization to identify and understand risks (a simple inventory-record sketch follows this list).
- Define Limits of AI: Leverage existing frameworks such as the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework as a foundational guide.
- Create AI Acceptable Use Policies: Update privacy and device usage policies and develop new, high-level AI policies that align with company strategy and address the specific risks AI poses.
- Rules of Engagement for Third-Party Use of AI: Incorporate AI-specific activities within existing procurement processes, including an understanding of vendors’ own AI governance approach and capabilities.
- Evolve Employee Education to Include Use of AI: Provide training to employees and vendors regarding AI risks and company policies to engender a culture of risk management.
- Create Robust AI Governance: Establish a process to regularly review and update governance processes, the risk management framework, and privacy and device usage policies so they remain current with rapid changes in AI capabilities.
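As a starting point for the inventory recommended in the first item above, this sketch shows the kind of fields an AI inventory entry might capture. The structure and example values are illustrative assumptions rather than a mandated schema.

```python
from dataclasses import dataclass, field

@dataclass
class AIInventoryEntry:
    """One row in an enterprise AI inventory; fields are illustrative."""
    system_name: str
    business_owner: str
    vendor_or_internal: str
    use_case: str
    data_categories: list[str] = field(default_factory=list)
    risk_tier: str = "unassessed"  # e.g., low / medium / high per internal criteria

inventory = [
    AIInventoryEntry(
        system_name="Call-center assistant",
        business_owner="Customer Care",
        vendor_or_internal="Third-party SaaS",
        use_case="Agent response suggestions",
        data_categories=["CPNI", "call transcripts"],
        risk_tier="high",
    ),
]
print([e.system_name for e in inventory if e.risk_tier == "high"])
```

Even a lightweight record like this makes it possible to filter for high-risk or vendor-supplied systems when prioritizing assessments.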
Conclusion: How A&M Can Help
A&M’s global teams of AI, technology, security, risk management, and industry experts can assist whether you are just starting your AI deployment journey or are looking to secure and enhance existing programs. Here are some specific areas where we engage:
- Developing an enterprise strategy and governance approach to guide AI development and deployment
- Performing in-depth AI risk, data privacy and cybersecurity assessments and developing mitigation plans
- Developing enhanced governance, risk management and procurement processes and systems
- System and/or vendor selection and deployment management, providing objective advisory and execution support
We encourage organizations to embrace the strategic use of AI and to balance potential risks against significant efficiency, revenue or other benefits. Those organizations that develop capabilities to safely and effectively use powerful AI systems are poised to thrive in the coming years.