Navigating regulatory frameworks and ethical considerations in artificial intelligence-augmented and cloud-driven telecom systems
Synopsis
Today, artificial intelligence (AI) increasingly dominates the services delivered by telecommunication systems, and its importance cannot be overstated. AI enables machines and systems to perform tasks once reserved for human experts; it is revolutionizing the delivery of telecom services and is expected to create business value of approximately $1.8 trillion by 2025, with the potential to increase operating income in the telecom sector by 40%, a significant rise over previous decades. Over the years, the development of AI has become a source of strategic competitive advantage in the telecom sector. AI encompasses a wide range of techniques, including supervised and unsupervised learning, deep learning, reinforcement learning, natural language processing, and generation and reasoning capabilities, among others. These techniques allow telecom providers to automate tasks such as testing, monitoring, and self-recovery, enhancing operational efficiency and agility and enabling better services. In this context, testing and monitoring are two highly pertinent areas in which AI can maximize its impact, providing valuable insights and alerts for operational networks and for incidents affecting critical national infrastructure (Catalano & Tan, 2018; Dubey & Kim, 2019; Lee & Kim, 2020).
Understanding the regulatory frameworks for AI in telecom services, and their broader ethical implications, has become both highly relevant and challenging. The possibility of deploying AI-augmented adaptive systems as a service, supported by a cloud infrastructure and characterized by a complex service-chaining architecture, raises many important ethical concerns and operational needs that remain largely unexplored. For example, an AI- and cloud-driven telecommunications infrastructure can operate essentially as a ‘black box’, its internal operation opaque to both users and customers. Operating such systems therefore demands a much higher level of awareness and control from service providers and network operators to ensure that services are delivered in a non-discriminatory and controlled manner. As a result, there is both a commercial imperative and a regulatory challenge in developing comprehensive and practical ethical models for testing and monitoring AI across diverse applications.