Preparing for the AI Act: Implications & Essential Steps to Future Proof the TME Industry
With the “sudden” emergence of ChatGPT, artificial intelligence (AI) is in every conversation. But with new regulations and legislation emerging in many jurisdictions, companies in every industry need to stay current on the laws that could affect their business, and on their consequences. The exponential increase in scenarios where AI is now applicable also increases the possibility of unintended consequences and undue harm when AI products are not created, designed or deployed responsibly and ethically. The EU’s proposed AI Act (AIA) aims to create rules for the development and use of AI in the European Union, with strict requirements for both developers and users of AI. In this respect it will mirror the earlier rollout of GDPR.
While the AIA has been provisionally agreed upon by the European Parliament and the Council and is expected to take effect by 2026, companies MUST begin assessing and modifying their AI systems and products now to prepare for the new regulation. But what will the AIA mean for the telecommunications, media and entertainment (TME) industry, and how should TME companies prepare? Let’s dive in…
How AI is Transforming the TME Industry
AI is becoming (and in many cases, already is) an integral component of TME business models, and the use cases are increasing daily.
Many media and entertainment companies have built their entire operations and offerings on AI capabilities — from capturing and leveraging customer information to curate and deliver personalized content, to enhancing decision making, to using it for marketing and production services. As a result, AI has significantly transformed media companies by improving efficiency and enhancing the customer experience.
Similarly, AI is becoming ubiquitous across telecommunications companies in areas like network optimization, quality control and customer service. AI delivers breakthrough improvements in predictive maintenance, network security and fraud detection, which not only improve a telecom’s own network functionality but also create a better, safer customer experience.
Today, in TME, AI is table stakes. Given the transformative potential of AI in the industry, TME companies need to reassess their products and platforms under the proposed regulations. A good place to start is determining which risk category applies to their AI solutions.
The AI Act Risk Categories
The AIA employs a risk-based strategy centered on risk identification, mitigation and elimination. To remain compliant, companies must know which of the four risk categories their AI systems fall into under the proposed act. The risk categories include:
- Unacceptable risk is the highest category, covering AI practices that will be banned outright in the EU because they pose immediate threats to individuals and society. Prohibited practices include real-time biometric identification systems and the use of protected personal information, social scoring algorithms and manipulative systems. Moreover, any AI application that interferes with decision making, manipulates physical or psychological behavior, endangers privacy, data protection, public health or the environment, or can be used for unlawful purposes will be prohibited under the new law.
- High-risk AI is likely to constitute the majority of AI systems currently in practice. These systems have the potential to cause significant harm to individuals or groups of individuals. Examples of AI use cases that could be relevant to TME include biometric identification, categorization systems and critical infrastructure systems (such as safety-related utilities, employment-related systems, safety components in products, healthcare applications and content moderation systems). Additionally, an AI system that pools, utilizes and forecasts large amounts of data, specifically personal data, could be scrutinized as a high-risk system.
- Low-risk AI systems do not pose direct harm to individuals or society but still require transparency and accountability mechanisms to ensure they remain at this risk level. This category includes AI used in customer service and chatbots, which TME companies regularly deploy. Companies must provide clear information and documentation about the AI system's capabilities and limitations, and ensure that it operates in a consistently fair and transparent manner.
- Minimal risk is the lowest category and covers AI systems that fall outside the AIA's specific obligations. However, companies that use minimal-risk AI systems must still confirm that they remain compliant with the new regulation as their systems evolve. Minimal-risk AI includes systems used in gaming and image recognition, which media and entertainment companies use extensively throughout their business models and offerings.
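For teams building an inventory of their AI systems, the four tiers above can be captured in a simple triage helper. The following is a hypothetical sketch: the category names follow the act, but the keyword rules are illustrative placeholders, not legal criteria, and any real categorization requires legal review.

```python
from enum import Enum

class RiskCategory(Enum):
    """The four tiers defined by the AI Act, from most to least regulated."""
    UNACCEPTABLE = "unacceptable"  # banned outright in the EU
    HIGH = "high"                  # allowed, but with strict requirements
    LOW = "low"                    # transparency obligations apply
    MINIMAL = "minimal"            # outside the act's specific obligations

# Illustrative tag sets only -- not an exhaustive or authoritative mapping.
_PROHIBITED = {"social_scoring", "realtime_biometrics", "behavioral_manipulation"}
_HIGH_RISK = {"biometric_identification", "critical_infrastructure",
              "employment_screening", "content_moderation"}
_TRANSPARENCY = {"chatbot", "customer_service"}

def classify(use_cases: set[str]) -> RiskCategory:
    """Return the highest applicable risk tier across a system's use cases."""
    if use_cases & _PROHIBITED:
        return RiskCategory.UNACCEPTABLE
    if use_cases & _HIGH_RISK:
        return RiskCategory.HIGH
    if use_cases & _TRANSPARENCY:
        return RiskCategory.LOW
    return RiskCategory.MINIMAL
```

Note that a system is triaged at its most regulated applicable tier: a chatbot that also performs social scoring lands in the unacceptable category, not the low one.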
Prepare Now or Pay Later
TME companies must take a proactive approach: categorize their AI systems by risk level and adapt them accordingly, or risk penalties. To leverage the power of AI and reap the business benefits, there are three essential actions that TME companies MUST take now to be prepared:
- Conduct a risk assessment of individual AI applications to identify which risk category they fall into under the AIA. A thorough risk assessment will identify potential risks caused by AI and define which AI systems are prohibited, need to be adapted, require documentation and transparency mechanisms, or are acceptable to use in their current state in the EU market. In doing so, TME companies can better understand how to mitigate and eliminate potential risks, ensuring that they are compliant with the guidelines proposed in the AIA. A third-party risk assessment can provide a comprehensive review of AI systems against the new AIA, providing expert insights and recommendations without overwhelming a company’s development team.
- Develop a compliance plan to ensure AI systems meet the new regulatory requirements of the AIA. The compliance plan should include a list of the high-risk AI systems and a transition plan on how the required changes will be addressed and how they will impact end users. The plan should also include recommendations for low-risk systems that require mitigation, such as gaining additional and explicit consent from end users or deprecating services that are not in use. Developing an effective compliance plan is an iterative process that may take multiple months and will need to be updated regularly to ensure continued compliance with the AIA and other regulations in the future.
- Increase transparency of AI systems and decision-making processes by providing evidence of each AI system's risk categorization, demonstrating that prohibited practices are not in use and that high-risk systems meet the act's requirements. TME companies should also disclose how AI algorithms are being used to achieve business goals, and explain the decision-making process of the AI as well as who is responsible for creating these systems. Additionally, companies need to be transparent with their customers about the information they collect, how they use it and how they distribute it. Human oversight is crucial to enhance transparency, increase accountability and ensure compliance with regulations, particularly in high-risk applications. Failure to be transparent can lead to internal distrust, customer dissatisfaction and regulatory non-compliance.
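Taken together, the three actions above amount to maintaining a living inventory of every AI system, its assessed tier, its accountable owner and its remediation status. One hypothetical shape for such an inventory record follows; the field names and readiness rule are illustrative, not prescribed by the AIA.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in an AI compliance inventory (illustrative fields only)."""
    name: str
    risk_category: str                  # e.g. "high", "low", "minimal"
    oversight_owner: str                # human accountable for the system
    documentation_url: str              # capabilities/limitations disclosure
    remediation_actions: list[str] = field(default_factory=list)

    def is_audit_ready(self) -> bool:
        """High-risk systems need an owner, documentation and no open actions."""
        if self.risk_category != "high":
            return True
        return bool(self.oversight_owner and self.documentation_url
                    and not self.remediation_actions)
```

A record like this makes the transition plan concrete: high-risk systems with open remediation actions surface immediately, while low- and minimal-risk systems pass through with their transparency documentation attached.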
Considering the disruptive potential of the AIA, companies MUST follow these essential steps, re-evaluate their systems and re-envision their business models today to better prepare for the coming changes.