05

Governance & Responsible AI

AI REPORT 2025

INTRODUCTION

Is Governance Slowing or Speeding Up AI Potential?

Properly implemented governance is crucial for mitigating the challenges that come with using AI across an organization. It ensures that AI is used responsibly and in alignment with the company's goals and values.

Have companies reached fully effective governance? And how is their progress shaping their views on responsible AI?

THE RESULTS

Enterprises Are a Long Way from Reaching Governance Maturity 

75%

of advanced companies agree or strongly agree that their leadership has a clearly defined AI strategy, compared to 70% of disruptors.  

4%

of disruptors, on the other hand, say they have a fully fleshed-out governance plan, compared to 1% of the rest of our participants.

63%

of disruptors already have a C-suite-level role dedicated to AI. Yet AI ethics and governance-related roles were the second least in demand, after AI product managers.

QUESTION

How is your organization approaching AI in light of impending regulations?

18

Months

Average time estimated across companies at all levels to roll out an effective AI governance model.

WHAT THEY MEAN

Organizations Know They Are Behind on Governance, Yet They Are Not Letting It Slow Progress … 

While companies agree that their leadership has a strong strategy in place, they understand that their AI governance is not where it should be — and it won’t get there for a while.

With just 1% of all companies saying that they have a fully effective governance model, some executives are turning toward outside expertise for help, bringing Chief Artificial Intelligence Officers (CAIOs) on board. In fact, 63% of disruptors have a dedicated CAIO to oversee and guide their AI strategies. Among companies that don’t have this specific role, many combine AI oversight with other leadership positions, such as Chief Data Officer (33%) or Chief Information Officer (33%). This highlights the importance of AI governance at the highest levels of the organization, ensuring that AI initiatives align with broader business strategies. 

Despite this, few plan to invest in creating AI ethics and governance roles, favoring instead machine learning engineers and AI researchers — even with the understanding that their current approach to implementing governance is likely to take a year and a half.

… And They Are Better Equipped to Handle New Regulations 

No less significant is the issue of compliance, given the tsunami of regulations coming from countries, regions, states and even city governments.

The most comprehensive legislation to date is the European AI Act, a risk-based framework that defines four risk levels. “Unacceptable” uses of AI include those that target children or pose a “clear threat” to human safety. “High-risk” applications are those deployed in critical infrastructure, healthcare and other sensitive domains; uses of AI in these scenarios are subject to audit, outside review and other obligations. Non-compliance with the AI Act's prohibited practices can lead to administrative fines of up to EUR 35,000,000 or, if the offender is an undertaking, up to 7% of its total worldwide annual turnover for the preceding financial year, whichever is higher. Such companies will no doubt also suffer substantial reputational damage.
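
To make the “whichever is higher” ceiling concrete, the following is a minimal illustrative sketch of that calculation in Python. The turnover figures and the function name are hypothetical, chosen only for illustration; the EUR 35,000,000 floor and the 7% share are the figures described above.

# Illustrative sketch only: under the AI Act, the maximum administrative fine
# for prohibited practices is EUR 35,000,000 or 7% of total worldwide annual
# turnover, whichever is higher. The turnover values below are hypothetical.
FIXED_CEILING_EUR = 35_000_000
TURNOVER_SHARE = 0.07

def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """Return the upper bound of the fine for a given worldwide annual turnover."""
    return max(FIXED_CEILING_EUR, TURNOVER_SHARE * worldwide_annual_turnover_eur)

# A hypothetical undertaking with EUR 1 billion in turnover faces a ceiling of
# EUR 70 million, since 7% of its turnover exceeds the EUR 35 million floor.
print(max_fine_eur(1_000_000_000))  # 70000000.0

# For a smaller firm with EUR 100 million in turnover, the fixed floor applies.
print(max_fine_eur(100_000_000))    # 35000000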

But the AI Act is just the tip of the iceberg. Other countries are busily formulating, or have already enacted, similar regulations. And AI-enabled applications will of course be subject to existing laws and regulations such as the General Data Protection Regulation (GDPR).

The data shows that top-performing companies are better equipped to navigate these AI-related regulatory hurdles. They are more likely to continue implementing their AI programs while remaining vigilant about new regulations. Being proactive in addressing these challenges is key to staying competitive and compliant in an AI-driven world.

HEAR FROM EPAM LEADERS

“At EPAM, we recognize that true innovation thrives when guided by a strong ethical and governance foundation.  We focus on developing AI-related solutions, both for our own use and for our clients, that incorporate evolving governance and compliance considerations while also delivering transformative business value.”

Ed Rockwell
General Counsel & Secretary, SVP, EPAM

WHAT YOU CAN DO ABOUT IT

Invest in New Roles to Speed Up Governance with an Eye on Regulation

Establish an AI Governance Body

Define Leadership Roles for AI Governance

Develop Clear AI Policies

Invest in Training & Awareness

Monitor & Adapt

HOW WE DO IT

Partnering with a Global CPG Leader to Advance Responsible AI Adoption 

A leading global CPG company collaborated with EPAM to implement a Responsible AI governance process, ensuring safe and ethical use of AI/ML technologies across its operations. Leveraging EPAM’s Responsible AI Framework, the partnership introduced standardized procedures for risk assessment and oversight while launching a comprehensive training program for employees. This initiative is enabling the company to scale GenAI solutions safely while driving impactful innovation.

HOW WE DO IT

Ensuring Regulatory Compliance for an AI-Enabled Medical Imaging Solution

EPAM partnered with a global pharmaceutical company to assess and enhance its AI-enabled medical imaging solution. EPAM analyzed the data workflows, audited the medical imaging software, and identified compliance gaps along with steps for remediation. With recommendations tailored to its needs, the client walked away with a clear compliance vision and ongoing risk mitigation strategies.

HOW WE DO IT

Reviewing a Consumer-Facing AI System for Compliance 

EPAM worked with a global retailer to develop an AI-powered solution to enhance online shopping by summarizing user-generated product reviews with large-scale sentiment analysis and advanced NLP techniques. The system includes robust guardrails to filter restricted content, account for FTC compliance and address ethical concerns. This innovation improved customer decision-making and boosted revenue, showcasing our commitment to using AI responsibly.
