Companies worldwide are pursuing the promises of artificial intelligence (AI): increased efficiency, productivity, and ultimately, revenue. In the Asia Pacific region alone, many companies are already deploying generative AI in actual production environments.
In doing so, however, they are exposing themselves to evolving AI-related risks.
In a virtual media briefing held on Wednesday, Feb. 26, executives from technology firm Accenture discussed the company's recent survey on organizations' AI-related risk readiness.
A major part of the study focused on how to protect against AI’s potential dangers by responsibly designing, deploying, and using AI to create value and build trust.
“To effectively scale AI, particularly generative AI (gen AI) and agentic AI, businesses need to invest in building trust among their people and their customers, ensure they have the right data foundation, and operationalize responsible AI. That’s the only way to create long term, sustainable value,” stated Ryoji Sekido, Accenture co-CEO for Asia Pacific and CEO of Asia Oceania.
The research revealed that while organizations across the globe — including those in Asia Pacific — are accelerating various types of AI adoption from gen AI to agentic AI, less than 1% have successfully implemented sufficient responsible AI measures in their operations to mitigate AI-related risks.
These risks span privacy and data governance, security, transparency and reliability, human interaction, and even environmental impact.
Additionally, as more governments start regulating AI, the study predicted that companies without responsible AI practices will be vulnerable to non-compliance risks.
Seventy-seven percent of companies surveyed are either already facing AI regulation or anticipating its effect over the next five years.
Ninety percent also expect to be subject to AI-adjacent legal obligations, such as cybersecurity as well as data and consumer protection, over the next five years.
Understanding risks and their repercussions
Companies, though, are not blind to this technology's emerging threats. Organizations say they understand the gravity of these risks and the corresponding value of responsible AI.
In fact, the respondents estimate that a single major, AI-related incident would, on average, eliminate 24% of their firm’s market capitalization value.
Conversely, companies estimate that fully mature responsible AI practices will increase their AI-related revenue by an average of 18%.
Asia Pacific companies echo this sentiment. Forty-eight percent of Asia Pacific companies view responsible AI as a strategic tool for AI-related revenue growth and plan to increase their responsible AI investments from 10% to 50% over the next two years.
Companies are not as prepared as they think
Despite the significance companies place on responsible AI, Accenture’s survey revealed that organizations perceive themselves to be more prepared for these threats than they actually are.
Accenture determined this perception gap by measuring respondents’ organizational maturity and operational maturity.

Organizational maturity refers to the extent and effectiveness of an organization's current responsible AI practices, while operational maturity measures the extent to which a company has adopted enough responsible AI measures to mitigate AI-related risks.
When organizational and operational maturity were viewed in aggregate, none of the surveyed companies had reached the final maturity stage, indicating that all respondents are currently unable to implement responsible AI practices across their business's applicable risk areas.
Notably, though, the research found that Asia Pacific organizations are ahead of the other regions in terms of organizational maturity.
Nineteen percent of Asia Pacific respondents were also reported to be on the right path in terms of both organizational maturity and operational maturity compared to 15% globally.
The study summarized that even if organizations globally have some responsible AI practices in place, “companies may still be underestimating the number of risks they are exposed to, the quantity of measures required and the completeness of how they are implemented.”
Strengthening AI risk-preparedness
Far from suggesting that companies slow down their AI implementations, Accenture is urging companies to prioritize responsible AI practices so they can smoothly adapt to emerging risks and regulations while confidently scaling their AI operationalization.
Vivek Luthra, Accenture’s Asia Pacific Data & AI lead, laid out the benefits of making responsible AI a priority.
“Responsible AI is done for three purposes. One, to create value. Second, it builds trust, which is advantageous because creating value can only happen if it’s built in a trusted way. Three, it protects because as you roll AI out, you need to ask how do you protect your organization, business, and stakeholders from potential AI risks,” Luthra said.

Accenture recommends companies concentrate on the following five priorities to nurture responsible AI practices in their organizations:
First, organizations must establish AI governance and principles. This priority entails developing a responsible AI strategy and roadmap that includes clear policies, guidelines and controls as well as implementing a robust AI governance operating model.
Second, companies must conduct systematic AI risk assessments. When crafting these assessments, Accenture further advised that organizations’ approaches to screening and classifying risks from AI use cases must be scalable across an AI tool’s lifecycle and value chain, which would help them better identify and respond to AI risks.
Similarly, the third priority requires companies to systematically and continuously test their AI systems. Ongoing testing ensures that risk mitigation measures are maintained effectively.
Next, companies must set up a dedicated AI monitoring and compliance function to ensure their AI models remain regulatory compliant, ethical and sustainable. This priority is particularly urgent for gen AI. Accenture reports that gen AI usually has less data and model transparency and a higher incidence of hallucinations, bias, and IP or copyright breaches.
Lastly, companies must train their workforces, as employees can be either a source of risk or, when properly trained, the first line of defense in risk mitigation. Properly training employees means teaching them not only how to use AI efficiently, but also how to use it in a way that reduces this technology's risks for their companies.
In Asia Pacific’s case, responsibly scaling AI requires a particular emphasis on trust and improving data quality.
The study found that 78% of Chief Experience Officers (CXOs) in Asia Pacific businesses say that realizing AI's full potential depends on a strong foundation of trust, both externally and internally.
In short, companies must win not only the trust of their customers with their AI initiatives, but also that of their employees. Earning employee trust goes beyond offering upskilling opportunities; personnel must be assured of job security in this AI era.
In addition, 30% of surveyed Asia Pacific business leaders cite data and tech limitations as one of the foremost barriers to safely scaling their AI implementations.
Data is the bedrock of an accurate, reliable, and effective AI model. Luthra recommended that Asia Pacific organizations improve their data quality by adopting modern data management practices, such as building data products that integrate platform, engineering, management, governance, quality standards, business, and data owners so that data becomes secure and self-describing.
The study, titled “From Compliance to Confidence: Embracing a new mindset to advance responsible AI maturity,” surveyed C-suite executives at 1,000 companies across various industries in 22 countries. The survey was designed by Accenture in collaboration with Stanford University and conducted in 2024.