Saturday, April 27, 2024

IBM calls on Asean markets to push responsible GenAI use to drive rapid adoption

Organizations across the Asean region are eagerly exploring the applications of generative artificial intelligence (GenAI). In fact, this technology is expected to be a significant growth driver, and the Asean GenAI market is predicted to experience a compound annual growth rate of more than 24.4 percent from 2023 to 2030.

However, technology giant IBM warned that the widespread adoption of this technology may be limited by distrust and an increasingly complex regulatory landscape.

In a virtual roundtable held on March 19, IBM discussed how the risks accompanying GenAI and anticipated regulatory consequences are eroding trust in it, how balanced AI regulation by governments can promote GenAI adoption, and how organizations can achieve responsible AI use.

Catherine Lian, IBM Asean general manager and technology leader, kicked off the meeting by stressing that “It is very clear that organizations with the most advanced generative AI will have a competitive advantage…[though] without responsible AI and an AI governance framework, companies will not be able to adopt AI at scale.”

Stephen Braim, IBM Asia Pacific vice president for government and regulatory affairs, added: “If clients, companies or government don’t trust the AI, the underlying data, the way it’s put together and its built-in bias, the use of it will be lacking.”

Case in point, according to a 2023 study by the IBM Institute for Business Value, over 72 percent of executives are choosing to forgo the use of AI in their organizations due to concerns over AI ethics and safety.

These concerns are warranted as, like with many powerful tools, the benefits of GenAI are accompanied by dangers. For example, the massive amounts of data required to train GenAI models could infringe on intellectual property, data privacy, and data usage rights.

In addition, the lack of clarity surrounding how GenAI models arrive at a decision, the possibility of biased answers, and the risk of hallucinations prevent both businesses and consumers from relying on their outputs.

GenAI is even predicted to facilitate the spread of disinformation by expediting the production of legitimate-seeming write-ups and deepfakes.

Lastly, the impacts of noncompliance with technology-related regulations are only growing. In its 2023 Data Breach report, IBM found that Asean organizations lose an average of $3.05 million in revenue from a single non-compliance event.

The regulatory landscape is also becoming more rigorous. In Asean, legislation, guidelines, and frameworks revolving around data privacy and the usage of AI have either been published or are in the works in the Philippines, Singapore, Indonesia, Malaysia, Vietnam, and Thailand.

As such, many businesses are wary of investing in this technology because the regulatory landscape and the repercussions for violating it are not yet set in stone.

IBM, however, maintains that both governments and organizations have a role to play in promoting the responsible use of GenAI to encourage its swift and early adoption.

IBM says the government’s role is to craft balanced regulations that ensure user safety and privacy, as well as create a space that encourages technological innovation. To do so, the technology company has a number of proposals.

First among them is to regulate AI risk, not AI algorithms. In other words, rather than regulating the technology itself, IBM advises that regulation must account for the situation in which AI is deployed and ensure that the high-risk uses of AI are regulated more strictly.

Second, IBM argues that regulations should not grant AI creators and deployers immunity from liability. Instead, they should account for the different roles of creators and deployers and hold each accountable in the context in which they develop or deploy AI.

Lastly, IBM strongly recommends that governments should not embark on an AI licensing regime. Not only could excessive licensing produce a form of regulatory capture, it could also prevent organizations from maximizing the potential of GenAI for the good of the nation.

Notably, Braim commented during the briefing that the aforementioned Asean countries already developing their AI guidelines and frameworks are striking the right balance.

He found that their regulations feature a “light touch, risk-based approach” that still allows businesses to innovate with GenAI.

Governments alone, however, cannot increase trust in GenAI. IBM maintained that businesses themselves must use GenAI responsibly to increase its trustworthiness, and shared the principles that guide its own responsible AI usage.

First, IBM believes that AI systems must be transparent, explainable, and private. To demystify and increase confidence in this advanced technology, AI systems must be able to provide human-interpretable explanations for their predictions and insights, include and share information on how they were designed and developed, and prioritize and safeguard consumers’ privacy and data rights.

Next, they must be fair. To prevent bias, AI systems must promote the equitable treatment of individuals or groups, which may also depend on the context in which the AI system is used.

Lastly, AI systems must be robust, which IBM defines as having the ability to effectively handle exceptional conditions, such as abnormal prompts.

If organizations successfully cultivate trust in AI, IBM predicts they will see improved adoption rates and the ability to operationalize AI, reduced risks and fewer failures, competitive advantage and differentiation, and a higher return on AI investments. Moreover, they will also gain public trust and consumer loyalty while retaining and increasing investor confidence.

“Essentially, if you don’t have trustworthy AI, you won’t be able to adopt AI with speed and scale,” said Christina Montgomery, IBM vice president and chief privacy and trust officer.

“We talked in the beginning about [how] privacy concerns, bias concerns, ethical concerns are holding companies back from adopting AI. The way to get there is to incorporate [a trustworthy AI approach in] organizational governance,” Montgomery said.
