
Is AI about to upend PH journalism, education sectors? Experts give their take

ChatGPT only revealed the obvious. The artificial intelligence (AI) revolution is here, and it's already changing how people work. As employees and students scramble to future-proof themselves, though, they often don't know where to start learning about this technology.

To help answer this question, the Philippine Communication Society (PCS), in partnership with TVUP, organized a series of talks under the banner “AI, naku!”. The seminars aimed to demystify AI and explain how it may change the local media, communication, and education fields.

Last June 28, the series held its final hybrid seminar that specifically tackled the impact of the technology on the country’s journalism and education sectors.

“The AI revolution or evolution… is happening whether we like it or not,” asserted Dr. Elena Pernia, PCS president and advisor for public affairs, during the seminar’s opening remarks.

“Without a doubt, we — the academics, the administrators, and the practitioners — need to seriously examine the impact of AI in our current and future practices,” she said.

The culminating event brought together local and international experts on the topic. The panelists included Dominic Ligot, Cirrolytix founder and CTO and PCIJ board of trustees member; Jo Hironaka, advisor for communication and information at UNESCO’s multicultural regional office–Bangkok; Dr. Didith Rodrigo, head of the Ateneo Laboratory for the Learning Sciences and professor in the Department of Information Systems and Computer Science; and Jean Linis-Dinco, cybersecurity Ph.D. student.

Ligot, Hironaka, and Rodrigo began by discussing the potential benefits AI offers journalists, teachers, and students.

Ligot kicked off his presentation on journalism and education by clarifying that generative AI tools like ChatGPT are not search engines that pull existing data from databases.

He said these tools generate content from scratch based on patterns learned during training, so they may not produce factually reliable information and their outputs should not be submitted as original work.

That being said, he noted that the technology is ideal for analyzing and summarizing existing material and organizing ideas. When used this way, Ligot maintained, it can increase journalists' and educators' productivity and efficiency.

Hironaka added that AI will help journalists by handling many newsroom tasks, thereby enabling journalists to focus on more in-depth reporting.

He also briefly mentioned how newsroom organizations are having dialogues with Internet companies about licensing their content for generative AI like ChatGPT, since these tools will need to constantly update the data they’re trained on.

“The Web 2.0 era has had devastating consequences as far as shifting ad revenue to Internet companies and away from traditional media,” Hironaka said.

“So in some sense, generative AI may actually force open a sustainable revenue and licensing model that did not and could not exist before.”

Rodrigo, on the other hand, moved the discussion to AI in education (AIED). The AIED field examines how AI can be utilized to create flexible learning environments that are personalized, inclusive, engaging, and effective.

Rodrigo said she is personally hopeful that generative AI tools can improve students' learning experience and make teachers' roles easier, even though in the short term tools like ChatGPT may require teachers to rethink how they structure their assignments.

While AI has great promise for these sectors, the panelists also considered the various problems that could arise from misunderstanding or abusing AI tools. These issues include amplifying the spread of disinformation, infringing on copyrights, and reinforcing biases.

They said AI can accelerate the spread of disinformation, as malicious actors can use it to create seemingly legitimate content faster than ever before.

On copyright infringement, since generative AI is trained on data gathered from across the Internet to generate its outputs, companies and individuals are already protesting the use of their original content without remuneration.

Regarding the issue of prejudices in AI, Rodrigo stated that, “Bias is almost inevitable in AI.” Thus, to prevent reproducing these biases, she recommended being aware of “the value system embodied by the AI, whose priorities and interests are being represented by the AI, and [questioning if] these interests [are] compatible with morality and the law.”

Similarly, Linis-Dinco urged the audience to focus on what AI reveals about society.

“These kind of technologies are a part of and have an impact on wider systems of power, economic relations, and resource consumption. So a comprehensive understanding and critique of these technologies must take these issues into account,” Linis-Dinco insisted.

The experts additionally agreed that the tools require their own brand of regulation that should be crafted with the aid of academics and practitioners who understand the technology.

Ligot summed up the discussion: “In my view, if something is potentially harmful, we have to regulate it.”

Christian Samonte, PCS director and market intelligence specialist, then closed the event. “As we leave this symposium, I encourage every one of us to carry forward the knowledge and insights gained here. Let us embrace the opportunities that AI presents, while criticizing it and remaining vigilant about its potential pitfalls.”
