We’re seeing artificial intelligence (AI) pop up everywhere, helping us make choices, connect with others, and find information. Think about how AI suggests what to watch next, helps companies decide who to hire, or even creates news articles.
These AI tools, especially the ones that can generate text, images, and videos (called generative AI), are getting really good, really fast. But as they become more convincing, something crucial is becoming harder to maintain: trust.
Imagine AI creating a video of a celebrity endorsing a product they’ve never used, or making up news stories that look completely real. This isn’t science fiction anymore.
We’ve already seen AI used to create fake social media posts attributed to Filipino influencers, aimed at swaying opinion about the West Philippine Sea. It shows how dangerous AI’s ability to produce realistic-looking content can be.
This raises some serious questions. If AI can be used to twist the truth, who decides what’s real and what’s fake? What happens when people with bad intentions use AI to spread lies so quickly that we can’t even keep up?
This is why human reflection is the first step towards using AI responsibly.
We need ethical guidelines for AI. It’s not enough for AI to just work efficiently. It needs to be fair, its decisions need to be understandable, and it needs to be used honestly.
Having a “human in the loop” isn’t just about getting approval or using certain computer programs. It means putting our values and a sense of responsibility at the heart of every process that involves AI, whether it’s hiring someone, helping a customer, or creating content.
This isn’t just a job for tech experts or governments. It’s a leadership responsibility. CEOs, teachers, online personalities, and politicians all need to think carefully about their roles in a world where it’s easy to fake influence and where trust is easily broken.
Now is the time to focus on teaching people how to navigate the digital world, communicating openly and honestly, and creating rules that protect people rather than take advantage of them.
As I was finishing my book, Smarter with AI, these issues felt very real. I saw how AI could boost creativity, speed things up, and reach more people. But I also realized how easily we could forget the human impact if we’re not careful. AI might help us build things faster, but if we lose things like truth, fairness, and respect for people along the way, then we’ve lost more than we’ve gained.
So, what can leaders do right now?
Start by looking closely at where and how AI is being used in your organization. Create a clear set of ethical rules for AI that everyone understands, both for internal use and for tools that customers use.
Teach your team how to spot and deal with misinformation. Always double-check information before sharing it, especially during elections. Be wary of content designed to provoke strong emotions, and always check where your information is coming from.
Trust is the foundation of everything. And in a world increasingly powered by AI, it’s the one thing that still has to come from us — from human beings.