Friday, May 23, 2025

Report says AI creating fake content for cyberattacks at rapid rate

A new report from software behemoth Microsoft has revealed that AI has started to lower the technical bar for fraud and cybercrime actors looking for their own productivity tools, making it easier and cheaper to generate believable content for cyberattacks at an increasingly rapid rate.

The “Cyber Signals” report said AI software used in fraud attempts runs the gamut — from legitimate apps misused for malicious purposes to more fraud-oriented tools used by bad actors in the cybercrime underground.

It reported that AI tools can scan and scrape the Web for company information, helping cyberattackers build detailed profiles of employees or other targets to create highly convincing social engineering lures.

“In some cases, bad actors are luring victims into increasingly complex fraud schemes using fake AI-enhanced product reviews and AI-generated storefronts, where scammers create entire websites and e-commerce brands, complete with fake business histories and customer testimonials,” the report said.

“By using deepfakes, voice cloning, phishing emails, and authentic-looking fake websites, threat actors seek to appear legitimate at wider scale.”

According to the Microsoft Anti-Fraud Team, AI-powered fraud attacks are happening globally, with much of the activity coming from China and Europe — specifically Germany, due in part to its status as one of the largest e-commerce and online services markets in the European Union (EU).

The larger a region’s digital marketplace, the greater the proportional volume of attempted fraud it is likely to see, the report said.

E-commerce fraud

The study said fraudulent e-commerce websites can be set up in minutes using AI and other tools requiring minimal technical knowledge.

“Previously, it would take threat actors days or weeks to stand up convincing websites. These fraudulent websites often mimic legitimate sites, making it challenging for consumers to identify them as fake,” it noted.

“Using AI-generated product descriptions, images, and customer reviews, scammers dupe customers into believing they are interacting with a genuine merchant, exploiting consumer trust in familiar brands.”

The report said AI-powered customer service chatbots add another layer of deception by convincingly interacting with customers.

“These bots can delay chargebacks by stalling customers with scripted excuses and manipulating complaints with AI-generated responses that make scam sites appear professional,” it said.

Image from Microsoft

Job and employment fraud

The study also observed that the rapid advancement of generative AI has made it easier for scammers to create fake listings on various job platforms.

“They generate fake profiles with stolen credentials, fake job postings with auto-generated descriptions, and AI-powered email campaigns to phish job seekers. AI-powered interviews and automated emails enhance the credibility of job scams, making it harder for job seekers to identify fraudulent offers,” it said.

To prevent this, the report said job platforms should introduce multifactor authentication for employer accounts to make it harder for bad actors to take over legitimate hirers’ listings and use available fraud-detection technologies to catch suspicious content.

“Fraudsters often ask for personal information, such as resumes or even bank account details, under the guise of verifying the applicant’s information. Unsolicited text and email messages offering employment opportunities that promise high pay for minimal qualifications are typically an indicator of fraud,” it said.

The study added: “Employment offers that include requests for payment, offers that seem too good to be true, unsolicited offers or interview requests over text message, and a lack of formal communication platforms can all be indicators of fraud.”

Image from Microsoft

Tech support scams

According to the report, there has also been a rise in tech support scams, a type of fraud in which scammers trick victims into paying for unnecessary technical support services to fix device or software problems that don’t exist.

“The scammers may then gain remote access to a computer — which lets them access all information stored on it and on any network connected to it — or install malware that gives them access to the computer and sensitive data,” it explained.

Tech support scams are a case where elevated fraud risks exist, even if AI does not play a role, the report said.

Consumer protection tips

The report said fraudsters exploit psychological triggers such as urgency, scarcity, and trust in social proof. Consumers should be cautious of:

  • Impulse buying — Scammers create a sense of urgency with “limited-time” deals and countdown timers.
  • Trusting fake social proof — AI generates fake reviews, influencer endorsements, and testimonials to appear legitimate.
  • Clicking on ads without verification — Many scam sites spread through AI-optimized social media ads. Consumers should cross-check domain names and reviews before purchasing.
  • Ignoring payment security — Avoid direct bank transfers or cryptocurrency payments, which lack fraud protections.
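The advice to cross-check domain names can be partly automated. As a rough illustration only (the brand watch list and distance threshold below are assumptions, not part of the report), a simple edit-distance check can flag lookalike domains such as a zero substituted for the letter “o”:

```python
def levenshtein(a: str, b: str) -> int:
    # Classic dynamic-programming edit distance, computed one row at a time.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]


# Hypothetical watch list for illustration; real checks use far larger sets.
KNOWN_BRANDS = {"microsoft.com", "paypal.com", "amazon.com"}


def looks_like_typosquat(domain: str, brands=KNOWN_BRANDS) -> bool:
    # Flag domains close to, but not exactly matching, a known brand domain.
    d = domain.lower()
    return any(0 < levenshtein(d, b) <= 2 for b in brands)
```

A domain like `micros0ft.com` sits one edit away from the real brand and would be flagged, while an unrelated domain would not.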

The report said job seekers should verify employer legitimacy, be on the lookout for common job scam red flags, and avoid sharing personal or financial information with unverified employers.

  • Verify employer legitimacy — Cross-check company details on LinkedIn, Glassdoor, and official websites to verify legitimacy.
  • Notice common job scam red flags — If a job requires upfront payments for training materials, certifications, or background checks, it is likely a scam. Unrealistic salaries or no-experience-required remote positions should be approached with skepticism. Emails sent from free webmail domains rather than an official company domain are also typically indicators of fraudulent activity.
  • Be cautious of AI-generated interviews and communications — If a video interview seems unnatural, with lip-syncing delays, robotic speech, or odd facial expressions, it could be deepfake technology at work. Job seekers should always verify recruiter credentials through the company’s official website before engaging in any further discussions.
  • Avoid sharing personal or financial information — Under no circumstances should you provide a Social Security number, banking details, or passwords to an unverified employer.
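The red flags above lend themselves to a simple screening heuristic. The sketch below is illustrative only — the keyword and free-domain lists are assumptions, not from the report, and real fraud detection is far more involved:

```python
# Illustrative lists; a production system would maintain much larger ones.
FREE_MAIL_DOMAINS = {"gmail.com", "yahoo.com", "outlook.com", "hotmail.com"}
PAYMENT_KEYWORDS = ("upfront payment", "processing fee", "training fee", "deposit")


def job_offer_red_flags(sender_email: str, message: str, channel: str = "email") -> list[str]:
    """Return the red flags, if any, found in a job offer."""
    flags = []
    domain = sender_email.rsplit("@", 1)[-1].lower()
    if domain in FREE_MAIL_DOMAINS:
        flags.append("recruiter address uses a free email domain")
    text = message.lower()
    if any(keyword in text for keyword in PAYMENT_KEYWORDS):
        flags.append("offer requests an upfront payment")
    if channel == "text":
        flags.append("unsolicited offer over text message")
    return flags
```

An unsolicited text-message offer from a free email address asking for a training fee would trip all three checks, while a normal interview request from a company domain would trip none.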
