3 emerging AI-powered cyber threats and how to stay protected from them in 2025
AI has penetrated deep into our lives, for better and for worse. Things have reached the point where one in 10 adults worldwide has fallen victim to an AI voice-cloning scam, and 77% of those victims have lost money.
Organizations, governments, and individuals are still working out policies and practices to stay shielded from this new double-edged sword. Staying protected is difficult because threat actors are well ahead of the curve with their malicious intentions, while policymakers and the custodians of cybersecurity are falling badly behind. With advanced generative AI tools, deception has become easier than ever for attackers.
As per a report by Fortinet, AI-powered automated scans have risen 16.7% year-over-year, averaging 36,000 scans per second. This has translated into a steep 42% rise in credential-based attacks and the release of 1.7 billion stolen credentials on the dark web. Automation at this scale lets bad actors probe and abuse cyber systems at hyper-speed, outpacing what traditional defenses can counter.
According to Microsoft’s Anti-Fraud Team, AI-backed cyberattacks are being reported across the world, with China and Germany among the countries hit hardest. Both are notable hubs for e-commerce and online services, and the bigger a region’s digital marketplace, the more fraud tends to grow with it.
This blog looks at the top three emerging AI-backed cyber threats and how you can thwart them in time.
E-commerce fraud
Previously, it would take threat actors weeks to design a new website. Today, it’s trivial to spin up an e-commerce site using AI and other tools that require no deep technical expertise: input a few prompts, and it’s done. These sites look legitimate because they closely mimic the originals, making it hard for consumers to tell they are fake.
Malicious actors generate detailed product descriptions, professional-looking graphics, and believable customer reviews that dupe visitors into thinking they are on a genuine platform. Victims end up paying for products or services that never existed, exposing their personal and financial information on top of losing the money spent on the so-called purchase.
For example, in April 2025, scammers targeted UK consumers by advertising fake Bonmarché ‘shop closing’ sales on Facebook, linking to counterfeit websites. Victims paid and never received any products.
These scams are also fueled by AI-powered customer service chatbots, which add another layer of deception through chat interactions. The bots use polished, polite language to string people along, delaying refunds and keeping the site looking genuine and trustworthy for as long as possible.
Job and employment fraud
In job and employment fraud, threat actors create fake listings on various job platforms. The scam works by phishing job seekers: fraudsters build fake recruiter profiles with stolen credentials, post jobs with AI-generated descriptions, and even run AI-driven email campaigns to push traffic to the fake postings. Some go as far as arranging AI-powered interviews over calls and video meetings, leaving job seekers with little reason to doubt the offer.
In this kind of scam, fraudsters ask for personal information, such as resumes or bank account details, under the pretext of verifying the applicant’s information.
So, if you get a random text or email offering a job that pays a lot but doesn’t require any real skills, it’s probably a scam.
Be careful if the job offer asks you to pay money, seems too good to be true, comes out of nowhere, or doesn’t use proper emails or company platforms, as these are all big red flags.
Tech support scams
In tech support scams, cyber actors trick targets into paying for unnecessary technical support services that claim to fix a software or device problem that may not actually exist. The real goal is to gain remote access to the victim’s device, harvest critical data, or install malware for further malicious activity.
In fact, in early June 2025, two sophisticated call centers were raided and shut down, as they were involved in impersonating support lines from companies like Microsoft and Apple. They targeted Japanese citizens with fake virus pop-ups telling them that their devices were compromised, prompting them to call the bogus tech support. The scammers gained remote access to devices, insisted on urgent payment, and coerced victims into transferring money to mule accounts or buying crypto and gift cards. The total scam amount exceeded USD 144,000.
Cyber hygiene practices against AI-backed cybercrimes
Here’s what you can do to avoid becoming a victim of AI-powered deception:
Strengthen employer authentication
Attackers often impersonate or spoof the domains of reputable companies to send job offers. So, if you own a domain, shield it with SPF, DKIM, and DMARC. With these three email authentication protocols in place, emails sent from unauthorized sources are far less likely to land in targets’ primary inboxes; they get rejected or marked as spam instead, keeping your brand name from being dragged into malicious campaigns.
Additionally, as an employer, maintain a single official domain (e.g., careers.yourcompany.com) for all hiring-related communication.
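Not sure whether your own domain already publishes these records? A plain DNS lookup is enough to find out. Here’s a minimal sketch in Python using the third-party dnspython package (`pip install dnspython`); the domain name is a placeholder, and DKIM is left out because its records live under per-selector names (e.g., selector1._domainkey.yourcompany.com) that vary by mail provider.

```python
# Minimal sketch: check whether a domain publishes SPF and DMARC records.
# Requires the third-party dnspython package; the domain below is a placeholder.
import dns.resolver


def get_txt_records(name: str) -> list[str]:
    """Return all TXT strings published at `name`, or [] if none exist."""
    try:
        answers = dns.resolver.resolve(name, "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return []
    # A TXT record may be split into multiple quoted chunks; join them.
    return [b"".join(rdata.strings).decode() for rdata in answers]


def check_email_auth(domain: str) -> None:
    spf = [r for r in get_txt_records(domain) if r.startswith("v=spf1")]
    dmarc = [r for r in get_txt_records(f"_dmarc.{domain}") if r.startswith("v=DMARC1")]

    print("SPF:  ", spf[0] if spf else "missing - unauthorized servers can spoof you")
    print("DMARC:", dmarc[0] if dmarc else "missing - receivers have no policy to enforce")


if __name__ == "__main__":
    check_email_auth("yourcompany.com")  # placeholder: use your own domain
```

You can run the same checks from any terminal with `dig TXT yourcompany.com` and `dig TXT _dmarc.yourcompany.com`.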
Monitor for AI-based recruitment scams
When you’re on an online interview call, watch out for these red flags:
- Zero personalization, such as not referring to your resume or asking follow-ups to your answers.
- Template-like responses, such as saying ‘That’s great to hear’ after everything you say.
- The same enthusiasm and facial expression maintained throughout the meeting.
- Slight delays in response timing, because tiny lags are inevitable when AI generates responses.
Be wary of too-good-to-be-true job opportunities
If a job opportunity looks way too lucrative, there’s a high chance it’s bogus. Verify it through an independent channel, such as calling the support number listed on the company’s official website or emailing an address published there.
Avoid sharing personal information with unverified sources
Watch out for warning signs in job ads: requests to pay money, insistence on chatting only over WhatsApp or text messages, personal Gmail addresses instead of official company emails, or instructions to contact someone on a personal phone number. These are strong signs that the job could be fake.
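For readers comfortable with a little code, two of those checks (a personal webmail sender and a look-alike company domain) are easy to automate. The sketch below uses only the Python standard library; the webmail list, the similarity threshold, and the domain names are illustrative assumptions, not a complete rule set.

```python
# Minimal sketch: flag suspicious sender addresses on job offer emails.
# The webmail list, threshold, and domains are illustrative assumptions.
from difflib import SequenceMatcher

FREE_WEBMAIL = {"gmail.com", "yahoo.com", "outlook.com", "hotmail.com"}


def flag_sender(sender: str, official_domain: str) -> list[str]:
    """Return red flags raised by a job offer's sender address."""
    flags = []
    domain = sender.rsplit("@", 1)[-1].lower()

    # Red flag 1: a personal webmail address instead of a company domain.
    if domain in FREE_WEBMAIL:
        flags.append(f"personal webmail address ({domain})")

    # Red flag 2: a look-alike domain. High similarity to the official
    # domain without an exact match is typical of typosquatting.
    similarity = SequenceMatcher(None, domain, official_domain).ratio()
    if domain != official_domain and similarity > 0.7:
        flags.append(f"look-alike domain ({domain} resembles {official_domain})")

    return flags


# Example: a typosquatted recruiter address and a webmail one.
print(flag_sender("hr@yourcompamy.com", "yourcompany.com"))
print(flag_sender("recruiter@gmail.com", "yourcompany.com"))
```

Heuristics like these catch the obvious cases; they complement, rather than replace, verifying the offer through the company’s official website.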