In February 2025, various news outlets reported on an emerging threat to Google Gmail users: cybercriminals were employing artificial intelligence to refine phishing messages, rendering them far more persuasive than before.
The system gathered publicly accessible information from social media platforms, websites, and discussion boards online. It then crafted emails that convincingly mimicked correspondence from known contacts, relatives, or colleagues.
Additionally, the AI produced appropriate domain names to serve as the apparent origins of these messages. This tactic earned the name 'Deepphish,' combining elements of deep learning and phishing techniques.
Although the initial coverage raised questions, such as why Gmail users in particular were targeted, it underscored a trend long anticipated by specialists: criminal organizations are adopting AI to sharpen their cyber operations.
Traditional phishing efforts have long been undermined by obvious sender details. Recipients can often spot fakes through mismatched email addresses.
For instance, an alert supposedly from a service like Netflix or Disney arriving from a generic domain such as 'brandbot.com' is obviously suspicious, regardless of the message's polished design.
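This kind of mismatch check is easy to automate. The following minimal Python sketch flags a message whose sender domain does not belong to the brand it claims to represent; the brand-to-domain mapping and the sample header are illustrative assumptions, not a complete allowlist:

```python
# Minimal sketch: flag a mismatch between the brand a message claims
# to come from and the actual sender domain. The mapping below is an
# illustrative assumption, not a complete allowlist.
from email.utils import parseaddr

KNOWN_BRAND_DOMAINS = {
    "netflix": {"netflix.com"},
    "disney": {"disney.com", "disneyplus.com"},
}

def sender_matches_brand(from_header: str, brand: str) -> bool:
    """Return True if the From address uses a domain known for the brand."""
    _, address = parseaddr(from_header)          # extract "info@brandbot.com"
    domain = address.rsplit("@", 1)[-1].lower()  # -> "brandbot.com"
    return domain in KNOWN_BRAND_DOMAINS.get(brand, set())

print(sender_matches_brand("Netflix <info@brandbot.com>", "netflix"))  # False
```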
In contrast, AI-driven phishing leverages advanced algorithms to create sender addresses and linked URLs that align seamlessly with the email's narrative.
A team under Alejandro Correa Bahnsen at Cyxtera Technologies, a data center provider in the United States, examined the potency of these methods.
They created the Deepphish algorithm, training it on over one million historical phishing URLs gathered from email campaigns to recommend fitting domains.
The training incorporated two distinct perpetrator profiles to simulate varied attack scenarios.
Typically, phishing attempts reveal themselves through improbable sender addresses, such as a Disney notification arriving from a generic domain like 'brandbot.com.'
Using AI-crafted addresses, the researchers boosted attack success rates from 0.69 percent to 20.9 percent in one scenario and from 4.91 percent to 36.28 percent in the other.
These outcomes were detailed in their 2018 research paper 'DeepPhish: Simulating Malicious AI.'
Originally denoting the Cyxtera algorithm, 'Deepphish' now broadly describes AI-assisted phishing schemes.
Such operations adhere to a consistent sequence. Initially, the AI scans the target's personal network via social media, forums, and leaked data from corporate breaches or site hacks. Greater data volume enables finer customization of the phishing content.
In their analysis, the Cyxtera researchers explored how AI-selected sender domains raise phishing success rates.
Subsequently, attackers secure a relevant domain and derive a sender email via tools like Deepphish.
The AI composes the message body, including a pertinent subject, personalized greeting, and realistic wording that fits the impersonated sender's style.
This level of customization lends exceptional believability compared to generic scams.
The objective of Deepphish campaigns is to build sufficient trust for victims to interact with attachments or hyperlinks.
Each interaction triggers the next automated step: opened attachments install malware, while links lead to counterfeit sites that solicit sensitive data such as credit card numbers or streaming credentials.
Deepphish represents an early example, but numerous tools now automate phishing creation for offenders.
Examples include FraudGPT, WormGPT, and GhostGPT, which produce tailored emails targeting people or businesses.
Users might direct these to simulate a Netflix prompt for account verification on a bogus page.
They also handle queries like cracking Wi-Fi security or coding keyloggers to transmit keystrokes remotely.
Tools like WormGPT rely on AI for polished, deceptive phishing communications, often customized for individuals or firms.
Mainstream models like ChatGPT incorporate safeguards against harmful prompts, and their closed-source nature prevents attackers from modifying them. Yet jailbreak prompts circulated on the dark web can still trick these models into bypassing their restrictions.
Meanwhile, some groups adapt open-source large language models by stripping safety features.
The Stopwatch AI platform illustrates how far AI-assisted malware generation has come. It enables crafting code that circumvents leading antivirus defenses in three stages.
First, under 'Choose Platform,' users pick the target system; the options span Mac, Windows, Linux, AWS, Google Cloud, and Microsoft Azure.
The Stopwatch AI interface simplifies AI-assisted malware development, starting with selection of the target operating system.
Second, 'Choose Defence' lists nine antivirus solutions, such as Microsoft Defender, ESET Endpoint Protection Advanced, McAfee Endpoint Security, Symantec Endpoint Security, and Kaspersky Endpoint Security for Business.
This phase targets specific antivirus vulnerabilities for exploitation.
Third, 'Choose Attack' lets users select the malware type, including adware, spyware, ransomware, keyloggers, and data exfiltration.
Stopwatch AI provides ten malware categories, from keyloggers to ransomware, requiring user registration to proceed.
Upon selection, the site prompts for credentials, allowing sign-up via Google, GitHub, or Microsoft accounts. Post-registration, AI generates the malware.
Usage demands acceptance of terms prohibiting real-world attacks, positioning the tool for educational malware research only.
All generated projects link to user accounts for storage.
To counter such threats, scrutinize sender addresses for authenticity and watch for red flags such as unexpected requests, artificial urgency, or links to unfamiliar domains.
Antivirus software refreshes its threat database daily from vendor servers, cataloging traits of recent malware for effective blocking.
Yet this defense is weakening. Dark web kits have long enabled non-experts to build malware, and many of the resulting samples are slight tweaks of known strains that evade detection because their signatures no longer match, which explains the roughly 560,000 new variants that vendors report every day.
AI takes this further by intelligently rewriting code so that it no longer matches the familiar patterns antivirus engines are trained to recognize.
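Why trivial modifications defeat exact signatures is easy to see in a toy example. The following minimal Python sketch assumes a local set of known-bad SHA-256 hashes standing in for a vendor's signature database; changing a single byte of the payload already breaks the match:

```python
# Minimal sketch of hash-based signature matching. The known-bad set is
# an illustrative stand-in for a vendor's signature database.
import hashlib

KNOWN_BAD_HASHES = {
    hashlib.sha256(b"malware-sample-bytes").hexdigest(),  # hypothetical sample
}

def is_known_malware(payload: bytes) -> bool:
    """Exact-match lookup, as in classic signature scanning."""
    return hashlib.sha256(payload).hexdigest() in KNOWN_BAD_HASHES

print(is_known_malware(b"malware-sample-bytes"))  # True: exact match
print(is_known_malware(b"malware-sample-bytez"))  # False: one byte changed
```

Real engines use many signature types beyond file hashes, but the core limitation is the same: code that has been rewritten, even slightly, no longer matches an exact pattern.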
Acronis demonstrated this by submitting a malware sample to VirusTotal: initially, nine engines flagged it, but after AI-assisted modification with Grok 3, only one still detected it. Further rewrites with Gemini 2.0 Flash and DeepSeek R1 rendered it invisible to all engines.
Hackers can thus refine malware for near-total evasion, depending on the AI employed.
Still, antivirus heuristics and behavioral analysis can identify AI-altered threats.
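Heuristics look at traits rather than exact patterns. One classic example, sketched below in Python, flags files whose byte distribution is nearly random, a common sign of packed or encrypted payloads; the 7.5-bit threshold is an illustrative assumption, not a vendor's actual cutoff:

```python
# Minimal sketch of an entropy heuristic: packed or encrypted payloads tend
# to have near-random byte distributions, so unusually high Shannon entropy
# is a common trigger for closer inspection. The threshold is illustrative.
import math
import os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits of entropy per byte: 0.0 for uniform data, up to 8.0 for random."""
    counts = Counter(data)
    total = len(data)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def looks_packed(data: bytes, threshold: float = 7.5) -> bool:
    return shannon_entropy(data) > threshold

print(looks_packed(os.urandom(4096)))  # True: random bytes resemble packing
print(looks_packed(b"A" * 4096))       # False: uniform data, entropy 0.0
```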
Classic email spoofing has declined since around 2014, when providers began adopting standards such as SPF, DKIM, and DMARC, which prevent forging the sender domain.
For an address at the domain 'pcworld.com,' for example, the receiving server can verify that the message genuinely originates from that domain; mails that fail these checks are typically routed to spam.
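Whether a domain publishes such policies can be checked with an ordinary DNS query. Here is a minimal sketch using the third-party dnspython package (pip install dnspython); the domain is just an example:

```python
# Minimal sketch: fetch a domain's published SPF and DMARC policies from DNS.
# Requires the third-party dnspython package (pip install dnspython).
import dns.resolver

def txt_records(name: str) -> list[str]:
    """Return all TXT records for a DNS name, or an empty list."""
    try:
        answers = dns.resolver.resolve(name, "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return []
    return [b"".join(rdata.strings).decode() for rdata in answers]

domain = "pcworld.com"  # example domain
spf = [r for r in txt_records(domain) if r.startswith("v=spf1")]
dmarc = [r for r in txt_records(f"_dmarc.{domain}") if r.startswith("v=DMARC1")]
print("SPF:", spf)
print("DMARC:", dmarc)
```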
What remains possible is changing the display name (in clients such as Outlook, this is configurable under the account settings) or hijacking the reply-to field. Attackers also register lookalike domains such as 'pcworId.com,' which replaces the lowercase 'l' with a capital 'I.'
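Such lookalikes can be caught mechanically by normalizing commonly confused characters before comparing domains. A minimal Python sketch follows; the confusables table is a small illustrative sample, while real tools use much larger Unicode confusables lists:

```python
# Minimal sketch: detect lookalike domains by normalizing commonly confused
# characters before comparison. The table is a small illustrative sample.
CONFUSABLES = str.maketrans({
    "I": "l",  # capital I mimics lowercase l
    "0": "o",  # digit zero mimics letter o
    "1": "l",  # digit one mimics lowercase l
})

def normalize(domain: str) -> str:
    return domain.translate(CONFUSABLES).lower()

def is_lookalike(candidate: str, expected: str) -> bool:
    """True if the domains differ but collide once confusables are normalized."""
    return (candidate.lower() != expected.lower()
            and normalize(candidate) == normalize(expected))

print(is_lookalike("pcworId.com", "pcworld.com"))  # True: capital I for l
print(is_lookalike("pcworld.com", "pcworld.com"))  # False: identical
```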
This piece draws from content in our affiliate outlet PC-WELT, adapted from its German original.
Roland Freist, an independent technology writer, specializes in Windows, software, networking, cybersecurity, and web developments.