How Phishing Actually Works and How to Spot It
Phishing isn't just bad grammar in a Nigerian prince email anymore. AI-generated phishing is personalized, grammatically perfect, and harder to spot than ever.

Employee training matters, but it's not a complete email security strategy. Here's what else you should have in place — and how AI has changed what "suspicious" looks like.
Most email security advice starts and ends with the same sentence: "Don't click suspicious links." It's well-intentioned. It's also dangerously incomplete. The FBI's Internet Crime Complaint Center reported $2.9 billion in business email compromise losses in 2023 alone — and the vast majority of those incidents didn't involve anyone clicking a link at all. They involved an employee responding to what looked like a perfectly normal email from their CEO, their vendor, or their attorney.
Employee training matters. Nobody is arguing otherwise. But treating training as a complete email security strategy is like treating seatbelts as a complete vehicle safety system. You need the airbags, the crumple zones, and the brake assist too. Here's what a real email security posture actually includes — and why the threat landscape shifted permanently when AI entered the picture.
Before an email ever reaches a human inbox, three protocols should be working together to verify that the sender is who they claim to be. Most organizations have heard of at least one of them. Very few have all three configured correctly.
Sender Policy Framework is a DNS record that says "these mail servers are authorized to send email for this domain." When your mail server receives an incoming message claiming to be from example.com, it checks example.com's SPF record. If the sending server isn't on the list, the message fails the check.
The concept is simple: a whitelist published in DNS. The reality is that SPF alone isn't sufficient. It only checks the envelope sender (the return-path), not the "From" address that humans actually see in their inbox. An attacker can spoof the visible From address while using a different envelope sender that passes SPF just fine.
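The IP-matching step itself is mechanical. Here is a minimal sketch, assuming a hypothetical record and ignoring the include:, redirect=, a:, mx:, and macro mechanisms a real SPF resolver must also handle:

```python
import ipaddress

def spf_ip_allowed(spf_record: str, sending_ip: str) -> bool:
    """Toy SPF check: does sending_ip match any ip4:/ip6: mechanism?
    Real resolvers also follow include:, a:, mx:, redirect=, etc."""
    ip = ipaddress.ip_address(sending_ip)
    for mechanism in spf_record.split():
        if mechanism.startswith(("ip4:", "ip6:")):
            network = ipaddress.ip_network(mechanism.split(":", 1)[1], strict=False)
            if ip in network:
                return True
    # No mechanism matched: the record's "all" qualifier decides (-all = hard fail).
    return False

record = "v=spf1 ip4:192.0.2.0/24 -all"       # hypothetical published record
print(spf_ip_allowed(record, "192.0.2.55"))    # True: inside the authorized /24
print(spf_ip_allowed(record, "203.0.113.9"))   # False: would fail against -all
```

Note that nothing in this check ever looks at the From address a human sees — which is exactly the gap described above.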
DomainKeys Identified Mail adds a cryptographic signature to outgoing email. The sending server signs the message with a private key; the receiving server checks that signature against a public key published in DNS. If the signature verifies, two things are confirmed: the message actually came from the claimed domain, and nobody modified it in transit.
DKIM is stronger than SPF because it's tied to the message content itself, not just the sending server's IP address. But it has a limitation too — it doesn't tell the receiving server what to do when a check fails. That's where the third protocol comes in.
Domain-based Message Authentication, Reporting, and Conformance ties SPF and DKIM together and adds an enforcement policy. A DMARC record tells receiving mail servers: "Here's what to do when an email claims to be from my domain but fails authentication." The options are none (monitor only), quarantine (send it to spam), or reject (drop it entirely).
DMARC also introduces alignment — requiring the domain in the visible From address to match the domain that passed SPF or DKIM. This closes the gap that SPF leaves open. And it generates reports, so domain owners can see who's sending email on their behalf, legitimately or otherwise.
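Both pieces — the published policy and the alignment rule — are simple to sketch. The following assumes a hypothetical record and DMARC's default "relaxed" alignment, where the From domain only needs to share its registrable (organizational) domain with the authenticated one:

```python
def parse_dmarc(record: str) -> dict:
    """Split a DMARC TXT record into its tag=value pairs."""
    return dict(tag.strip().split("=", 1) for tag in record.split(";") if "=" in tag)

def org_domain(domain: str) -> str:
    """Crude registrable-domain guess (last two labels).
    Real implementations consult the Public Suffix List."""
    return ".".join(domain.lower().split(".")[-2:])

def aligned(from_domain: str, authenticated_domain: str) -> bool:
    """Relaxed DMARC alignment: same organizational domain."""
    return org_domain(from_domain) == org_domain(authenticated_domain)

policy = parse_dmarc("v=DMARC1; p=reject; rua=mailto:reports@example.com")
print(policy["p"])                                  # reject
print(aligned("example.com", "mail.example.com"))   # True: same org domain
print(aligned("example.com", "attacker.net"))       # False: fails alignment
```

The rua= tag in the record is where the aggregate reports mentioned above get sent.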
According to a 2024 analysis by Valimail, only 28.5% of domains worldwide had a DMARC record at enforcement level (quarantine or reject). The rest were either monitoring-only or had no DMARC at all. If your domain doesn't have DMARC at enforcement, anyone can send email that appears to come from you — and your customers' mail servers will deliver it.
Authentication protocols verify sender identity. Email filtering evaluates the message itself. These are different jobs, and you need both.
Modern email security gateways — whether built into your provider like Microsoft Defender for Office 365 or Google Workspace's built-in protections, or layered on top through a third-party service — analyze incoming messages for known malware signatures, suspicious attachments, URL reputation, sender reputation, and behavioral anomalies. The good ones sandbox attachments (detonating them in an isolated environment before delivery) and rewrite URLs to route clicks through a scanning proxy.
Quarantine policies matter as much as the filtering itself. A well-configured quarantine sends borderline messages to a holding area where an admin can review them, rather than delivering them to the inbox with a yellow warning banner that everyone ignores. The goal is to reduce the number of decisions you're asking humans to make — because every decision is a place where mistakes happen.
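URL rewriting, for instance, is conceptually simple: every link in the message body is replaced with a link into a scanning proxy, so reputation is checked at click time rather than delivery time. A toy sketch, with a made-up proxy hostname:

```python
import re
import urllib.parse

PROXY = "https://safelinks.example-gateway.com/scan?url="  # hypothetical proxy endpoint

def rewrite_urls(body: str) -> str:
    """Route every http(s) link through a click-time scanning proxy."""
    return re.sub(
        r"https?://\S+",
        lambda m: PROXY + urllib.parse.quote(m.group(0), safe=""),
        body,
    )

msg = "Review the invoice at http://totally-legit-invoices.net/pay today"
print(rewrite_urls(msg))
```

The practical benefit: a URL that was clean at delivery but weaponized an hour later still gets caught when the user clicks.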
This connects directly to the baseline security measures every business should have in place. Email filtering isn't advanced. It's foundational.
Business email compromise is the most financially damaging category of cybercrime in the United States, and it works precisely because it doesn't look like a cyberattack. There's no malware. No attachment. No suspicious link. Just a well-crafted email from what appears to be a trusted person — a CEO, a vendor, an attorney — asking someone to do something reasonable: update payment details, wire funds to a new account, send over a file.
The reason BEC works is that it exploits trust and authority, not technology. The attacker either compromises an actual email account (through credential phishing or password reuse) or spoofs one convincingly enough that the recipient doesn't question it. The "attack" is a conversation. And the damage is a bank transfer.
Detection requires looking at behavioral signals rather than technical indicators. Unusual payment requests. Changes to established processes. Urgency language ("I need this handled before end of day, I'm in a meeting"). First-time wire destinations. These patterns are detectable — but not by a spam filter looking for malware signatures. They require a different class of analysis, one that understands business context.
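The signals just listed can be expressed as rules. This is a toy scorer over those patterns — urgency language, a first-seen payee, a change to an established process — with every phrase list and vendor name invented for illustration; production systems learn these baselines per sender and recipient rather than hardcoding them:

```python
URGENCY_PHRASES = ("before end of day", "urgent", "i'm in a meeting", "right now")
KNOWN_PAYEES = {"acme supplies", "northwind legal"}  # hypothetical vendor history

def bec_risk_signals(body: str, payee: str, changes_payment_details: bool) -> list:
    """Return the behavioral red flags present in a payment request."""
    text = body.lower()
    signals = []
    if any(phrase in text for phrase in URGENCY_PHRASES):
        signals.append("urgency language")
    if payee.lower() not in KNOWN_PAYEES:
        signals.append("first-time payee")
    if changes_payment_details:
        signals.append("change to established payment process")
    return signals

flags = bec_risk_signals(
    "I need this handled before end of day, I'm in a meeting.",
    payee="Coastal Holdings LLC",
    changes_payment_details=True,
)
print(flags)  # all three signals fire
```

None of these checks involve malware signatures — which is the point: BEC detection is about deviation from business norms, not file analysis.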
This is where email security intersects with the broader principle described in how phishing actually works: the most effective attacks don't exploit software vulnerabilities. They exploit human decision-making under pressure.
Three years ago, business email compromise had a telltale: it often read wrong. Grammar mistakes. Awkward phrasing. Slightly off formatting. Trained employees could spot the difference between a real CFO email and a spoofed one because the language didn't quite match. The Nigerian prince email was a caricature, but even sophisticated BEC attempts often had detectable seams.
That era is over.
Generative AI eliminated the language barrier for attackers entirely. Large language models produce flawless, context-appropriate business English — or any other language — on demand. An attacker can feed an AI model a sample of a CEO's writing style from public posts and interviews, and get back emails that are indistinguishable from the real thing. The old advice of "look for grammar mistakes" is now useless against any attacker with access to a free AI chatbot.
But language is only the beginning. Deepfake voice technology now requires as little as three seconds of sample audio to produce a convincing clone, according to research published by McAfee in 2023. A multinational company in Hong Kong lost $25 million in early 2024 after an employee attended a video call where every other participant — including the CFO — was a deepfake. The employee saw familiar faces, heard familiar voices, and followed instructions that seemed entirely routine. Deloitte's Center for Financial Services projected deepfake-related fraud losses reaching $40 billion by 2027.
The threat landscape now includes flawless AI-written phishing at scale, cloned voices, and deepfake video calls — and the numbers reflect it. Deepfake fraud attempts increased 1,300% between 2022 and 2024, according to data from Onfido's Identity Fraud Report. The cost per successful BEC attack averaged $137,132 in 2023, per the FBI's IC3 data — and that's the average, not the ceiling.
The implication for email security is straightforward: you cannot rely on human judgment to detect AI-generated deception. The quality is too high, the speed is too fast, and the attack surface now extends beyond email into voice and video. The infrastructure layer — SPF, DKIM, DMARC, email filtering, behavioral analysis — is no longer a nice-to-have. It's the only layer that scales against AI-powered threats.
Putting this together, a defensible email security strategy has five layers, not one:

1. Sender authentication — SPF, DKIM, and DMARC at enforcement, so spoofed mail is rejected before delivery.
2. Email filtering and quarantine — malware scanning, attachment sandboxing, URL rewriting, and sender reputation.
3. Behavioral analysis — detection tuned to BEC patterns like unusual payment requests and first-time wire destinations.
4. Process controls — out-of-band verification for any request that moves money or changes payment details.
5. Employee training — the human backstop for whatever gets through the first four.
Notice that training is layer five, not layer one. It's the last line of defense, not the first. Everything above it exists to reduce the number of threats that ever require a human judgment call. The fewer decisions you ask people to make about potentially malicious email, the fewer opportunities there are for mistakes.
This principle extends to how you configure HTTP security headers and every other technical control: the best security reduces the attack surface before human decision-making enters the picture.
Here's the calculation most organizations haven't done: if your phishing simulation click rate is 3% — which is considered good — and your company receives 10,000 external emails per month, that's 300 potential click-throughs on anything that makes it to an inbox. Your filtering doesn't need to be perfect. But it does need to exist, and it needs to catch the vast majority of threats before they arrive.
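The arithmetic is worth making explicit, using hypothetical but typical numbers:

```python
monthly_external_email = 10_000
simulation_click_rate = 0.03  # 3% — considered a "good" simulation result

# If filtering let everything through, expected monthly click-throughs:
unfiltered_clicks = monthly_external_email * simulation_click_rate
print(round(unfiltered_clicks))  # 300

# A filter that stops 99% of threats leaves 1% for human judgment:
filter_catch_rate = 0.99
residual_clicks = monthly_external_email * (1 - filter_catch_rate) * simulation_click_rate
print(round(residual_clicks))    # 3
```

Two orders of magnitude of risk reduction come from the filter, not from training — which is why the layers are ordered the way they are.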
The cost of deploying SPF, DKIM, and DMARC is effectively zero — they're DNS records. The cost of configuring email filtering properly is a few hours of administrative time. The cost of establishing verification procedures for financial transactions is a policy document and a team meeting. None of this is expensive. None of it is complicated. All of it meaningfully reduces risk in ways that training alone never will.
"Don't click the link" is a fine thing to tell people. It's just not a security strategy.
You can check SPF and DMARC records using free tools like MXToolbox or dmarcian. Enter your domain and they'll show you what's published in DNS. DKIM is slightly harder to check externally because you need to know the selector (a label specific to your email provider), but your email provider's documentation will tell you what it should be. If you use Google Workspace or Microsoft 365, both have setup guides for all three protocols. The important thing isn't just that the records exist — it's that DMARC is set to quarantine or reject, not just "none."
Phishing simulations are still worth running — with realistic expectations. They're useful for establishing a baseline, identifying teams or roles that need additional support, and keeping awareness current. They're not useful as a standalone security measure, and they become counterproductive if they're punitive. The goal is to make reporting easy, not to catch people failing. Organizations that punish employees for clicking simulated phishing links end up with underreporting of real incidents — which is worse than the problem the simulation was trying to solve.
If you suspect a BEC attempt, don't reply to the email, and don't use any contact information provided in the suspicious message. Instead, contact the apparent sender through a separately verified channel — a phone number you already have on file, a direct message on an internal platform, or an in-person conversation. If the request involved a financial transaction that was already executed, contact your bank immediately. BEC wire transfers can sometimes be recalled if reported within 24-48 hours. Report the attempt to your IT team (or your managed security provider) and to the FBI's IC3 at ic3.gov.
Defensive AI has some structural advantages. It can analyze every incoming message at scale, build behavioral baselines for every sender and recipient in your organization, and flag deviations in real time — things no human team could do manually. But it's not a silver bullet. AI-powered defenses and AI-powered attacks are in a continuous escalation cycle, and the defense will always lag behind novel attack techniques by some margin. The correct approach is defense in depth: AI-powered filtering as one layer among several, combined with authentication, process controls, and human judgment. No single layer is sufficient. All of them together are resilient.