Building a Security Culture When Everyone Thinks It's IT's Job
Cybersecurity • Updated • 9 min read


Sixty percent of breaches involve the human element. Not zero-day exploits. Not nation-state attackers. People — clicking links, reusing passwords, sharing credentials, falling for well-crafted pretexts. That number comes from the Verizon Data Breach Investigations Report, and it has held steady for years. Technology alone can't fix that. Firewalls don't prevent someone from wiring $250,000 to a spoofed vendor. Endpoint detection doesn't stop an employee from handing over their MFA code to a caller who sounds exactly like the help desk.

Most organizations respond to this reality with awareness training. Annual phishing simulations, a compliance video no one watches twice, maybe a poster in the break room. And then leadership wonders why people still click things. The problem isn't that employees don't know phishing exists. The problem is that security lives in a silo — somewhere between IT and compliance — and everyone else treats it like someone else's responsibility.

Security culture means everyone knows their role. Not just the person who manages the firewall.

Why "Security Is Everyone's Job" Fails

You've heard the phrase. It's on every CISO's slide deck. And it almost never works, because it's too vague to act on.

Telling a marketing coordinator that "security is everyone's job" without telling them what that means for their Tuesday afternoon is like telling someone to "be healthy" without mentioning diet, sleep, or exercise. It's directionally correct and operationally useless.

The phrase fails for three reasons:

  • No clear behaviors attached. People can't do "security." They can verify a wire transfer request by calling the requester on a known number. They can report a suspicious email without deleting it. They can lock their laptop when they walk away. Those are behaviors. "Security" is an abstraction.
  • No feedback loop. When someone does the right thing — flags a phishing attempt, questions an unusual request — nothing happens. No acknowledgment, no data on whether it mattered. The behavior extinguishes because it was never reinforced.
  • Punishment asymmetry. In most organizations, clicking a phishing link triggers consequences. Reporting one triggers nothing. This trains people to hide mistakes rather than surface them, which is the exact opposite of what security requires.

A real security culture replaces the slogan with systems. It makes the right behaviors visible, easy, and rewarded.

Security culture starts when the entire team sees security as part of their work, not a checkbox managed by IT.

Five Things That Actually Build Security Culture

These aren't theoretical. They're the patterns that show up in organizations where employees report phishing at four times the rate of organizations that only run annual training — a finding consistent across multiple studies, including research published by the SANS Security Awareness program.

1. Make security visible

If the only time employees hear about security is when something goes wrong, security becomes associated with punishment and inconvenience. Flip that.

Share metrics. "Last month, our team reported 14 suspicious emails. Three of them were real phishing attempts that never reached a second person." That's a win. Make it visible. Put it in the all-hands. Add it to Slack. When people see that their reports lead to outcomes, reporting becomes a habit rather than an afterthought.

Visibility also means leadership participation. If the CEO doesn't attend the security briefing, everyone below them gets the message that it's optional. Culture flows downward. If you've already started thinking about the foundational things every business should do first, visibility is what turns those foundations into daily practice.

2. Make reporting easy and safe

Every extra step between "this looks suspicious" and "I reported it" is a step where someone decides it's not worth the effort. One-click reporting buttons in email clients. A dedicated Slack channel. A phone number. Whatever fits your team's workflow: the specific tool matters less than how little friction it adds.

More important than the mechanism: make it safe. "No-blame reporting" can't be a policy that exists on paper while managers privately penalize the person who clicked. People will test whether reporting is actually safe. The first time someone reports a mistake and gets thanked instead of reprimanded, that story travels. It becomes the proof that the policy is real.
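For teams that live in Slack, the reporting channel really can be one click (or one function call) away. A minimal sketch in Python, assuming a standard Slack incoming webhook; the URL is a placeholder, and `build_report` and `send_report` are illustrative names, not a real library:

```python
import json
from datetime import datetime, timezone
from urllib import request

# Placeholder: swap in your workspace's real incoming-webhook URL
# (or the equivalent endpoint for whatever chat tool your team uses).
WEBHOOK_URL = "https://hooks.slack.com/services/EXAMPLE"

def build_report(reporter: str, subject: str, note: str = "") -> dict:
    """Format a suspicious-email report as a webhook payload."""
    lines = [
        f":rotating_light: Suspicious email reported by {reporter}",
        f"Subject: {subject}",
        f"Reported at: {datetime.now(timezone.utc).isoformat(timespec='seconds')}",
    ]
    if note:
        lines.append(f"Note: {note}")
    return {"text": "\n".join(lines)}

def send_report(payload: dict) -> None:
    """POST the report to the channel where someone checks daily."""
    req = request.Request(
        WEBHOOK_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    request.urlopen(req)  # network call; wrap in try/except in production
```

The point of the sketch is the shape, not the tool: one input, one destination, zero decisions for the person reporting.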

3. Celebrate catches, don't punish mistakes

This is the hardest shift for most organizations, because the instinct after a near-miss is to find who was at fault. Resist that instinct.

When someone reports a phishing email, that's a successful detection. Treat it like one. When someone admits they clicked a link and immediately reported it, that's fast incident response. The worst outcome isn't that someone clicked — it's that someone clicked and didn't tell anyone for three days because they were afraid of the reaction.

Organizations with blameless incident cultures — borrowed from the same principles that make aviation and healthcare safer — consistently outperform punitive ones in detection speed and containment. Your people are your sensors. Don't train them to go dark.

4. Bake it into onboarding

Security training that happens once a year, disconnected from any real context, doesn't stick. But security training that happens on someone's first week, alongside "here's your laptop and here's how we work" — that becomes part of how they understand the organization.

Day-one onboarding should cover: how to report something suspicious, what a phishing attempt actually looks like (not the 2015 version with typos — the 2026 version that's indistinguishable from a real email), who to call if something feels off, and what the company's security policy says in plain language.

First impressions set defaults. If security is part of someone's first impression of the company, it becomes a default behavior rather than an add-on they encounter six months in.

5. Make it part of performance

What gets measured gets managed. If security behaviors appear nowhere in performance reviews, goal-setting, or team metrics, they will always lose to the things that do.

This doesn't mean penalizing people for getting phished. It means recognizing teams that maintain clean credential hygiene, acknowledging managers who build security awareness into their team rhythms, and including "follows security protocols" as a baseline expectation alongside "meets deadlines" and "communicates clearly."

For organizations that run async-first or distributed teams, this is especially critical. When you can't see someone locking their laptop, you need cultural norms strong enough to operate without line-of-sight supervision.

The easier you make it to report something suspicious, the more reports you get — and every report is a detection opportunity.

The Security Champions Model for Small Teams

Enterprise companies have dedicated security teams. A 20-person company doesn't. But a 20-person company can have security champions — people embedded in each team or function who serve as the first point of contact for security questions, help reinforce practices in their area, and relay ground-level concerns back to whoever manages security overall.

A security champion isn't a second job. It's a role that takes one to two hours a month: attending a brief monthly sync, reviewing any new risks relevant to their function, and being the person their teammates ask when something seems off. The champion doesn't need to be technical. They need to be trusted, consistent, and willing to ask questions.

For small teams, even one champion outside of IT changes the dynamic. Security stops being "that department's thing" and becomes part of how each team operates. It's the organizational equivalent of having a first-aid kit in every room instead of only at the front desk.

Before AI / Now With AI

Before AI, social engineering had tells. Phishing emails had awkward grammar. Spoofed calls had audio artifacts. Fake invoices had formatting inconsistencies. Awareness training could teach people to spot these signals, and that training worked — imperfectly, but measurably.

Now with AI, those tells are disappearing. AI-generated phishing emails are grammatically flawless and contextually relevant. Deepfake voice cloning can replicate a CEO's voice from a few minutes of public audio — and deepfake-enabled fraud has increased 1,300% between 2022 and 2025, according to research from Sumsub's identity fraud report. AI-generated video is being used in fake meeting calls to authorize transactions. The attack surface hasn't just expanded — the quality floor has risen dramatically.

This is the shift that makes security culture non-optional. When the email looks perfect, the voice sounds right, and the video call appears legitimate, the only defense is a team that instinctively verifies through a second channel. Not because they spotted something wrong — because verification is how things are done here, always, regardless of how legitimate something looks.

Training alone can't keep up with AI-powered social engineering. By the time you've trained people to spot the current generation of attacks, the next generation is already better. Culture fills the gap that training can't: the organizational instinct to verify, question, and confirm before acting on any request that involves money, credentials, or access — no matter how convincing the request appears.

The protocols that matter now:

  • Callback verification on a known number for any financial transaction, access change, or credential request — never using the contact information provided in the request itself.
  • Multi-person authorization for wire transfers, vendor changes, and access grants above a defined threshold.
  • A culture-level norm that questioning a request is never rude, never insubordinate, and always expected — even when the request comes from the CEO.
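The multi-person authorization rule can be expressed in a few lines. A hedged sketch in Python: the threshold, the approver count, and the `TransferRequest` class are illustrative (not a real payments API), but the invariants are the point — a requester can never approve their own request, and large amounts need two distinct people.

```python
from dataclasses import dataclass, field

# Illustrative policy values; set these to whatever your own policy defines.
DUAL_APPROVAL_THRESHOLD = 10_000.00
REQUIRED_APPROVERS = 2

@dataclass
class TransferRequest:
    requester: str
    amount: float
    approvals: set = field(default_factory=set)

    def approve(self, approver: str) -> None:
        # The requester can never count as one of their own approvers.
        if approver != self.requester:
            self.approvals.add(approver)

    def authorized(self) -> bool:
        """One approval below the threshold; two distinct people at or above it."""
        needed = REQUIRED_APPROVERS if self.amount >= DUAL_APPROVAL_THRESHOLD else 1
        return len(self.approvals) >= needed
```

A spoofed "CEO" asking for a $250,000 wire fails this check no matter how convincing the voice sounds: the request stays unauthorized until a second real person signs off.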

When AI removes the tells that training taught people to spot, organizational instinct becomes the primary defense layer.

Where to Start

If you're reading this and thinking "we don't have any of this" — that's fine. Most organizations don't. Start with three things this week:

  1. Create a reporting channel. Slack channel, shared inbox, phone number. Something with zero friction that someone checks daily.
  2. Celebrate the first report publicly. When someone flags something — even if it's a false positive — thank them in front of the team. That single moment sets the tone for everything that follows.
  3. Add security to your next onboarding. Ten minutes. Three slides. "Here's how to report something. Here's what phishing looks like now. Here's our one rule: always verify."

Security culture isn't a project with a launch date. It's a set of habits that compound over time. Every report that gets acknowledged, every near-miss that gets discussed without blame, every new hire who learns the norms on day one — each one makes the next incident slightly less likely and the response to it slightly faster.

The firewall protects your network. Culture protects everything else.


Shield emblem with circuit board pattern and hot pink backlight — building security culture by Amelia S. Gagne
Security culture means the default assumption is 'this could be a threat' rather than 'nothing has happened yet.' That shift in default is worth more than any tool you can buy.

Frequently Asked Questions

What is security culture and why does it matter?

Security culture is the set of shared values, behaviors, and norms that determine how everyone in an organization approaches security in their daily work. It matters because 60% of data breaches involve the human element, according to the Verizon DBIR. Technology controls are necessary but insufficient — people make decisions every day that either strengthen or weaken an organization's security posture, and culture shapes those decisions.

How do I build a security culture in a small business without a security team?

Start with three actions: create a low-friction reporting channel, celebrate the first person who uses it, and add a short security orientation to your onboarding process. Then designate at least one security champion outside of IT — someone trusted on the team who spends one to two hours per month reinforcing security awareness in their area. You don't need a dedicated team to have a culture. You need visible, consistent, rewarded behaviors.

Why doesn't security awareness training work on its own?

Training teaches people to recognize specific attack patterns, but AI-powered social engineering evolves faster than training cycles can update. Deepfake voice calls, AI-written phishing emails, and synthetic video meetings have removed many of the tells that training traditionally relied on. Training remains important, but it needs to be reinforced by cultural norms — like always verifying financial requests through a known second channel — that work regardless of how convincing the attack is.

What is a security champions program?

A security champions program designates one person in each team or department as a security point of contact. Champions aren't security experts — they're trusted team members who attend a monthly security sync, stay aware of current risks, and serve as the first person colleagues ask when something seems off. For small organizations, even one champion outside of IT changes the dynamic from "security is IT's job" to "security is part of how we work."

How do AI-generated deepfakes change the threat landscape for businesses?

Deepfake-enabled fraud increased 1,300% between 2022 and 2025. AI can clone a person's voice from minutes of public audio, generate convincing video for fake meeting calls, and write phishing emails that are contextually accurate and grammatically perfect. The practical impact: employees can no longer rely on "does this look/sound real?" as a security heuristic. Organizations need verification protocols — callback on known numbers, multi-person authorization for financial actions — that work even when the request appears completely legitimate.

How do I measure whether my security culture is working?

Track phishing report rates (not just click rates), time-to-report after a simulation or real incident, and the ratio of self-reported incidents to those discovered by tooling. Organizations with strong security cultures see phishing reporting rates improve by up to four times with consistent engagement. A rising report rate is a positive signal — it means people trust the process enough to use it.
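These three metrics are simple enough to compute from an incident log. A sketch, assuming a hypothetical record format of (received timestamp, report timestamp or None, discovered-by); your own log schema will differ:

```python
from datetime import datetime, timedelta
from statistics import median

# Hypothetical incident records: (received_at, reported_at or None, found_by),
# where found_by is "employee" or "tooling".
incidents = [
    (datetime(2025, 3, 1, 9, 0), datetime(2025, 3, 1, 9, 12), "employee"),
    (datetime(2025, 3, 3, 14, 0), None, "tooling"),
    (datetime(2025, 3, 7, 11, 0), datetime(2025, 3, 7, 11, 5), "employee"),
]

def report_rate(rows) -> float:
    """Share of incidents someone actually reported."""
    return sum(1 for _, reported, _ in rows if reported) / len(rows)

def median_time_to_report(rows) -> timedelta:
    """Median delay between an email landing and the first report."""
    deltas = [reported - received for received, reported, _ in rows if reported]
    return median(deltas)

def self_report_ratio(rows) -> float:
    """Share of incidents surfaced by people rather than discovered by tooling."""
    by_people = sum(1 for *_, source in rows if source == "employee")
    return by_people / len(rows)
```

Watch the trend, not the absolute numbers: a rising report rate and a shrinking time-to-report are the clearest signs the culture is taking hold.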

Work With Us

Need help building this into your operations?

Kief Studio builds, protects, automates, and supports full-stack systems for businesses up to $50M ARR.
