How to Write a Cybersecurity Policy When You Don't Have a Security Team
Cybersecurity • Updated • 9 min read

You don't need a CISO to have a cybersecurity policy. You need a one-page document that tells your team what's expected — and what to do when something goes wrong.

Most people hear "cybersecurity policy" and picture a hundred-page binder stamped by a compliance officer. That image keeps small and mid-sized companies from ever writing one. The 2024 Hiscox Cyber Readiness Report found that 41% of small businesses experienced a cyber incident that year, yet fewer than half had any written security policy at all. The gap isn't expertise. It's a formatting problem. People think a cybersecurity policy requires a CISO to write it. It doesn't. It requires someone who knows how the business actually operates to write down what's expected and what happens when something goes wrong.

A cybersecurity policy is not a technical document. It's a behavioral one. It tells your team: here's how we handle passwords, devices, data, and incidents. Here's where the line is. Here's what to do when you're not sure. That's it. One page. Maybe two. Reviewed once a year. If your business has employees, contractors, or anyone who touches a keyboard on your behalf, you need one.

What a cybersecurity policy actually covers

Strip away the compliance jargon and a cybersecurity policy answers five questions:

  1. What are employees allowed to do with company systems? (Acceptable use)
  2. How do we protect access to accounts and data? (Authentication and password requirements)
  3. What are the rules for devices — company-owned and personal? (Device management)
  4. What does someone do when something looks wrong? (Incident reporting)
  5. How does security work outside the office? (Remote work and travel)

That's the entire scope for most businesses under $50M in revenue. You don't need a section on penetration testing methodology or network segmentation architecture. You need your team to know that reusing passwords across services is not acceptable, that lost devices get reported immediately, and that suspicious emails go to a specific person — not into the void.

The FTC's cybersecurity guidance for small businesses reinforces this. Their framework doesn't start with firewalls. It starts with training and expectations. The technology layer matters, but only after the behavioral layer is in place. A company with a $200,000 security stack and no written policy is less secure than a company with a one-page document and a team that follows it.

A single printed page of a cybersecurity policy on a clean desk next to a laptop and a cup of coffee, seen from above
A cybersecurity policy doesn't need to be long. It needs to be clear, accessible, and followed.

Template structure: one page that works

Here's the structure I recommend to every business that doesn't have a dedicated security team. Each section is two to four sentences. The goal is a document short enough that every employee actually reads it.

1. Purpose and scope

One sentence: "This policy applies to all employees, contractors, and vendors who access [Company] systems, data, or networks." That's it. Don't write a preamble about the evolving threat landscape. Everyone knows.

2. Acceptable use

State what company devices and accounts are for. State what they're not for. Be specific about personal use — most companies allow reasonable personal use during breaks, and saying so prevents ambiguity. Call out anything that's an automatic policy violation: installing unapproved software, disabling security tools, sharing credentials.

3. Authentication and passwords

Require a password manager. Require multi-factor authentication on every account that supports it. Set a minimum password length (NIST 800-63B now recommends a minimum of at least 15 characters) and ban password reuse across services. Don't require periodic password changes — NIST dropped that recommendation in 2017 because it leads to weaker passwords. Require changes only after a suspected compromise.
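
If you automate onboarding or account-provisioning checks, the length-and-reuse rules above reduce to a few lines of code. A minimal sketch — the function and constant names are illustrative, not part of any standard library, and `MIN_LENGTH` should be set to whatever minimum your policy chooses:

```python
# Illustrative password-policy check: minimum length plus no reuse
# across services. Names here are hypothetical, not a real API.

MIN_LENGTH = 16  # set to your policy's chosen minimum

def meets_policy(password: str, passwords_in_use: set[str]) -> tuple[bool, str]:
    """Return (ok, reason) for a candidate password."""
    if len(password) < MIN_LENGTH:
        return False, f"shorter than {MIN_LENGTH} characters"
    if password in passwords_in_use:
        return False, "already used on another service"
    return True, "ok"

if __name__ == "__main__":
    in_use = {"correct-horse-battery-staple"}
    print(meets_policy("hunter2", in_use))                       # too short
    print(meets_policy("correct-horse-battery-staple", in_use))  # reused
    print(meets_policy("a-long-and-unique-passphrase!", in_use))
```

A password manager makes both rules effortless to follow, which is why the policy requires one rather than asking people to memorize long unique strings.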

4. Device management

Require full-disk encryption on all devices that access company data. Require automatic screen lock after five minutes of inactivity. Require OS and application updates within 72 hours of release. If employees use personal devices (BYOD), state the minimum requirements — encryption, passcode, ability to remote-wipe company data. If the company doesn't allow BYOD, say so explicitly.
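
For teams that track device compliance in a spreadsheet or script rather than a full MDM platform, the rules above can be expressed as a simple check. This is a sketch under assumed field names, not a real MDM API:

```python
# Illustrative compliance check against the device rules above:
# encryption on, screen lock at five minutes or less, updates applied
# within 72 hours of release. The Device fields are hypothetical.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Device:
    disk_encrypted: bool
    screen_lock_minutes: int
    last_update_applied: datetime
    latest_update_released: datetime

PATCH_WINDOW = timedelta(hours=72)

def compliance_issues(d: Device, now: datetime) -> list[str]:
    """Return a list of policy violations for one device (empty = compliant)."""
    issues = []
    if not d.disk_encrypted:
        issues.append("full-disk encryption disabled")
    if d.screen_lock_minutes > 5:
        issues.append("screen lock exceeds five minutes")
    window_expired = now - d.latest_update_released > PATCH_WINDOW
    if window_expired and d.last_update_applied < d.latest_update_released:
        issues.append("updates more than 72 hours behind release")
    return issues
```

Even this level of automation beats an annual audit: run it weekly and the "review annually" problem becomes a "fix this device this week" problem.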

5. Incident reporting

This is the most important section. Name a specific person or role to contact. Provide a phone number and email — not just a Slack channel that might go unread. State the expectation: report anything suspicious within one hour. No penalties for false alarms. Emphasize that delayed reporting is worse than a false positive. List examples of what counts as an incident: phishing emails, lost devices, unexpected MFA prompts, unauthorized access, any request that asks for credentials outside of normal workflow.

6. Remote work and travel

Require a VPN on public Wi-Fi. Prohibit accessing company systems from shared or public computers. State the expectation for physical privacy — no working on sensitive documents in coffee shops where screens are visible to the room. For international travel, add a note about notifying IT (or whoever manages access) before departure, since login locations affect security monitoring.

Close-up of a keyboard with a small padlock resting on the keys, shallow depth of field with soft ambient lighting
Device management and authentication rules protect the business at its most common entry points.

Add an AI section — this is no longer optional

In 2024, Cisco's Data Privacy Benchmark Study found that 27% of organizations had banned the use of generative AI tools at some point, and that employees were entering confidential data into public AI platforms regardless. Samsung banned ChatGPT internally after engineers uploaded proprietary source code. The problem isn't that employees are being careless. It's that AI tools are useful, people want to use them, and without a policy there's no shared understanding of what's acceptable.

Your cybersecurity policy now needs a section that addresses:

  • Approved AI tools: Name the specific tools your team is allowed to use. If the answer is "none," say so. If the answer is "only these two," list them.
  • Data restrictions: State explicitly which categories of data cannot be entered into AI tools — customer PII, financial records, proprietary code, strategic plans, anything covered by an NDA. This applies to all AI platforms, including ones embedded in tools your team already uses.
  • Output review: AI-generated content that represents the company (emails to clients, published content, reports) must be reviewed by a human before it goes out. State who is responsible for that review.
  • Account management: If the company pays for AI tools, those accounts belong to the company. Personal accounts used for work create shadow IT problems. Draw the line.
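
The data-restrictions rule is also the easiest one to back with a lightweight technical check. A minimal sketch of a pre-submission scan — the patterns below are illustrative only, not a complete PII detector, and a real deployment would use a proper DLP tool:

```python
# Illustrative pre-submission scan for restricted data before text is
# pasted into an AI tool. Patterns are examples, not exhaustive.
import re

RESTRICTED_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "US SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card-like number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def restricted_matches(text: str) -> list[str]:
    """Return labels of restricted-data patterns found in the text."""
    return [label for label, pat in RESTRICTED_PATTERNS.items() if pat.search(text)]

if __name__ == "__main__":
    print(restricted_matches("Quarterly revenue grew 12%."))  # clean
    print(restricted_matches("Contact jane.doe@example.com about the invoice"))
```

A check like this won't catch a strategy memo or proprietary code — those still depend on the written policy and training — but it catches the most mechanical mistakes before the data leaves your control.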

If you need a deeper framework for how to think about AI governance, I wrote a full guide on how to build an AI policy for your team that goes beyond the cybersecurity angle into workflow integration, training, and evaluation.

Before AI / Now with AI

Before AI, a cybersecurity policy was static. You wrote it, distributed it, and hoped people followed it. Enforcement was mostly reactive — you found out someone was using weak passwords after an incident, not before. Policy compliance was checked during annual reviews or audits, if it was checked at all. The document itself gathered dust in a shared drive.

Now with AI, enforcement can be continuous. Automated tools scan for policy violations in real time: weak passwords, unpatched systems, unauthorized software installations, anomalous login patterns. AI-powered email security flags phishing attempts before they reach inboxes. Endpoint detection tools use behavioral analysis to catch threats that signature-based antivirus misses. This doesn't replace the policy — it makes the policy enforceable at scale.

But AI also creates new policy surface area. The use of unauthorized AI tools to process company data is now one of the fastest-growing shadow IT risks. Every time someone pastes a client contract into a free-tier AI chatbot, that data enters a system the company doesn't control, doesn't audit, and may not even know about. The policy has to address both sides: AI as an enforcement mechanism and AI as a new vector that the policy itself must govern.

This is why the foundational security practices still come first. AI tools don't replace the basics. They amplify whatever foundation is already there — solid or shaky.

A split desk setup showing a printed document on one side and a laptop screen displaying a security dashboard with green status indicators on the other
AI-powered tools can enforce policy continuously, but the written document still defines what "compliance" means.

How to keep it alive

A policy that nobody reads is theater. Here's how to prevent that:

  • Keep it short. If your policy exceeds two pages, cut it. Every sentence should pass the test: "Would a new employee need this to know how to behave on day one?" If not, it belongs in a procedures manual, not the policy.
  • Review annually. Put it on the calendar. Review it in January or whenever your fiscal year starts. Update it for new tools, new risks, new team structures. One meeting. Thirty minutes.
  • Make acknowledgment part of onboarding. Every new employee reads and signs the policy before they get system access. Not after. Before.
  • Test the incident reporting path. Once a quarter, verify that the phone number works, the email is monitored, and the named contact still holds that role. An incident reporting chain that ends at a departed employee's inbox is worse than no chain at all.
  • Don't punish good-faith reports. The single fastest way to kill a security culture is penalizing someone for reporting a mistake. If an employee clicks a phishing link and reports it in ten minutes, that's a success story. If they hide it for three weeks because they're afraid of consequences, that's a breach.

The goal is a document that your team treats as a reference, not a formality. The systems-before-you-need-them principle applies directly here. The time to write this is before an incident forces you to, when you have the clarity to think about it without the pressure of an active problem.

You don't need a security team to start

A dedicated CISO or security team is valuable. If your business is at the stage where you can hire one, do it. But the absence of a security team is not a reason to operate without a policy. The person who writes it should be whoever understands the day-to-day operations well enough to make the rules practical. In most small and mid-sized companies, that's the founder, the operations lead, or the office manager — not a security specialist.

Write the one-page version. Distribute it. Review it in twelve months. That single action puts your business ahead of the majority of companies your size. The policy doesn't need to be perfect. It needs to exist, be readable, and be followed.


Spider web with hot pink bioluminescent dew drops in dark — cybersecurity policy interconnections by Amelia Gagne
A cybersecurity policy for a small team doesn't need to look like an enterprise document. It needs to answer three questions: what's protected, who's responsible, and what happens when something goes wrong.

Frequently asked questions

How long should a cybersecurity policy be?

One to two pages for most businesses without a dedicated security team. The goal is a document that every employee actually reads. If you need detailed procedures — step-by-step instructions for specific tools, vendor management rules, or compliance checklists — put those in a separate procedures manual and reference it from the policy. The policy sets expectations. The procedures manual explains how to meet them.

Do I need a lawyer to write a cybersecurity policy?

Not for the initial version. Write the operational document yourself — you understand your business, your tools, and your team better than outside counsel does. If your industry has specific compliance requirements (healthcare, finance, government contracting), have a lawyer review the final version to ensure it meets regulatory language requirements. But the first draft should come from someone who knows how work actually gets done.

What's the difference between a cybersecurity policy and a compliance framework?

A cybersecurity policy is an internal document that tells your team what's expected. A compliance framework (SOC 2, HIPAA, PCI-DSS, ISO 27001) is an external standard that tells auditors what you've implemented. Your policy is one input into compliance, but compliance frameworks cover far more ground — risk assessments, vendor management, access controls, logging, and ongoing monitoring. Start with the policy. It makes compliance work easier later.

Should I include consequences for policy violations?

Yes, briefly. State that violations may result in disciplinary action up to and including termination, consistent with your existing HR policies. Don't create a separate punishment matrix inside the cybersecurity policy. The key is distinguishing between negligence (repeatedly ignoring MFA requirements after training) and honest mistakes (clicking a phishing link and immediately reporting it). The first is a performance issue. The second is exactly the behavior you want.

How do I handle a cybersecurity policy for contractors and vendors?

Include them in the scope statement. Any person or entity that accesses your systems, data, or networks should be bound by the same expectations. For vendors, the policy requirements can be referenced in your service agreements. For contractors, include the policy in onboarding alongside your NDA and independent contractor agreement. If a contractor refuses to follow your password or device management requirements, that's information you need before they have access — not after.

What should I do if an employee violates the AI section of the policy?

Treat it as a training opportunity first. Most AI policy violations aren't malicious — they're someone trying to do their job faster with a tool they didn't realize was off-limits. Address it directly, document it, and update training if the violation reveals a gap in clarity. If the violation involved entering sensitive data into an unauthorized platform, you also need to assess the exposure: what data was shared, with which platform, and what that platform's data retention and training policies are. That assessment is the priority, not the disciplinary conversation.
