What Happens When You Get Hacked: A Realistic Timeline
Cybersecurity • Updated • 8 min read


Most businesses imagine a breach as a dramatic event. The reality is quieter, slower, and more expensive than the movies suggest. Here's what actually happens, hour by hour.

Most people picture a hack as a single dramatic moment — alarms blaring, skulls on screens, someone in a hoodie typing furiously. The reality is closer to a slow leak in a pipe behind drywall. By the time you notice the damage, the water has been running for days.

The IBM Cost of a Data Breach Report (2024) puts the global average cost of a breach at $4.88 million. For small and mid-sized businesses, the numbers scale down — but not proportionally. When a 30-person company loses $200,000 in incident response, legal fees, and downtime, that's existential in a way that $4.88 million isn't for a Fortune 500.

Understanding what actually happens during a breach — hour by hour, decision by decision — is the first step toward building the kind of response that keeps a bad week from becoming a shutdown. Here's a realistic timeline based on how breaches typically unfold.

Hour 0: initial access

Most breaches don't start with a sophisticated exploit. They start with a credential. A phishing email that looked legitimate enough to click. A password reused from a compromised site. An exposed VPN login with no multi-factor authentication. An employee responding to a text from someone impersonating IT.

According to Verizon's 2024 Data Breach Investigations Report, stolen credentials and phishing remain the top initial access vectors, accounting for the majority of breaches. The attacker doesn't need to be exceptional. They need one valid credential and one unprotected entry point.

At this moment, nothing looks wrong inside the business. No alerts fire. No one notices. The attacker has a foothold — usually a single compromised workstation or email account — and they're not in a hurry.

A single unlocked padlock resting on a dark keyboard, one key faintly illuminated — representing the moment of initial access through a compromised credential
Most breaches begin quietly — a single credential, a single click, and no alarms.

Hours 1-24: reconnaissance and privilege escalation

Once inside, the attacker's first priority isn't stealing data. It's mapping the environment. What operating systems are running? What does the network topology look like? Where are the file shares, the databases, the backups? Who has admin access?

This reconnaissance, and the lateral movement that follows it, is the reason dwell time matters so much. Mandiant's M-Trends 2024 report found that median dwell time — the gap between initial compromise and detection — sits at 10 days globally. In many SMB environments without dedicated security monitoring, it's significantly longer.

During these first 24 hours, the attacker is typically:

  • Harvesting additional credentials — pulling cached passwords from the compromised machine, checking for password files or credential managers
  • Escalating privileges — moving from a standard user account to an administrator or domain admin account
  • Identifying high-value targets — financial databases, customer records, intellectual property, email archives of executives
  • Establishing persistence — creating backdoor accounts, installing remote access tools, or scheduling tasks that maintain their access even if the original entry point is closed (a defensive audit sketch follows this list)
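
To make the persistence point concrete, here's a minimal Python sketch of a defensive audit for two common signals on a single Windows machine: scheduled tasks with suspicious command lines, and the membership of the local Administrators group. The SUSPICIOUS patterns are illustrative assumptions, not a complete indicator list; real EDR tooling does this far more thoroughly.

```python
"""Quick persistence audit for a single Windows machine.

A minimal sketch, not a replacement for EDR. The SUSPICIOUS patterns
are illustrative assumptions; adapt them to your environment.
"""
import csv
import io
import subprocess

# Command-line fragments that often appear in malicious scheduled tasks
SUSPICIOUS = ("\\appdata\\", "\\temp\\", "-enc", "-encodedcommand", "rundll32")

def flagged_scheduled_tasks():
    # schtasks ships with Windows; /fo csv /v emits one verbose row per task
    out = subprocess.run(
        ["schtasks", "/query", "/fo", "csv", "/v"],
        capture_output=True, text=True, check=True,
    ).stdout
    for row in csv.DictReader(io.StringIO(out)):
        if row.get("TaskName") == "TaskName":
            continue  # /v repeats the header row between task folders
        command = (row.get("Task To Run") or "").lower()
        if any(marker in command for marker in SUSPICIOUS):
            yield row["TaskName"], row["Task To Run"]

def local_administrators() -> str:
    # Unexpected members of this group are a classic persistence move
    return subprocess.run(
        ["net", "localgroup", "administrators"],
        capture_output=True, text=True, check=True,
    ).stdout

if __name__ == "__main__":
    for name, command in flagged_scheduled_tasks():
        print(f"review task {name}: {command}")
    print(local_administrators())
```

Running something like this weekly won't catch a careful attacker, but it makes the noisy ones visible.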

The business is still operating normally. Email works. Systems are up. If anyone noticed a brief login from an unfamiliar location, it was dismissed as a glitch or forgotten by lunch.

Days 2-7: staging and exfiltration

By day two, the attacker typically has admin-level access and a clear picture of what's worth taking. Now the real damage begins — but it still looks like nothing from the outside.

In a data theft scenario, files are being copied to a staging area and exfiltrated in small batches to avoid triggering any bandwidth alerts. Customer databases, financial records, employee data, proprietary documents — transferred out steadily, often encrypted to look like normal HTTPS traffic.
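
Catching a slow drip like this is less about per-connection alerts and more about aggregate volume. Here's a minimal Python sketch that sums outbound bytes per internal host from a daily flow-log export and flags anything over a baseline. The CSV columns (src_ip, bytes_out), the file name, and the threshold are all hypothetical assumptions; every firewall and NetFlow collector exports differently.

```python
"""Flag hosts that sent unusually large volumes outbound today.

A minimal sketch against a hypothetical flow-log CSV with src_ip and
bytes_out columns. The threshold is an illustrative assumption; tune
it to your own traffic baseline.
"""
import csv
from collections import defaultdict

THRESHOLD_BYTES = 500 * 1024 * 1024  # assumed per-host daily ceiling: 500 MB

def heavy_talkers(flow_log_path):
    totals = defaultdict(int)
    with open(flow_log_path, newline="") as f:
        for row in csv.DictReader(f):
            totals[row["src_ip"]] += int(row["bytes_out"])
    return {ip: sent for ip, sent in totals.items() if sent > THRESHOLD_BYTES}

if __name__ == "__main__":
    for ip, sent in heavy_talkers("flows_today.csv").items():
        print(f"{ip} sent {sent / 1e9:.2f} GB outbound today; investigate")
```

Daily aggregation is the point: exfiltration spread over hundreds of small HTTPS connections still shows up as one abnormal total.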

In a ransomware scenario, the attacker is doing something even more methodical: locating and disabling backups. This is the step that separates a recoverable incident from a catastrophic one. Sophos's 2024 ransomware research found that attackers attempted to compromise backups in 94% of ransomware attacks. When they succeeded, the median ransom demand doubled.
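
One cheap countermeasure is a freshness check that runs somewhere a compromised domain admin account can't reach. A minimal Python sketch, assuming nightly file-based backups on an offline or immutable mount; the path, file pattern, and 26-hour window are hypothetical assumptions.

```python
"""Alert when backups are missing or stale.

A minimal sketch assuming nightly .bak files on a mount the production
network cannot write to. Path, pattern, and window are hypothetical.
"""
from datetime import datetime, timedelta
from pathlib import Path

BACKUP_DIR = Path("/mnt/offline-backups")  # hypothetical mount
MAX_AGE = timedelta(hours=26)              # nightly schedule plus slack

def newest_backup_age(backup_dir):
    backups = list(backup_dir.glob("*.bak"))
    if not backups:
        return None  # no backups at all is its own alarm
    newest = max(backups, key=lambda p: p.stat().st_mtime)
    return datetime.now() - datetime.fromtimestamp(newest.stat().st_mtime)

if __name__ == "__main__":
    age = newest_backup_age(BACKUP_DIR)
    if age is None or age > MAX_AGE:
        print("WARNING: backups missing or stale; investigate immediately")
    else:
        print(f"newest backup is {age} old")
```

The script itself isn't the point; the point is that the check lives outside the environment an attacker can reach from inside your network.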

They're also deploying the ransomware payload to every reachable machine — but not executing it yet. The encryption is staged and ready. Scheduled to trigger simultaneously across the network, often at 2 AM on a Friday or before a holiday weekend, when the fewest people are watching.

Day 7+: detonation or disclosure

This is the moment most people think of as "the hack." But by the time it becomes visible, the attacker has had a week or more of uncontested access.

In a ransomware attack, the payload executes. Every staged machine encrypts simultaneously. File extensions change. Ransom notes appear on every desktop. Systems go down. Phones start ringing. The business stops.

In a data theft scenario, the disclosure might come differently: a notification from law enforcement that your customer data appeared on a dark web marketplace. Or an extortion email threatening to release the data unless you pay. Or — worst case — you find out because your customers tell you their information has been misused.

The statistics at this stage are grim for smaller businesses. Research from the University of New Hampshire (2024) found that approximately 75% of small businesses report they could not continue operating if hit with ransomware. The National Cyber Security Alliance has reported that roughly 60% of small businesses that experience a significant cyberattack close within six months.

A dimly lit server room with blinking red indicator lights on a rack, cables neatly bundled — representing the quiet infrastructure where ransomware stages before detonation
By the time ransomware executes, the attacker has already mapped, staged, and disabled your recovery options.

Weeks 2-8: response, forensics, and legal exposure

Once a breach is detected, the timeline doesn't accelerate toward recovery. It accelerates toward cost.

Forensics. Before you can fix anything, you need to understand what happened. A forensics team has to determine the initial access vector, what was accessed, what was taken, and whether the attacker still has access. This alone can take weeks and cost tens of thousands of dollars — assuming you can engage a firm quickly enough. Demand for incident response firms spikes after major campaigns, and wait times can stretch to days.

Legal notification. Depending on your jurisdiction and the type of data compromised, you may be legally required to notify affected individuals, regulatory bodies, or both. In the U.S., all 50 states have breach notification laws with varying timelines — some require notification within 30 days. Under GDPR, the supervisory authority must be notified within 72 hours of becoming aware of the breach. Missing these deadlines adds regulatory penalties on top of everything else.

Insurance. If you have cyber insurance (and many SMBs don't), this is where you discover what your policy actually covers. Policies vary dramatically. Some cover ransom payments. Some cover forensics but not business interruption. Some require you to have met specific security standards — multi-factor authentication, endpoint detection, encrypted backups — and may deny claims if you hadn't.

Business continuity. This is the part that breaks companies. Even after the immediate threat is contained, rebuilding takes time. Restoring from backups (if they exist and weren't compromised). Rebuilding compromised machines. Reconfiguring access controls. Verifying that every system is clean before reconnecting it to the network. For a business with limited IT resources, this can mean weeks of partial or no operations.

Before AI / now with AI

Every phase of this timeline is being compressed by AI — on the attacker's side.

Before AI: An attacker who gained initial access would spend hours or days manually exploring the network. They'd run scripts, read configuration files, test credentials one at a time, and slowly build a map of the environment. Lateral movement was labor-intensive. A skilled attacker might compromise a small network in a week. A less skilled one might take longer or make enough noise to get caught.

Now with AI: AI-powered attack tools can scan a network and identify high-value targets — admin accounts, database servers, backup systems — in minutes rather than days. Machine learning models trained on thousands of network configurations can predict where the most valuable data lives and which escalation paths are most likely to succeed.

AI-generated social engineering is also changing the persistence phase. Once inside a network, attackers can use AI to craft contextually accurate internal emails — mimicking real employees' writing styles, referencing actual ongoing projects — to maintain access and expand it. A phishing email sent from a compromised internal account, written in the exact tone of the person whose account was compromised, is nearly indistinguishable from legitimate communication.

The defensive side is evolving too. AI-driven endpoint detection and network monitoring can flag anomalous behavior — unusual login times, lateral movement patterns, bulk file access — faster than any human analyst. But these tools require deployment and configuration before the breach. They don't help retroactively.
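
The underlying idea is simpler than the branding suggests. Here's a minimal Python sketch of the "unusual login times" signal: build a per-user baseline of login hours from history, then flag events outside it. The log format (user, ISO timestamp) is a hypothetical assumption, and commercial tools model far more signals than this one.

```python
"""Flag logins outside a user's established hours.

A deliberately simple sketch of the anomaly-detection idea; real
EDR and SIEM tools model many more signals. Log format is assumed.
"""
from collections import defaultdict
from datetime import datetime

def build_baseline(history):
    # history: iterable of (user, iso_timestamp) login events
    hours = defaultdict(set)
    for user, ts in history:
        hours[user].add(datetime.fromisoformat(ts).hour)
    return hours

def is_anomalous(user, ts, baseline):
    return datetime.fromisoformat(ts).hour not in baseline.get(user, set())

history = [("alice", "2024-05-01T09:14:00"), ("alice", "2024-05-02T10:02:00")]
baseline = build_baseline(history)
print(is_anomalous("alice", "2024-05-03T02:41:00", baseline))  # True: a 2 AM login
```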

The net effect: the window between initial access and serious damage is shrinking. The 10-day median dwell time will compress. Organizations that relied on slow attacker methodology as an implicit safety margin are losing that margin.

What actually reduces the damage

This timeline isn't inevitable. Every transition point — from initial access to reconnaissance, from reconnaissance to escalation, from staging to detonation — is a place where the right preparation creates friction that slows or stops the attack.

  • Multi-factor authentication stops the initial credential from being useful on its own (see the TOTP sketch after this list)
  • Network segmentation limits how far an attacker can move after getting in
  • Monitored and tested offline backups make ransomware a recoverable event instead of a business-ending one
  • Endpoint detection and response (EDR) shortens dwell time from days to hours
  • An incident response plan — written, tested, and known to the team — eliminates the panic-driven decision-making that makes the first 48 hours after detection so costly
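
The MFA point deserves to be concrete. Most authenticator apps implement TOTP (RFC 6238): the six-digit code is an HMAC over the current 30-second window, keyed by a secret that a stolen password does not include. A minimal sketch of the standard algorithm (the base32 secret below is a placeholder):

```python
"""Compute a TOTP code (RFC 6238), the scheme behind most authenticator apps.

Standard algorithm; the base32 secret below is a placeholder, not real.
"""
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, timestamp, digits=6, step=30):
    key = base64.b32decode(secret_b32)
    counter = int(timestamp // step)                      # 30-second window index
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                            # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP", time.time()))  # changes every 30 seconds
```

A stolen password fails here because the attacker never sees the shared secret, and the code they'd need changes every 30 seconds.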

None of these are exotic or enterprise-only. They're the foundational steps every business should have in place before a breach makes them urgent.

Mycelium network spreading across dark soil in macro detail, branching filaments connecting multiple nodes — representing layered defensive infrastructure that detects threats early
Layered defenses work like a network — each connection point creates friction that slows an attacker's progress.

The breach timeline is predictable. The preparation gap is what makes it devastating. If reading this prompted you to check whether your backups are actually tested, whether MFA is on everything, or whether anyone on your team knows who to call first — that's the right response. The quiet week before a breach becomes visible is the week that decides the outcome.


Shattered glass with hot pink light refracting through cracks — breach timeline by Amelia S. Gagne
A breach isn't a single event — it's weeks of discovery, remediation, notification, and legal exposure. The average ransomware-related downtime is 24 days. For thin-margin businesses, 24 days offline isn't a setback; it's a closure.
Shield with circuit board pattern and hot pink backlight — security culture by Amelia S. Gagne
IBM data shows that organizations with a tested incident response plan reduce breach costs by $232,007. The plan costs nothing but an afternoon. The savings are a quarter million dollars.


Frequently Asked Questions

How long does it take to detect a breach?

The global median is about 10 days according to Mandiant's 2024 data, but that number varies widely. Organizations with active monitoring and endpoint detection can reduce it to hours. Businesses without dedicated security monitoring often don't discover breaches for weeks or months — sometimes only when notified by law enforcement or a customer. The gap between compromise and detection is the single biggest factor in how much damage a breach causes.

Does cyber insurance cover everything in a breach?

Rarely. Policies vary significantly in what they cover — some include ransom payments, forensics, legal costs, and business interruption, while others cover only a subset. Many policies also have security baseline requirements (MFA, encrypted backups, EDR) and may deny claims if those weren't in place at the time of the breach. Read your policy now, before you need it. If you don't have cyber insurance, that's a conversation worth having with a broker who specializes in it.

Can a small business actually recover from a ransomware attack?

Yes — but only with preparation. The businesses that recover are the ones with tested offline backups, an incident response plan, and enough segmentation that the blast radius doesn't reach everything. The 60% closure statistic reflects what happens without those safeguards. Recovery is expensive and slow even in the best case (weeks, not days), but it's survivable if the foundations are in place before the attack.

Should we pay the ransom?

Law enforcement agencies including the FBI consistently advise against paying, because payment funds further attacks and doesn't guarantee data recovery. In practice, some organizations pay because they have no viable backup and the alternative is permanent closure. This is the worst possible position to negotiate from — which is why offline, tested backups are the most important item on the security foundations list. If you have working backups, the ransom question becomes irrelevant.

What should we do right now to prepare?

Three things you can start today: enable multi-factor authentication on every account that supports it, verify that your backups are stored offline (not just in the cloud connected to your network) and test a restore, and write down who your team should call first if systems go down — your IT provider, your insurance carrier, and legal counsel. These three actions address the most common failure points in the timeline above. For a more complete checklist, see five cybersecurity things every business should do first.
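
"Test a restore" can be as small as a smoke test: pull one known file from the latest backup and compare its hash to the live copy. A minimal Python sketch; the archive path, member name, and tar format are hypothetical assumptions, so adapt it to whatever your backup tool actually produces.

```python
"""Smoke-test a backup: restore one canary file and compare hashes.

Paths and the tar format are hypothetical assumptions.
"""
import hashlib
import tarfile
import tempfile
from pathlib import Path

def sha256(path):
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def restore_matches(archive, member, live_copy):
    # Extract just the canary file into a scratch directory, then compare
    with tempfile.TemporaryDirectory() as tmp, tarfile.open(archive) as tar:
        tar.extract(member, path=tmp)
        return sha256(Path(tmp) / member) == sha256(live_copy)

print(restore_matches("backup-latest.tar.gz",
                      "docs/invoice-template.docx",       # hypothetical canary file
                      "/data/docs/invoice-template.docx"))
```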

Work With Us

Need help building this into your operations?

Kief Studio builds, protects, automates, and supports full-stack systems for businesses up to $50M ARR.
