
According to the U.S. Bureau of Labor Statistics, roughly 20% of new businesses fail within the first year and approximately 50% within five years. The ones that survive tend to pass through distinct phases — startup, growth, scaling, consolidation, and sometimes reinvention — each with different operational demands. A technology stack selected for one phase doesn't automatically serve the next. And recommendations that evaluate a company at a single point in time, without asking where it's headed, create what I think of as an artificial horizon: a decision that looks correct today but becomes a constraint tomorrow.
I identified this as one of eight variables in The Problem with Generic Tech Recommendations. Future direction and organizational roadmap is the variable that separates recommendations from advice — because a recommendation evaluates the present, and advice accounts for trajectory.
A company planning to double headcount in eighteen months has materially different technology needs than one in a consolidation phase. The difference isn't subtle — it shapes every layer of the stack.
Growth-phase organizations need tools that scale with headcount: per-seat licensing models that don't become punitive at higher counts, platforms that support increasingly complex permission structures, infrastructure that can absorb more users without architectural changes. They need onboarding flows that can handle ten new people per month, not one per quarter. They need administrative interfaces that support delegation — because the founder who configured everything personally at 15 employees can't be the configuration bottleneck at 60.
Consolidation-phase organizations need the opposite. They need tools that reduce operational complexity: platforms that can absorb the functions of multiple point solutions, administrative models that require fewer dedicated operators, contracts that flex downward without punitive minimums. They're optimizing for efficiency, not expansion.
A tool selected during growth that charges per-seat with annual minimums becomes a liability during consolidation if the headcount contracts. A tool selected during consolidation that caps at 50 users becomes a bottleneck when the company enters its next growth cycle. Neither tool is wrong — but both are wrong for the phase they weren't selected for.
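The mismatch above can be made concrete with arithmetic. This is a hypothetical sketch: the prices, tiers, and headcount trajectory are invented for illustration, not drawn from any real vendor.

```python
# Hypothetical illustration: annual cost of two licensing models across a
# growth phase followed by a consolidation phase. All prices are invented.

def per_seat_with_minimum(seats, price_per_seat=30 * 12, minimum_seats=50):
    """Per-seat contract with an annual minimum: you pay for at least
    `minimum_seats` even if headcount contracts below that."""
    return max(seats, minimum_seats) * price_per_seat

def flat_tier(seats, tiers=((50, 20_000), (100, 45_000), (250, 90_000))):
    """Flat-tier contract: pay for the smallest tier that covers headcount."""
    for cap, price in tiers:
        if seats <= cap:
            return price
    raise ValueError("headcount exceeds the tool's largest tier")

trajectory = [20, 40, 80, 60, 45]  # headcount by year: growth, then contraction
for year, seats in enumerate(trajectory, start=1):
    print(f"year {year}: {seats} seats | "
          f"per-seat: ${per_seat_with_minimum(seats):,} | "
          f"flat tier: ${flat_tier(seats):,}")
```

With these invented numbers, the per-seat contract still bills for 50 seats in year five when headcount has fallen to 45, while the flat tier drops back to its lowest band; during the year-three peak the relationship reverses. Neither model is wrong, which is exactly the point: the right choice depends on the trajectory.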
Bessemer Venture Partners published research in their State of the Cloud report showing that most SaaS companies hit operational complexity inflection points at specific revenue and headcount thresholds. The engineering parallel is well-established: systems that work at one scale often require architectural changes to work at the next.
The patterns are predictable. A startup uses a lightweight project management tool — it's simple, fast, and the five-person team doesn't need role-based permissions or audit trails. At 25 people, they need basic structure: teams, projects, permissions. The lightweight tool stretches. At 75 people, they need compliance-grade audit trails, multi-department workflows, and reporting that spans the organization. The lightweight tool breaks — not because it degraded, but because the organization grew past its design assumptions.
This isn't unique to project management. Every tool category has a scale range where it functions well and thresholds beyond which it doesn't. A CRM that handles 500 contacts elegantly may struggle at 50,000. A communication platform that works for a single-office team may fragment a multi-location organization. An accounting tool built for sole proprietors will hit compliance limitations the moment the company needs multi-entity consolidation. And a tool selected from a generic comparison won't surface any of these scaling limits.
The question isn't whether the tool works now. The question is whether it will still work at the scale you're planning to reach — and what happens if you have to replace it at that scale.
Every technology decision includes an implicit assumption about how long the tool will be in place. A tool selected for the current state, without regard for the roadmap, tends to have a shorter effective lifespan. When it's replaced, the migration costs are real and often significant.
Data migration alone — extracting data from the old system, transforming it to match the new system's schema, validating the transfer, and reconciling discrepancies — can run weeks to months depending on volume and complexity. Blissfully's 2023 SaaS Management Report found that the average organization spends $200,000 to $500,000 on major platform migrations, a total-cost-of-ownership figure that includes both direct costs and productivity loss during the transition period.
But the direct costs are only part of it. There's the disruption to the team, the retraining, the temporary productivity drop, the risk that institutional knowledge encoded in the old system's configuration doesn't survive the transfer. There's the integration work — every system connected to the old tool needs to be reconnected to the new one. And there's the opportunity cost: every hour spent on migration is an hour not spent on work that moves the business forward.
A tool that costs 20% more annually but serves the organization through its next growth phase may be dramatically cheaper than a tool that saves 20% now and requires a $300,000 migration in two years.
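The claim above is worth checking with actual numbers. This worked example uses an assumed $100,000 base annual cost and the $300,000 migration figure from the text; the specific dollar amounts are illustrative.

```python
# Hypothetical three-year total cost of ownership. All figures are invented
# (or taken from the illustrative numbers in the text), not from real tools.

cheap_tool_annual = 100_000    # the tool that saves 20% per year...
durable_tool_annual = 120_000  # ...versus the tool that survives the growth phase
migration_cost = 300_000       # forced replacement of the cheap tool in year two

# Cheap path: two years of fees, then a migration plus the durable tool's fee.
cheap_path = 2 * cheap_tool_annual + migration_cost + durable_tool_annual

# Durable path: three years of fees, no migration.
durable_path = 3 * durable_tool_annual

print(f"cheap-now path:   ${cheap_path:,}")    # $620,000
print(f"durable-now path: ${durable_path:,}")  # $360,000
```

Under these assumptions the "cheaper" tool costs roughly 1.7x more over three years, and that's before counting the indirect migration costs described above.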
The evaluation framework is straightforward once you've articulated where the organization is headed:

- Where will headcount be in three years, and does the tool's pricing and permission model still work at that size?
- Which phase is the organization entering (growth, consolidation, new markets), and is the tool built for that phase?
- How close will the organization be to the tool's scale thresholds by the end of the planning horizon?
- If the tool has to be replaced at that scale, what would the migration cost in data, integrations, retraining, and disruption?
None of these questions have perfect answers — planning is inherently uncertain. But approximate answers to the right questions produce better technology decisions than precise answers to the wrong ones. A tool selected with a three-year view, even an imperfect one, will outperform a tool selected with a today-only view almost every time.
At Kief Studio, we build with this in mind. Infrastructure designed to be outgrown is infrastructure that will need to be replaced, and replacing infrastructure is expensive. Building with the next phase in view isn't over-engineering; it's accounting for the trajectory the business is already on.
Three years is the practical minimum for most technology decisions. Many enterprise tools take six to twelve months to fully implement, which means a tool selected without a multi-year view may already be inadequate by the time it's fully operational. The precision of the plan matters less than the direction — knowing whether you're scaling up, consolidating, or entering new markets is enough to filter out tools that won't survive the transition.
Uncertainty is normal. When direction is genuinely unclear, favor tools with flexible pricing models (scale up and down without penalty), broad feature sets (cover multiple potential use cases), strong APIs (allow integration with whatever comes next), and data portability (make migration easier if needed). Avoid tools with rigid contracts, proprietary data formats, or architectures that lock you into a specific scale.
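The criteria above can be expressed as a simple screening function. This is a minimal sketch: the field names, the hard rejection on rigid contracts, and the three-of-four threshold are illustrative assumptions, not a standard rubric.

```python
# A minimal sketch of the "direction unclear" screen described above.
# Criteria and thresholds are illustrative assumptions, not a standard rubric.
from dataclasses import dataclass

@dataclass
class Tool:
    name: str
    flexible_pricing: bool   # can scale seats down without penalty
    broad_feature_set: bool  # covers multiple potential use cases
    strong_api: bool         # integrates with whatever comes next
    data_portability: bool   # exportable in non-proprietary formats
    rigid_contract: bool     # annual minimums, lock-in clauses

def passes_uncertainty_screen(tool: Tool) -> bool:
    """Favor tools that keep options open; reject hard lock-in outright."""
    if tool.rigid_contract:
        return False
    flexibility_traits = [tool.flexible_pricing, tool.broad_feature_set,
                          tool.strong_api, tool.data_portability]
    return sum(flexibility_traits) >= 3  # at least three of four traits

# Hypothetical candidate: flexible everywhere except data export.
candidate = Tool("ExampleCRM", flexible_pricing=True, broad_feature_set=True,
                 strong_api=True, data_portability=False, rigid_contract=False)
print(passes_uncertainty_screen(candidate))  # True
```

The design choice worth noting is that a rigid contract is a veto rather than one factor among several: when direction is unclear, the one thing you cannot afford is a tool you can't exit.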
The signals that a tool is reaching its limits are consistent: workarounds multiply, the team spends more time managing the tool than using it, reporting gaps require manual data compilation, and new hires express surprise at the limitations. The most reliable signal is when the administrative burden of the tool grows faster than the team. If the tool required one administrator at 20 people and now requires three at 50, it's scaling linearly with headcount instead of absorbing the growth — and the next doubling will make the problem acute.
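That last signal reduces to a ratio check. The numbers below mirror the one-admin-at-20, three-admins-at-50 example from the text; the function itself is just an illustrative sketch.

```python
# Hypothetical check for the administrative-burden signal described above:
# is the admins-per-employee ratio rising as the company grows?

def admin_burden_growing_faster(admins_then, staff_then, admins_now, staff_now):
    """True if admins-per-employee has risen, i.e. the tool is scaling
    linearly with headcount instead of absorbing the growth."""
    return (admins_now / staff_now) > (admins_then / staff_then)

# One administrator at 20 people, three at 50: the ratio rose from 0.05 to 0.06.
print(admin_burden_growing_faster(1, 20, 3, 50))  # True
```

A ratio that holds steady or falls as headcount grows suggests the tool is absorbing the growth; a rising ratio is the early warning that the next doubling will hurt.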