
Every new tool has to connect to the ones already there — and the most capable platform in a category is often the worst fit for a specific stack.
The average mid-market company runs 187 SaaS applications, according to Productiv's 2023 State of SaaS report. Enterprise organizations average over 470. Each of those tools generates data, consumes data, or both — and each connection between them is a dependency that any new addition has to respect or disrupt.
This is the part of technology selection that most recommendation lists skip entirely. A "best CRM" list doesn't know what your marketing automation platform is, what your accounting system expects, or whether your customer data flows through a warehouse or lives in spreadsheets someone emails around on Fridays. But those facts do more to determine whether any CRM will actually work for you than any feature comparison ever could.
In The Problem with Generic Tech Recommendations, I wrote about this as one of eight variables that shape whether a recommendation holds. Of all eight, this one — what's already running — is the variable people underestimate most consistently.
When a vendor says their platform "integrates with over 300 tools," what they usually mean is that they have pre-built connectors for 300 tools, some subset of which are actively maintained, and a smaller subset of which actually transfer the data fields your workflows depend on.
There's a meaningful difference between "we integrate with your accounting platform" and "we sync the specific custom fields your accounting team uses to reconcile invoices." The first is a marketing claim. The second is what determines whether the integration saves time or creates a second manual process.
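That difference is testable before you sign. Here is a minimal sketch of the check worth running: compare the fields your workflows actually depend on against what the connector reports it syncs. The field names and the connector's supported list below are hypothetical stand-ins for whatever your accounting team actually uses.

```python
# Minimal sketch: does a vendor's connector cover the fields we depend on?
# All field names and the connector's supported list are hypothetical.

REQUIRED_FIELDS = {
    "invoice_number",
    "po_reference",        # custom field the accounting team reconciles against
    "cost_center_code",    # custom field, not in the vendor's standard schema
    "payment_terms",
}

# What the "integrates with your accounting platform" connector actually
# syncs, per its documentation or a trial-period test.
CONNECTOR_SYNCED_FIELDS = {
    "invoice_number",
    "payment_terms",
    "amount",
    "due_date",
}

missing = REQUIRED_FIELDS - CONNECTOR_SYNCED_FIELDS
if missing:
    # Every field in this set becomes a manual copy step someone inherits.
    print(f"Connector does NOT cover: {sorted(missing)}")
else:
    print("Connector covers every field the workflow depends on.")
```

If the answer is a non-empty set, you've found the second manual process before it was created.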
MuleSoft's 2023 Connectivity Benchmark Report found that integration challenges are the number one barrier to digital transformation, cited by 80% of IT leaders surveyed. The problem isn't that integrations don't exist — it's that the integrations that exist don't match the specific data flows a given organization depends on.
Every tech stack is, functionally, a custom system. Even if every individual tool is off-the-shelf, the configuration, the data mappings, the automation rules, and the workarounds that connect them are unique to that organization. Adding a new tool means connecting it to that specific custom system — not to the generic version the vendor tested against.
This is the part that frustrates people. The platform that wins every feature comparison, tops every analyst quadrant, and gets recommended in every conference talk may be genuinely excellent — and genuinely wrong for your stack.
I've seen this pattern repeatedly at Kief Studio. A client adopts a project management platform because it's the market leader. It has more features than any competitor. Reviews are glowing. But it doesn't have a native connector to their industry-specific compliance tool, which means every project status update requires someone to manually copy information between systems. Three months in, the team has reverted to the old tool — not because it was better, but because it actually connected to what they needed.
The feature comparison never surfaced this. It couldn't. Feature comparisons evaluate tools in isolation. Your stack doesn't run in isolation.
A Gartner survey from 2024 found that 56% of organizations reported significant unplanned integration costs after adopting a new platform. Not because the platform was bad. Because the platform assumed a stack configuration that didn't match what was actually running.
Before evaluating any new tool, you need a clear picture of what it has to connect to — and how. This isn't a theoretical exercise. It's a practical inventory (sketched as structured data after this list):

- The systems the new tool must exchange data with, including the industry-specific ones that never appear on a vendor's connector list.
- The specific fields that have to move between them, especially custom fields, which generic connectors rarely cover.
- The direction and frequency of each flow: one-way or two-way, real-time or batch.
- Any existing bridges (custom connectors, workaround scripts, manual transfers) and who maintains each one.
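One way to keep that inventory honest is to write it down as structured data rather than a wiki page that drifts. A minimal sketch, where every system and field name is a hypothetical placeholder:

```python
# A stack-integration inventory as plain data. Every name here is a
# hypothetical placeholder; the point is the shape, not the contents.

inventory = [
    {
        "system": "CRM",
        "connects_to": "invoicing",
        "fields": ["account_id", "po_reference", "billing_contact"],
        "direction": "two-way",
        "frequency": "hourly batch",
        "bridge": "vendor-native connector",
        "owner": "revops",
    },
    {
        "system": "project management",
        "connects_to": "compliance tool",
        "fields": ["project_status", "audit_flag"],
        "direction": "one-way",
        "frequency": "manual",           # a human copies this today
        "bridge": "none (manual copy)",  # candidate for integration debt
        "owner": "pm team",
    },
]

# Any new tool gets evaluated against this list: which flows does it
# replace, which does it break, and which does it leave manual?
manual_flows = [row for row in inventory if row["frequency"] == "manual"]
print(f"{len(manual_flows)} flow(s) still depend on a human copying data.")
```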
This inventory isn't exciting. It doesn't show up in product demos. But it's the difference between a tool that fits and a tool that creates what I call integration debt — ongoing engineering cost (a form of total cost of ownership that rarely appears in the initial budget) to keep a connection working that should have been native.
Integration debt works like financial debt. A small amount is manageable. It compounds when you ignore it.
Every custom connector, every manual data transfer, every workaround script that someone wrote to bridge two systems that don't natively talk to each other — that's a maintenance obligation. When either system updates, the bridge may break. When the team member who built the workaround leaves, the institutional knowledge of how it works often leaves with them.
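The usual failure mode is silent: the upstream system renames or drops a field in an update, and the bridge keeps running while syncing incomplete records. A minimal defensive sketch, assuming a hypothetical upstream payload, is to validate the schema on every run and fail loudly instead:

```python
# Sketch of the habit that makes a workaround bridge survivable: validate
# the upstream schema on every run and fail loudly, instead of silently
# syncing incomplete records. The payload and field names are hypothetical.

EXPECTED_KEYS = {"invoice_id", "po_reference", "amount", "currency"}

def validate_upstream(record: dict) -> dict:
    """Raise immediately if the upstream system changed its export schema."""
    missing = EXPECTED_KEYS - set(record)
    if missing:
        # A loud failure on Monday morning beats weeks of bad syncs.
        raise RuntimeError(f"Upstream schema changed; missing: {sorted(missing)}")
    return record

# Simulated export after a vendor update renamed po_reference to po_ref.
upstream_record = {"invoice_id": "INV-1042", "po_ref": "PO-77",
                   "amount": 1200, "currency": "USD"}

try:
    validate_upstream(upstream_record)
except RuntimeError as err:
    print(err)  # Upstream schema changed; missing: ['po_reference']
```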
A 2023 study by the Consortium for Information and Software Quality estimated that the cost of poor software quality in the US reached $2.41 trillion, with a significant portion attributed to technical debt including integration failures. At the organizational level, it's concrete: it's the engineer who spends every Monday morning fixing the sync between your CRM and your invoicing system instead of building something that moves the business forward.
The best way to manage integration debt is to not take it on unnecessarily. That means evaluating new tools against your actual stack — not against a feature checklist, not against a competitor's stack, and not against a recommendation list written by someone who's never seen your systems.
Request a technical integration review, not a sales demo. Ask the vendor specifically about the systems you need to connect and the data fields you need to transfer. If the vendor can only demonstrate generic integrations and can't speak to your specific configuration, that's a signal — the integration you need may not exist at the depth you require. Trial periods that include integration testing are more valuable than feature evaluations.
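The single most useful trial-period test is a round trip: push a record carrying your real custom fields through the vendor's integration, read it back, and see what survived. A minimal sketch of that check, with a stub standing in for whatever sync API the vendor actually exposes:

```python
# Trial-period smoke test: do our custom fields survive a round trip
# through the vendor's connector? The connector here is a stub; in a real
# trial you would push via the vendor's sync and re-read the record.

def vendor_sync_roundtrip(record: dict) -> dict:
    """Stub standing in for push-to-vendor followed by read-back."""
    # Simulates a connector that silently drops fields it doesn't map.
    SUPPORTED = {"name", "email", "invoice_number"}
    return {k: v for k, v in record.items() if k in SUPPORTED}

test_record = {
    "name": "Acme Corp",
    "email": "ap@acme.example",
    "invoice_number": "INV-2001",
    "cost_center_code": "CC-44",  # the custom field reconciliation depends on
}

result = vendor_sync_roundtrip(test_record)
dropped = set(test_record) - set(result)
if dropped:
    print(f"FAIL: connector dropped {sorted(dropped)} on round trip")
else:
    print("PASS: all fields survived the round trip")
```

A vendor that can't support this test during a trial is telling you something the demo won't.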
The direct cost is the engineering time to build, maintain, and fix custom connectors. The indirect cost is higher: data inconsistency across systems, manual workarounds that consume staff time, delayed decisions because information isn't where it needs to be, and the compounding effect of maintaining bridges that break every time a connected system updates. MuleSoft's research suggests organizations spend an average of $3.5 million annually on integration-related challenges.
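As a back-of-envelope illustration of the direct cost alone, the arithmetic below compounds quickly. Every number is an assumption to replace with your own figures, and it deliberately ignores the indirect costs, which are usually larger:

```python
# Back-of-envelope model of direct integration-debt cost. Every number
# is an illustrative assumption, not a benchmark.

engineer_hourly_cost = 120       # fully loaded $/hour, assumed
build_hours = 80                 # initial custom connector build, assumed
maintenance_hours_per_month = 6  # routine fixes as either system updates
breakage_incidents_per_year = 4  # assumed; each needs extra response hours
hours_per_incident = 10

year_one = engineer_hourly_cost * (
    build_hours
    + maintenance_hours_per_month * 12
    + breakage_incidents_per_year * hours_per_incident
)
ongoing_per_year = engineer_hourly_cost * (
    maintenance_hours_per_month * 12
    + breakage_incidents_per_year * hours_per_incident
)

print(f"Year one:        ${year_one:,}")         # Year one:        $23,040
print(f"Each year after: ${ongoing_per_year:,}")  # Each year after: $13,440
```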
Not always — but integration fit should carry more weight than most organizations give it. A tool with fewer features that connects cleanly to your existing systems will typically deliver more value than a feature-rich tool that requires custom engineering to communicate with everything around it. The exception is when the new tool is so transformative that it justifies rebuilding adjacent workflows. That happens, but less often than vendors suggest.