Recommending infrastructure that requires dedicated engineering to maintain — to a team with none — isn't a recommendation. It's a liability with a polished interface. The gap between capability and operability is where implementations die.
Gartner's 2024 research on technology adoption found that 47% of digital transformation initiatives fail to meet their objectives, with "insufficient skills to implement and operate new technology" cited as the primary or contributing factor in the majority of cases. Not insufficient budget. Not insufficient features. Insufficient skills. The tool could do what was needed. The team couldn't operate it at the level required to get there.
This is the variable I return to most often from the eight I outlined in The Problem with Generic Tech Recommendations. Team skills and technical savviness determine the effective ceiling of any tool — and the gap between what a platform can do and what the team can operate is where implementations go to die.
Every tool has two ceilings. The first is its capability ceiling — the upper bound of what the platform can technically accomplish when configured and operated by someone with full expertise. The second is its operability ceiling — the upper bound of what the team that actually uses it can accomplish given their skills, time, and attention.
The capability ceiling is what appears in product demos. The operability ceiling is what appears in production.
A business intelligence platform capable of real-time dashboards, predictive modeling, and custom SQL queries has a high capability ceiling. If the team using it consists of marketing managers who need monthly reports and have no SQL training, the operability ceiling is "pre-built templates with drag-and-drop filters." The distance between those two ceilings is wasted license cost — the organization pays for capabilities it will never access.
The issue isn't that the team is deficient. The team has the skills appropriate to their roles. The issue is that the tool was selected based on its capability ceiling rather than the team's operability ceiling. That selection process — evaluate the tool's maximum potential, assume the team will rise to meet it — is backwards. The tool should meet the team where they are.
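The gap between the two ceilings can be made concrete as simple arithmetic: license spend only produces value up to the operability ceiling, and everything above it is cost without return. A toy sketch of that calculation (the function, feature counts, and dollar figures are all hypothetical, not a real pricing model):

```python
def wasted_license_cost(annual_cost: float,
                        capability_features: int,
                        operable_features: int) -> float:
    """Estimate annual spend on capability the team cannot operate.

    The capability ceiling is what the platform can do; the operability
    ceiling is what this team actually uses. Spend on the difference
    is wasted.
    """
    if capability_features <= 0:
        raise ValueError("capability_features must be positive")
    operable = min(operable_features, capability_features)
    unused = capability_features - operable
    return annual_cost * unused / capability_features

# Hypothetical BI platform: 200 licensed features, team operates 30.
print(wasted_license_cost(24_000, 200, 30))  # → 20400.0
```

The point of the sketch is the selection criterion it implies: evaluating a tool by `capability_features` alone ignores the only variable the organization controls at purchase time, which is how closely the tool's demands match `operable_features`.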
Vendor demos show best-case scenarios operated by people who use the product every day. The time from deployment to the demo's level of proficiency is the adoption curve, and it's almost always longer and more resource-intensive than expected.
Research from Whatfix's 2024 Digital Adoption Report found that 73% of employees feel they don't receive adequate training on new workplace technologies. Not because training wasn't offered — but because the training provided didn't match the learning curve of the tool being adopted. Feature-rich platforms have longer adoption curves. Complex configuration requirements extend the curve further. Tools that require technical knowledge the team doesn't have create a curve that may never flatten.
I've watched this play out at Kief Studio across dozens of client engagements. A well-intentioned consultant recommends a powerful platform. The team attends the vendor's training sessions. They complete the onboarding modules. Then they return to their work and discover that the training covered the tool's interface but not their specific workflow. The gap between "I know where the buttons are" and "I can solve my actual problems with this" is where adoption stalls. This is closely related to how well the tool matches your team's working style.
Six months later, the team is using 15% of the platform's capabilities and has built workarounds for the other 85%. The workarounds are often less efficient than the old tool. But they're familiar, and familiarity wins when the alternative is spending cognitive effort on a tool that doesn't match the team's mental model.
The most consequential version of this mismatch is recommending infrastructure-level tools to teams without infrastructure-level skills.
Self-hosted platforms, container orchestration systems, custom database deployments, CI/CD pipelines, monitoring stacks — these are powerful tools. They offer control, flexibility, and cost advantages that managed alternatives can't match. They also require engineering talent to deploy, configure, secure, update, troubleshoot, and maintain on an ongoing basis.
Recommending a self-hosted platform to a team with no engineering staff isn't a recommendation. It's a liability with a polished interface. The tool will be deployed — possibly by a contractor — and then it will sit in the state it was deployed in. Security patches won't be applied. Configuration won't be updated as needs change. When something breaks, there's no one to fix it. The "powerful" tool becomes a security risk and an operational burden that the team can't resolve.
This is where the distinction between good engineering and bolted-on solutions matters most. A system that's secure by construction — that ships with safe defaults, updates automatically, and doesn't require expert intervention to maintain a baseline of security — respects the team's actual skill level. A system that assumes expert operators is built for the team the recommender imagined, not the team that actually exists.
The honest assessment starts with mapping the team, not the tool.
The most capable platform is the one the team will actually use — not the one with the longest feature list, the most impressive demo, or the highest rating in an analyst report. Use is the metric that matters. Everything else is theoretical.
Look at what the team does independently, not what they've been trained to do. If someone attended a SQL training course but doesn't write SQL in their daily work, SQL is not an operational skill for that team. The reliable indicators: which tasks the team completes without escalating, which tools they use without requesting help, and whether, when a system breaks, they troubleshoot or immediately contact support. The answers reveal the real skill level.
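Those three indicators can be applied as a strict rubric: a skill counts as operational only when all three behaviors are actually observed, regardless of training history. A minimal sketch (the data structure and skill names are illustrative, not a validated assessment instrument):

```python
from dataclasses import dataclass

@dataclass
class SkillSignal:
    """Observed behavior on the job, not training attended."""
    name: str
    completes_without_escalating: bool
    uses_without_help: bool
    troubleshoots_when_broken: bool

def is_operational(s: SkillSignal) -> bool:
    # A skill is operational only if every indicator is observed.
    return (s.completes_without_escalating
            and s.uses_without_help
            and s.troubleshoots_when_broken)

# Hypothetical marketing team: trained on SQL, but never uses it unaided.
team = [
    SkillSignal("sql", False, False, False),
    SkillSignal("dashboard_templates", True, True, True),
]

operable = [s.name for s in team if is_operational(s)]
print(operable)  # → ['dashboard_templates']
```

Selecting against `operable` rather than the training roster is the practical difference between mapping the team and mapping the tool.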
If the tool represents a strategic investment and the organization is committed to building long-term capability around it, hiring to match the tool can be the right choice — but the hiring needs to happen before or concurrent with deployment, not after. If the tool is one of many operational decisions and doesn't warrant headcount changes, select the tool to match the team. The deciding factor is whether the tool is important enough to reshape the team around.
Consistent signals include: the team uses less than 20% of the tool's features after six months, workarounds and shadow systems appear alongside the tool, support ticket volume remains high after the onboarding period, and the tool requires one or more people to spend a disproportionate amount of their time on administration rather than the work the tool was supposed to support. If the tool is creating as much work as it's saving, the complexity exceeds the team's operational capacity.
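The signals above can be checked mechanically in a periodic tool review. A hedged sketch that flags a tool for reassessment when they trip (the 20%-after-six-months threshold comes from the paragraph above; the ticket and admin-hour cutoffs are invented placeholders a team would calibrate for itself):

```python
def mismatch_signals(feature_use_pct: float,
                     months_deployed: int,
                     shadow_systems: int,
                     tickets_per_month: int,
                     admin_hours_per_week: float) -> list:
    """Return the complexity-mismatch signals a tool currently shows."""
    signals = []
    if months_deployed >= 6 and feature_use_pct < 20:
        signals.append("under 20% feature use after six months")
    if shadow_systems > 0:
        signals.append("workarounds or shadow systems in place")
    # Placeholder thresholds below; tune to the team's baseline.
    if months_deployed >= 3 and tickets_per_month > 10:
        signals.append("support volume still high after onboarding")
    if admin_hours_per_week > 10:
        signals.append("disproportionate admin burden")
    return signals

# Hypothetical six-month review: every signal fires.
print(mismatch_signals(15, 6, 3, 12, 14))
```

A non-empty result doesn't mandate replacement; it means the tool's complexity has exceeded the team's operational capacity and the selection deserves a second look.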