
A recommendation that assumes specific infrastructure is available everywhere may not survive contact with the actual operating environment. Hardware constraints are real, common, and routinely ignored during software evaluation.
Cisco's 2025 Global Networking Trends Report found that 40% of enterprise network infrastructure is aging or obsolete. Not "could be upgraded for better performance" — aging to the point where it constrains what software the organization can run. That statistic doesn't account for field environments, multi-site operations with inconsistent connectivity, or the reality that many teams are running hardware purchased under a budget cycle that predated the software they're now being asked to adopt.
In The Problem with Generic Tech Recommendations, I listed equipment and existing hardware as one of eight variables that should inform any technology recommendation. It's the one that surfaces last in most evaluations — usually after the contract is signed, during implementation, when someone discovers that the software requires bandwidth, processing power, or device capabilities that parts of the organization simply don't have.
Software demos run on current-generation hardware, fast and stable internet connections, and clean test data. This is reasonable for demonstrating capability. It's misleading for evaluating fit.
A warehouse management system that performs beautifully on a demo laptop with a fiber connection behaves differently on a five-year-old rugged tablet over a cellular hotspot in a steel-framed building. A field service application that loads in two seconds on the vendor's conference room Wi-Fi takes forty-five seconds when a technician is on-site at a rural facility with intermittent LTE coverage. A collaboration platform that streams video seamlessly in the vendor's office becomes unusable at a location where the shared internet connection is already saturated by operational systems.
These aren't edge cases. For companies operating across multiple physical locations — retail chains, healthcare networks, manufacturing operations with distributed facilities, field service organizations — the worst-performing environment is the one that determines whether the software is viable. This is why recommendations that skip discovery fail so consistently — they're tested against the demo, not the deployment. The demo shows the ceiling. The deployment reveals the floor.
The constraints are more varied than most evaluations account for: device age and model, operating system version, available memory, on-site bandwidth, and the reliability of connectivity all differ across locations, and any one of them can be the thing that breaks a deployment.
The goal isn't to reject any software that doesn't run on the oldest device in the fleet. It's to make the hardware constraint visible before the purchasing decision, so the total cost and timeline include what's actually required to make the software work.
Hardware constraints are unglamorous. They don't appear in feature comparison matrices. Vendors don't volunteer that their platform performs poorly on aging equipment, because the comparison framework rewards capability, not compatibility. The evaluation process — demos, feature comparisons, reference checks — is designed to surface what the software can do, not where it breaks.
The result is a consistent pattern: organizations select software based on capability, discover hardware constraints during implementation, absorb unbudgeted hardware costs to make the software work, and either succeed at a higher total cost than planned or deploy partially, with the locations that needed the software most getting it last because their infrastructure needed the most work. Every step in that sequence pushes total cost of ownership well past what the evaluation projected.
This is preventable. Not by choosing less capable software, but by making the deployment environment part of the evaluation from the beginning. The hardware your team actually uses is a constraint that the recommendation has to survive contact with — and a recommendation that ignores it isn't a recommendation for your organization. It's a recommendation for a hypothetical one.
Start by documenting the actual hardware in use across all deployment locations — device models, OS versions, available memory, and connection speeds. Then request a trial or proof-of-concept deployment on the oldest and least capable devices in your fleet, not just the newest ones. If the vendor's minimum requirements exclude a significant portion of your equipment, factor the hardware refresh cost into the total cost of ownership before comparing options.
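As a sketch of what that check can look like in practice, here is a minimal example that flags fleet devices falling below a vendor's stated minimums and estimates the refresh cost to fold into TCO. Everything in it is an assumption for illustration: the device records, the vendor minimums, and the per-device replacement cost are invented, not drawn from any real product.

```python
# Hypothetical sketch: compare a device inventory against a vendor's stated
# minimum requirements and estimate the hardware refresh cost for TCO.
# All device data, minimums, and prices here are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Device:
    site: str
    model: str
    os_version: float     # e.g. Android 11 -> 11.0
    ram_gb: int
    downlink_mbps: float  # measured on-site, not the plan's advertised speed

# Assumed vendor minimums for this example
MIN_OS = 12.0
MIN_RAM_GB = 4
MIN_DOWNLINK_MBPS = 5.0
REPLACEMENT_COST_USD = 1_200  # assumed per-device refresh cost

fleet = [
    Device("warehouse-a", "rugged-tab-2019", 10.0, 3, 2.5),
    Device("warehouse-a", "rugged-tab-2023", 13.0, 6, 25.0),
    Device("field-van-7", "rugged-tab-2019", 11.0, 3, 1.8),
]

def below_minimums(d: Device) -> bool:
    # A device fails if any single requirement is unmet.
    return (d.os_version < MIN_OS
            or d.ram_gb < MIN_RAM_GB
            or d.downlink_mbps < MIN_DOWNLINK_MBPS)

failing = [d for d in fleet if below_minimums(d)]
refresh_cost = len(failing) * REPLACEMENT_COST_USD

print(f"{len(failing)}/{len(fleet)} devices below vendor minimums")
print(f"Estimated hardware refresh to add to TCO: ${refresh_cost:,}")
```

The point of running something like this before contract signing is that the refresh number lands in the comparison spreadsheet next to the license cost, instead of surfacing as a surprise during rollout.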
Should you just upgrade the hardware first? Sometimes, but the sequencing matters. Upgrading hardware to meet the requirements of a specific software product locks you into that product's infrastructure assumptions. A better approach is to assess your hardware environment independently, plan upgrades based on operational needs and lifecycle, and then evaluate software within the constraints of the realistic upgrade timeline. Hardware refresh cycles and software adoption cycles don't have to be coupled, and coupling them often creates budget pressure that compresses evaluation time.
Prioritize software that supports offline or low-bandwidth operation — the ability to queue data locally and sync when connectivity returns. This isn't a niche requirement. Field service, logistics, healthcare, retail, agriculture, and construction all routinely operate in environments where connectivity is intermittent. Any software evaluation for these contexts should include explicit testing under degraded network conditions, not just confirmation that an "offline mode" exists in the feature list.
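To make "offline mode" a testable behavior rather than a checkbox, it helps to know what the store-and-forward pattern actually looks like. The sketch below is a minimal illustration, assuming a local SQLite queue; the `send` callable is a stand-in for whatever upload API the real product exposes, not a reference to any vendor's interface.

```python
# Minimal sketch of store-and-forward: write records to a local SQLite queue
# first, then drain the queue whenever connectivity returns. The `send`
# callable is a placeholder for the real upload call; all of this is
# illustrative, not any specific product's implementation.

import json
import sqlite3
import time

def open_queue(path: str = "offline_queue.db") -> sqlite3.Connection:
    conn = sqlite3.connect(path)
    conn.execute("CREATE TABLE IF NOT EXISTS queue "
                 "(id INTEGER PRIMARY KEY, payload TEXT, created REAL)")
    return conn

def enqueue(conn: sqlite3.Connection, record: dict) -> None:
    # Always write locally first; connectivity is only checked at sync time.
    conn.execute("INSERT INTO queue (payload, created) VALUES (?, ?)",
                 (json.dumps(record), time.time()))
    conn.commit()

def sync(conn: sqlite3.Connection, send) -> int:
    # Drain oldest-first; stop at the first failure so ordering holds and
    # the remaining records are retried on the next sync attempt.
    sent = 0
    rows = conn.execute("SELECT id, payload FROM queue ORDER BY id").fetchall()
    for row_id, payload in rows:
        if not send(json.loads(payload)):
            break  # still offline; leave the rest queued
        conn.execute("DELETE FROM queue WHERE id = ?", (row_id,))
        conn.commit()
        sent += 1
    return sent

if __name__ == "__main__":
    conn = open_queue(":memory:")
    enqueue(conn, {"site": "field-van-7", "reading": 42})
    # In production, `send` wraps the real client call; here a stub that
    # pretends the network is up.
    print(f"synced {sync(conn, send=lambda payload: True)} record(s)")
```

Stopping at the first failed upload keeps records in order and makes retry trivial; the tradeoff is head-of-line blocking, which is usually acceptable for operational data. A vendor's offline story should answer the same questions this sketch raises: where records live while disconnected, what order they sync in, and what happens to a half-drained queue.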