psychology • 5 min read

The Bias You Don't See Is the One Making Your Technology Decisions

AI doesn't just reflect training data bias — it amplifies the biases of the people using it. Five cognitive shortcuts that derail technology decisions, and the structured processes that counteract them.

Harvard Business Review published a finding in January 2026 that stopped me mid-scroll: AI doesn't just reflect the biases in its training data — it amplifies the biases of the people using it. The tools are neutral. The operators aren't. And the gap between those two facts is where most technology decisions go wrong.

I studied this problem from the behavioral science side before I ever touched a line of code for a client. Five certifications in psychology, consumer behavior, and NLP later, the pattern is always the same: the decision to adopt, reject, or ignore a technology is rarely about the technology. It's about the cognitive shortcuts the decision-maker is running — and whether anyone in the room knows how to spot them.

Kahneman and Tversky (1979) demonstrated that people feel loss twice as intensely as equivalent gain. In technology, this means teams tolerate broken tools rather than face the perceived loss of switching.

Loss aversion is the silent budget killer

Kahneman and Tversky's prospect theory (1979) demonstrated that people feel the pain of a loss roughly twice as intensely as the pleasure of an equivalent gain. In technology decisions, this shows up as the sunk cost trap — continuing to pay for software that doesn't work because switching feels like admitting the original purchase was a mistake.
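For readers who want the formal version, prospect theory's value function makes the asymmetry explicit. Here is a minimal sketch of the standard piecewise form; the loss-aversion coefficient λ is what the "roughly twice" claim refers to, and the parameter estimates shown are the commonly cited ones from Tversky and Kahneman's later (1992) follow-up, which vary across studies:

v(x) =
\begin{cases}
  x^{\alpha} & \text{if } x \ge 0 \\
  -\lambda\,(-x)^{\beta} & \text{if } x < 0
\end{cases}
\qquad \lambda \approx 2.25, \quad \alpha \approx \beta \approx 0.88

With λ above 2, a loss carries roughly twice the psychological weight of a same-sized gain, which is exactly the asymmetry that keeps broken tools in place.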

A 2024 study from Flevy found that loss aversion is the single most predictive cognitive bias in enterprise technology adoption decisions. Teams will tolerate a tool that costs them 15 hours a week in workarounds rather than face the perceived "loss" of migrating to something better. The math never supports this. The psychology always does.
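To make "the math never supports this" concrete, here is a back-of-the-envelope comparison in Python. Every number except the 15 hours a week is an assumption for illustration, not a figure from the Flevy study:

# Cost of tolerating workarounds vs. switching.
# All figures except the 15 hours/week are illustrative assumptions.
workaround_hours_per_week = 15      # from the scenario above
loaded_hourly_rate = 75             # assumed fully loaded cost per hour, USD
working_weeks_per_year = 48         # assumed

annual_workaround_cost = workaround_hours_per_week * loaded_hourly_rate * working_weeks_per_year
# 15 * 75 * 48 = 54,000 per year, every year the tool stays

migration_cost = 40_000             # assumed one-time cost of switching
payback_months = migration_cost / (annual_workaround_cost / 12)

print(f"Annual cost of the status quo: ${annual_workaround_cost:,}")
print(f"Payback period for switching:  {payback_months:.1f} months")   # ~8.9 months

The point isn't the specific numbers. It's that the spreadsheet version of the decision usually resolves in months, while the loss-averse version never resolves at all.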

This is why the problem usually isn't you — it's the invisible weight of decisions made before you arrived. The software was chosen by someone who's no longer there, and nobody wants to be the person who says the emperor has no clothes.

The Behavioural Insights Team found that business leaders often respond to cognitive bias concepts with nodding and passive agreement — but no follow-up. The bandwagon effect operates the same way in technology adoption.

Confirmation bias turns vendor demos into theater

Once a decision-maker has a preference — even a subconscious one — they filter every piece of incoming information to confirm it. Columbia University's risk management research (November 2025) found that traditional enterprise risk frameworks "assume rational decision-making, even though real-world outcomes are shaped by cognitive heuristics and behavioral tendencies."

In practice, this means the vendor demo is already decided before it starts. The buyer liked the sales rep. The logo looked enterprise-grade. A competitor uses it. Everything after that is confirmation theater — the team watches a curated walkthrough and nods because the comparison already happened in their head, not on a spreadsheet.

The fix isn't more demos. It's structured evaluation criteria written before the first call — something I covered in depth in how to actually evaluate a technology vendor. When the rubric exists before the pitch, confirmation bias has less room to operate.
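One minimal shape such a rubric can take, drafted before any vendor contact. The criteria, weights, and passing threshold below are placeholders for illustration, not a prescription from that post:

# A pre-demo evaluation rubric, written before the first vendor call.
# Criteria, weights, and the passing threshold are illustrative placeholders.
rubric = {
    "solves_a_measured_problem": {"weight": 0.35, "score": None},  # tied to a problem we quantified first
    "migration_effort":          {"weight": 0.25, "score": None},  # data export, retraining, downtime
    "three_year_total_cost":     {"weight": 0.20, "score": None},  # licenses, integration, support
    "vendor_stability":          {"weight": 0.10, "score": None},  # references, roadmap, SLAs
    "security_and_compliance":   {"weight": 0.10, "score": None},
}

PASS_THRESHOLD = 3.5  # fixed before the pitch, so the demo can't move the goalposts

def weighted_score(rubric):
    # Scores get filled in on a 1-5 scale after the demo; the weights were locked beforehand.
    return sum(c["weight"] * c["score"] for c in rubric.values())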

The bandwagon effect scales with your board deck

The Behavioural Insights Team (BIT) published research showing that business leaders often respond to behavioral science concepts with "nodding and passive agreement with a lack of follow-up." The same pattern applies to technology trends. A founder reads that their competitor adopted a specific platform. The board asks about AI strategy. Suddenly the urgency is artificial, and the decision is about optics rather than operations.

This is the bandwagon effect at enterprise scale. It's why AI suggestions sometimes make products worse — not because the technology is bad, but because the adoption was driven by social proof rather than operational need. The question that cuts through it is simple: "What specific problem does this solve that we measured before we started looking for solutions?"

An NBER 2026 working paper found that LLMs exhibit systematic behavioral biases in economic decisions — and that prompting them to 'make rational decisions' measurably reduces those biases. The same principle applies to human operators.

Status quo bias protects bad architecture

Samuelson and Zeckhauser's 1988 research on status quo bias demonstrated that people disproportionately stick with the default option, even when alternatives are objectively better. In technology, the default is whatever's already running. And the longer it runs, the harder it becomes to change, regardless of whether it's the right choice.

I've seen this with tech stack decisions dozens of times. The stack isn't chosen — it's inherited. And inherited architecture carries the biases of every person who touched it: the developer who preferred a specific framework, the manager who liked a particular vendor's pricing model, the CTO who left two years ago but whose preferences are still running in production.

Status quo bias is the reason technology fragmentation compounds silently. Nobody chose to have four overlapping tools. They just never chose to consolidate, and the default won.

Columbia University researchers recommend premortems, devil's advocates with authority, and decision journals as the three most effective interventions against cognitive bias in enterprise risk management.

How to design decisions that resist bias

Columbia's risk management researchers recommend three practical interventions that I've adapted for technology decisions at Kief Studio:

Premortems. Before committing to a technology decision, ask the team: "It's six months from now and this failed. What went wrong?" This surfaces objections that status quo bias and groupthink would normally suppress. It's not pessimism — it's structured dissent.

Devil's advocates with authority. Assign someone the explicit role of arguing against the decision. Not as theater — with actual authority to delay or block. The BIT research found that Operations and IT people are often more receptive to this than leadership, which makes them ideal candidates.

Decision journals. Write down what you decided, why, and what you expected to happen. Review it in 90 days. Auditing what AI is actually doing in your business follows the same principle — you can't improve what you don't measure, and you can't measure what you didn't write down.
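A decision journal needs no special tooling. The sketch below is one possible shape for an entry; the field names are an assumption, though the 90-day review window mirrors the practice described above:

# A minimal decision-journal entry. Field names are illustrative, not a standard.
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class DecisionEntry:
    decision: str           # what was decided
    rationale: str          # why, including the alternatives that were rejected
    expected_outcome: str   # what we expect to be true at review time
    decided_on: date = field(default_factory=date.today)

    @property
    def review_on(self) -> date:
        # Review the entry in 90 days, per the practice above.
        return self.decided_on + timedelta(days=90)

entry = DecisionEntry(
    decision="Consolidate two overlapping project tools onto one platform",
    rationale="Duplicate data entry costs ~6 hours/week; tool B covers both use cases",
    expected_outcome="Duplicate entry gone and one invoice instead of two by the review date",
)

Whether it lives in code, a spreadsheet, or a notebook matters far less than the fact that it exists before the outcome does.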


Frequently Asked Questions

Can cognitive bias be eliminated from technology decisions?

No. Columbia University's 2025 research is explicit on this point: "You can't eliminate bias — you can only design processes that help counteract it." The goal isn't bias-free decisions. It's decision architectures that make biases visible and accountable — premortems, structured evaluation criteria, and decision journals that create a paper trail you can audit.

How does AI amplify human cognitive biases?

Harvard Business Review's January 2026 analysis found that AI systems don't just inherit training data bias — they amplify the biases of their operators. When a user asks leading questions, the AI confirms them. When a team uses AI to validate a decision they've already made, the output reinforces the confirmation bias. The tool is neutral; the prompts aren't.

What's the most expensive cognitive bias in technology adoption?

Loss aversion. Teams will spend more maintaining a failing system than they would switching to a better one, because the perceived "loss" of admitting the original decision was wrong outweighs the measurable gain of improving operations. Flevy's research identifies it as the single most predictive bias in enterprise technology adoption decisions.

Does behavioral science training help with technology decisions?

Yes, but not the way most people expect. The value isn't in knowing the names of biases — it's in recognizing them in real time. The Behavioural Insights Team found that business leaders often nod at bias concepts but don't follow through. Applied behavioral science is a practice, not a lecture. It works when it's embedded in decision processes, not PowerPoint decks.

