Symmetrical dark hallway with pink-lit columns and a glowing angular form — Amelia S. Gagne, Kief Studio
AI Getting Started • Updated • 6 min read

What I Tell Every CEO Who Asks Me About AI

After three years of the same conversation with executives at every stage, the advice has stabilized. Here's the version I'd give if you bought me a coffee.

Every CEO asks the same question eventually. The phrasing varies — "should we be doing something with AI," "are we falling behind," "my board keeps asking about our AI strategy" — but the underlying anxiety is identical: I don't know what I don't know, and I can't tell who's selling me snake oil.

After three years of this same conversation across industries (fintech, healthcare, cannabis, legal, e-commerce, education), the advice has stabilized. Here's the version I'd give if we were sitting across a table.

Start with what's already breaking

Forget the AI roadmap for a minute. Pull up the list of things your team complains about most. The tasks that are manual, repetitive, error-prone, and that nobody wants to own. Data entry that gets done at 4:55 PM on Friday and is wrong by Monday. Reports that take three people and a spreadsheet to produce. Customer inquiries that get the same answer 80% of the time.

That's where AI goes first. Not because it's glamorous — because it's low-risk and high-evidence. If you automate a task that was taking 12 hours a week and the automation handles it in 20 minutes with fewer errors, nobody in the building questions whether "AI works." You just showed them.

McKinsey's 2025 State of AI report found that companies focusing AI adoption on operational efficiency saw 3x higher returns than those chasing customer-facing AI products in their first year. The unsexy work pays better.

Bioluminescent deep sea organisms — light and insight in the deepest unknown spaces
The most valuable discoveries happen where nobody else is looking. Depth beats surface-level coverage.

Don't buy a platform. Solve a problem.

The fastest way to waste money on AI is to buy an enterprise AI platform and then look for problems to solve with it. That's a $200,000/year answer searching for a question.

Instead: identify the problem, measure it, and then find the smallest tool that solves it. Sometimes that's a $20/month API. Sometimes it's a fine-tuned model running on your own infrastructure. Sometimes it's a Python script that calls Claude and writes the output to a spreadsheet.
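That last option is smaller than it sounds. Here is a minimal sketch of the shape of such a script; the `ask_model` callable is a placeholder for whatever LLM API wrapper you use, and the ticket text and file name are invented for illustration:

```python
import csv

def summarize_rows(rows, ask_model):
    """Run each raw record through a model and pair it with the result.

    ask_model is any callable taking a prompt string and returning text --
    in practice, a thin wrapper around your LLM API client of choice.
    """
    return [{"input": row, "summary": ask_model(f"Summarize: {row}")} for row in rows]

def write_report(results, path):
    """Write the paired inputs and summaries to a spreadsheet-friendly CSV."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["input", "summary"])
        writer.writeheader()
        writer.writerows(results)

# Stub model so the sketch runs offline; swap in a real API client.
results = summarize_rows(
    ["ticket 481: password reset loop"],
    lambda prompt: "customer stuck in reset loop",
)
write_report(results, "report.csv")
```

The point isn't this exact script; it's that the smallest tool for the job is often a page of glue code, not a platform.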

The Gartner AI hype cycle exists because vendors need you to think AI requires their platform. It doesn't. The companies seeing real ROI are the ones treating AI as a tool — like a database or a CI/CD pipeline — not as a transformation initiative with its own VP and quarterly board presentation.

Smoke trails forming double helix spiral — information encoded in organic spiraling form
Every business has a DNA — the patterns, values, and decisions that replicate across every engagement.

Your data is not ready (and that's normal)

Nearly every AI conversation I have eventually hits the same wall: the company's data isn't in a state where AI can use it reliably.

Records are scattered across four SaaS tools that don't share a schema. Customer data has duplicates, inconsistent formatting, and gaps from three CRM migrations. Financial data lives in spreadsheets that reference other spreadsheets that reference a retired database.

This isn't failure — it's the normal state of a company that's been operating and growing for years without a dedicated data engineering practice. But it means the first AI project isn't usually an AI project at all. It's a data governance project with an AI milestone at the end.
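Much of that groundwork is unglamorous normalization and deduplication. A toy sketch of the idea, with field names and sample records invented for illustration:

```python
def normalize(record):
    """Canonicalize the fields that drift across systems: case and whitespace."""
    return {
        "email": record.get("email", "").strip().lower(),
        "name": " ".join(record.get("name", "").split()).title(),
    }

def dedupe(records):
    """Keep the first record seen per canonical email; drop blanks and repeats."""
    seen = {}
    for record in records:
        clean = normalize(record)
        if clean["email"] and clean["email"] not in seen:
            seen[clean["email"]] = clean
    return list(seen.values())

merged = dedupe([
    {"email": " Pat@Example.com ", "name": "pat  doe"},  # CRM export
    {"email": "pat@example.com", "name": "Pat Doe"},     # billing system
    {"email": "", "name": "orphan row"},                 # no join key
])
```

Real merges are messier (conflicting values, fuzzy matching, audit trails), but this is the category of work that has to happen before any model sees the data.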

The World Economic Forum's January 2026 report on AI in mid-market companies confirmed this: growing SMBs with integrated tech stacks are twice as likely to see positive AI outcomes (66%) compared to companies with fragmented systems (32%). The data infrastructure matters more than the model.

Macro photography of camera lens element with light refracting through precision optics — seeing AI clearly through the right lens
McKinsey's 2025 State of AI report: companies focusing AI adoption on operational efficiency saw 3x higher returns than those chasing customer-facing AI products.

Hire for judgment, not for prompting

Your team doesn't need "prompt engineering training." They need someone who understands what AI is actually doing when it generates an answer — and more importantly, when it's confidently wrong.

AI systems hallucinate. They produce plausible-sounding output that is factually incorrect, internally inconsistent, or missing critical context. The difference between an AI deployment that saves your company money and one that creates a compliance incident is whether the human reviewing the output knows enough about the domain to catch the errors.

For regulated industries — fintech, healthcare, legal — this isn't theoretical. An AI-generated compliance report that misclassifies a transaction type could trigger a regulatory examination. A customer-facing chatbot that invents a policy could create contractual liability. The value of AI in these contexts isn't eliminating human judgment — it's amplifying it by handling the volume, so humans can focus on the decisions that actually require expertise.

Lotus flower emerging from dark still water — clarity rising from depth
The best insights come from going deeper, not wider. Depth of understanding beats breadth of awareness.

Measure the right thing

Most AI initiatives track the wrong metric. They measure "adoption" — how many people are using the tool, how many queries per day, how many departments "have AI." Adoption is a vanity metric. It tells you people are clicking a button. It doesn't tell you anything is getting better.

Measure the work output. If you deployed AI to accelerate invoice processing, measure invoice processing time and error rate — before and after. If you deployed AI to generate first drafts of client communications, measure time-to-send and client satisfaction scores. If you deployed AI to assist code review, measure defect escape rate.

The companies that get real value from AI are the ones that defined what "better" means before they bought anything.
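The before-and-after comparison can be as simple as a few lines. A sketch, with the invoice numbers invented for illustration:

```python
def outcome_delta(before, after):
    """Compare a process on output metrics, not on tool adoption."""
    return {
        "hours_saved_per_week": before["hours_per_week"] - after["hours_per_week"],
        "error_rate_before": before["errors"] / before["items"],
        "error_rate_after": after["errors"] / after["items"],
    }

# Invoice processing, measured over the same four-week window:
delta = outcome_delta(
    before={"hours_per_week": 12.0, "errors": 18, "items": 600},
    after={"hours_per_week": 0.5, "errors": 6, "items": 600},
)
```

The discipline is in collecting the "before" numbers at all; if you only start measuring after deployment, you have nothing to compare against.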

The question behind the question

When a CEO asks "should we be doing something with AI?" — what they're really asking is "am I making a mistake by not moving faster?"

The honest answer: maybe. But the bigger mistake is moving fast without direction. An AI project that automates the wrong process, uses unclean data, and gets deployed without human oversight doesn't just fail — it erodes trust in the entire initiative, making the next attempt harder to fund and staff.

Move deliberately. Start with the unsexy operational work. Fix your data. Hire people who can spot when the machine is wrong. Measure outcomes, not activity. And if someone pitches you an "enterprise AI platform" before asking what problem you're solving — keep walking.



Frequently asked questions about AI adoption for CEOs

What's the first AI project a mid-market company should try?

Internal operational automation — not customer-facing. Pick a process that's manual, repetitive, and measurable. Data entry, report generation, document classification, or customer inquiry routing are common starting points. The goal is a quick, low-risk win that builds organizational confidence and produces measurable before-and-after data.

How much should a mid-market company budget for AI?

A practical first project can run $5,000–$50,000 depending on data readiness and integration complexity. The 40-30-20-10 framework from Mejuvante is reasonable for ongoing investment: 40% for integration and data work, 30% for software and infrastructure, 20% for training and change management, 10% for ongoing operations. Avoid committing six figures to a platform before you've validated a single use case.
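Applied to a hypothetical $50,000 first project, the split works out like this (the category names are my labels, not Mejuvante's):

```python
def ai_budget_split(total):
    """Apportion an AI budget per the 40-30-20-10 framework described above."""
    shares = {
        "integration_and_data": 0.40,
        "software_and_infrastructure": 0.30,
        "training_and_change_management": 0.20,
        "ongoing_operations": 0.10,
    }
    return {line: round(total * share, 2) for line, share in shares.items()}

plan = ai_budget_split(50_000)
```

Note that the largest line item is integration and data work, not software, which matches the earlier point that data readiness is usually the real cost.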

Why do 90% of AI projects fail?

Most fail for reasons that have nothing to do with AI. Unclear business objectives, poor data quality, insufficient change management, and measuring adoption instead of outcomes account for the majority of failures. The technology works. The organizational readiness usually doesn't — which is why data governance and clear success metrics matter more than model selection.

Should we build AI in-house or hire a partner?

It depends on whether you have internal engineering capacity that understands both AI systems and your regulatory environment. Most mid-market companies don't. A partner who starts by understanding your compliance obligations before recommending an architecture is a better bet than one who leads with a tech stack and assumes compliance is your problem.

Is it too late to start with AI in 2026?

No. The hype peaked in 2023-2024, and the companies that jumped in without strategy are the ones now dealing with failed implementations and executive distrust. Starting now, with clear objectives and clean data, puts you ahead of the organizations that have to undo their first attempt before they can try again.

Work With Us

Need help building this into your operations?

Kief Studio builds, protects, automates, and supports full-stack systems for businesses up to $50M ARR.
