Business Problem First, Technology Second

Alistair Hancock · 20 March 2026

Signal and Noise: Volume One

This is the first in an occasional series called Signal and Noise — essays on the principles that have guided 37 years of building operational systems, and why they matter more in the age of AI.


The maxim

Business problem first, technology second.

It sounds obvious. So obvious that most people nod, agree, and then immediately do the opposite.

I have been saying this since the early 1990s, when the technology in question was Borland Delphi, the problem was usually a membership database, and the clients were charities and trade associations who needed their systems to actually work. The principle was simple: do not start with the tool. Start with the problem. Understand what the organisation needs to achieve, how it currently operates, where the real friction lives — and only then decide what technology, if any, to apply.

Thirty-seven years later, the tools have changed beyond recognition. The principle has not changed at all. If anything, it has become more urgent — because the tools have become so impressive that the temptation to start with them has never been greater.

What it meant in the 1990s

In the 1990s, "business problem first" was a defence against vendor lock-in and shiny-object syndrome. Technology companies sold platforms. They arrived with slide decks showing what their platform could do. The pitch was always the same: here is our product, here is how it works, now tell us where it fits in your business.

The question was backwards. It started with the solution and went looking for a problem.

The organisations that got this right — and the ones that became Rubicon's longest-standing clients — did the opposite. ACAS did not say "we want a Delphi application." They said "we need a national case management system that 1,000 users can rely on." The technology selection came after the requirements were understood, not before. That system was not replaced for over twenty years.

BAE Systems did not ask for a website. They needed a way to get 10,000 employees access to the right information. What started as "build us a website" became an enterprise-wide migration from Lotus Notes to Oracle — because the real problem was not about web pages, it was about information architecture.

ICI Paints needed to manage product specifications across 70 countries in 20 languages. The technology we built — DFinity — was a content management platform. But the business problem was global brand consistency and local compliance. The platform won awards. It served ICI for years. When they eventually replaced it, rebuilding just the Dulux sites reportedly cost over 18 million pounds. The system lasted because it was built around the business problem, not around a technology fashion.

What it means now with AI

The current wave of AI adoption has produced the most acute case of technology-first thinking in my career.

The conversation in boardrooms across the country follows a predictable pattern. The board asks: "What are we doing about AI?" The technology team responds with a proof of concept. The proof of concept looks impressive in a demo. A budget gets allocated. A project gets launched. Six months later, the project is either abandoned, scaled back, or running without measurable impact — and the board asks the question again.

This pattern is not anecdotal. It is the statistical norm.

BCG published research in 2024 finding that 74% of companies struggle to achieve and scale value from their AI initiatives. Not 74% of small companies. Not 74% of companies without technical talent. 74% across the board. The PwC 2026 Global CEO Survey — 4,454 CEOs across 95 countries — found that 56% report getting nothing from their AI investments. Harvard Business Review reported in March 2026 that 71% of CIOs say AI budgets will be frozen or cut within two years if value cannot be demonstrated.

The numbers are stark. But the cause is not complicated. These projects fail because they start with the technology. "We need an AI strategy" is not a business problem. "Our compliance reviews take three days and they should take three hours" is a business problem. "Customer churn increased 15% and we do not understand why" is a business problem. "We have 200 inbound leads a day and no way to prioritise them" is a business problem.

AI may be the right solution to some of these problems. It may not be. A rules engine might be better. A process change might be better still. Sometimes the answer is to stop doing the thing entirely. But you cannot know which solution is right until you have properly understood the problem — and "properly understood" does not mean a one-hour workshop with sticky notes. It means the kind of diagnostic work that most organisations, in their rush to "do something with AI," skip entirely.

Don't do the wrong thing faster

There is a companion maxim that goes with "business problem first." It is this: don't do the wrong thing faster.

Most AI pitches are about speed. Process documents faster. Generate content faster. Analyse data faster. Respond to customers faster. Speed is the headline. Speed is what gets budget approval. Speed is what the demos show.

But speed applied to the wrong process is not progress. It is waste with better tooling.

If your compliance process takes three days because it involves unnecessary steps, outdated criteria, and manual handoffs between people who do not need to be involved, then automating that process with AI gives you a three-hour version of a broken process. You have not solved the problem. You have accelerated it.

Einstein — or at least the quote commonly attributed to him — put it precisely: "If I had an hour to solve a problem, I'd spend 55 minutes thinking about the problem and 5 minutes thinking about solutions." The ratio is deliberately extreme. The point is that the quality of your solution is determined entirely by the quality of your problem definition.

This is the step that most AI projects skip. The diagnostic discipline — the 55 minutes — is where the value actually lives. It is slower. It is less exciting. It does not produce a demo for the board meeting. But it is the difference between the 26% of organisations that get value from AI and the 74% that do not.

How this plays out at Rubicon

When a client comes to us and says "we need AI," the first thing we do is resist the temptation to agree.

We start with the Discovery Session — a free, two-hour working session where we do not discuss technology at all. We discuss the business. What are the actual pain points? Where does time get wasted? Where do decisions get stuck? What information is missing? What information is available but buried? Which processes exist because they have always existed, and which exist because they need to?

This is not a novel approach. It is the approach that has kept clients with Rubicon for over twenty years. Norton Finance signed in 2005. Market Harborough Building Society signed around the same time. They have stayed not because we built them exciting technology but because we understood their businesses well enough to build technology that genuinely served their operations.

The AI Clarity Score assessment that we use with new clients is built on the same diagnostic philosophy. It measures four dimensions — signal versus noise, decision confidence, execution discipline, and AI governance — because those are the dimensions that determine whether AI creates value or creates risk. The score does not ask "how much AI are you using?" It asks "how clearly can you see?" Because an organisation that cannot see clearly should not be automating anything.
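To make the shape of that diagnostic concrete, here is a purely illustrative sketch of a four-dimension score. The dimension names come from the article; everything else — the 0–100 scale, the equal weighting, the readiness thresholds, and the function names — is an assumption for illustration, not Rubicon's actual scoring method.

```python
# Illustrative sketch only: dimension names are from the article;
# the 0-100 scale, equal weights, and thresholds are assumptions.

DIMENSIONS = (
    "signal_vs_noise",
    "decision_confidence",
    "execution_discipline",
    "ai_governance",
)

def clarity_score(ratings: dict) -> float:
    """Average the four dimension ratings (each 0-100) into one score."""
    missing = [d for d in DIMENSIONS if d not in ratings]
    if missing:
        raise ValueError(f"missing dimensions: {missing}")
    return sum(ratings[d] for d in DIMENSIONS) / len(DIMENSIONS)

def readiness(score: float) -> str:
    """Map a score to a hypothetical readiness band."""
    if score >= 75:
        return "ready to automate"
    if score >= 50:
        return "diagnose before deploying"
    return "do not automate yet"

# Example: governance lags the other dimensions, which drags the
# overall score down - the point being that a weak dimension should
# block automation, not be averaged away in a real assessment.
example = {
    "signal_vs_noise": 60,
    "decision_confidence": 70,
    "execution_discipline": 55,
    "ai_governance": 40,
}
print(clarity_score(example), readiness(clarity_score(example)))
```

Note that a simple average is the weakest possible design here: an organisation with strong execution but no governance can still post a middling score. A real assessment would more plausibly gate on the minimum dimension, for exactly the reason the article gives — an organisation that cannot see clearly should not be automating anything.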

The most expensive AI mistakes

The most expensive AI mistakes are not hallucinations. They are not obvious errors. They are correct applications of AI to the wrong problem.

When an organisation automates a broken process, it does not just fail to capture value — it makes the process harder to fix. The automation becomes a dependency. People build workflows around it. The broken assumptions are now encoded in software rather than in a spreadsheet, and encoded assumptions are much harder to question than visible ones.

When an organisation deploys AI without governance — without clear policies on what data goes where, without human oversight on critical outputs, without audit trails for AI-informed decisions — it is not just accepting risk. It is accepting risk that it cannot see, cannot measure, and cannot trace when something goes wrong.

This is why "business problem first" is not just an aphorism. It is an operational discipline. It means:

Before you deploy AI, define the problem in terms that do not mention technology. If you cannot describe the problem without using the words "AI," "machine learning," or "automation," you do not understand the problem yet.

Before you automate a process, map the process as it actually operates — not as the documentation says it should. Most processes have drifted from their documented form. Automating the documentation rather than the reality creates a system that nobody uses.

Before you measure success, define what success looks like in business terms. "We deployed AI" is not success. "Compliance reviews completed in three hours instead of three days with no increase in error rate" is success.

Before you scale, verify. One working pilot with measured results is worth more than ten proofs of concept with impressive demos. The gap between "it works in a demo" and "it works in production at nine o'clock on a Monday morning" is where most AI projects die.

The principle underneath

There is a deeper principle beneath "business problem first" that connects it to everything else I believe about technology.

Effectiveness and efficiency are not the same thing. Effectiveness is doing the right things. Efficiency is doing things right. The sequence matters enormously: be effective before you focus on efficiency. Get the direction right before you optimise the speed.

AI is the most powerful efficiency tool the world has ever seen. It can process, generate, analyse, and execute faster than any previous technology. But efficiency without effectiveness is just doing the wrong thing faster. And doing the wrong thing faster, with confidence, at scale, is how organisations create problems that take years to undo.

The organisations that are getting value from AI — the 26% in the BCG data, the ones seeing $3.70 in returns for every dollar invested in the Microsoft research — are the ones that start with the problem. They understand their operations. They know what matters and what does not. They deploy AI where it genuinely serves the business, govern it properly, and measure the results honestly.

They spend their 55 minutes on the problem. And the five minutes on the solution turns out to be more than enough.


Alistair Hancock is the founder and CEO of Rubicon Software. "Signal and Noise" is an occasional series exploring the principles behind effective technology adoption — from 37 years of building systems that work to the AI era that is testing all of them.
