Don't Do the Wrong Thing Faster

24 March 2026

Signal and Noise: Volume Four

This is the fourth in an occasional series called Signal and Noise - essays on the principles that have guided 46 years of building software, and why they matter more in the age of AI.


The maxim

Don't do the wrong thing faster.

Peter Drucker said it better, or at least more formally: "There is nothing so useless as doing efficiently that which should not be done at all." I have been saying my version for years because it is shorter and harder to ignore. The point is the same.

Effectiveness first. Efficiency second. Always in that order.

This has been a minority position for most of my career. It is now the most important idea in AI adoption - and almost nobody is talking about it.

The efficiency trap

Every AI pitch I have seen in the last two years follows the same structure. Here is your current process. Here is how long it takes. Here is our AI tool. Here is how much faster it will be. The metric is always speed. The assumption is always that the process being accelerated is the right process.

That assumption is almost never examined.

I sat in a meeting last year where a company wanted to use AI to accelerate their proposal review process. They received roughly 500 vendor proposals per quarter. Each one went through a seven-stage review involving four departments and took an average of 23 working days. They wanted AI to cut that to five days.

Reasonable-sounding request. Except that when we looked at the actual data, 80% of those proposals were from vendors who had already been rejected at least once. The process did not have a speed problem. It had a filtering problem. The first question was not "how do we review faster?" It was "why are we reviewing proposals from vendors we have already said no to?"

The AI tool they were evaluating would have accelerated a broken process. It would have produced faster rejections of proposals that should never have entered the pipeline. The company would have paid for an AI licence, integration work, and change management - all to do the wrong thing faster.

This is not an isolated case. This is the pattern.

The pattern

Here is what I see repeatedly across organisations of every size:

"We need AI to process invoices faster." Do you? Or do you need to stop receiving 30% duplicate invoices because your procurement system does not deduplicate at the point of order?

"We need AI to summarise meeting notes." Do you? Or do you need fewer meetings, shorter meetings, or meetings with actual agendas that produce decisions instead of discussions that trail off into nothing?

"We need AI to write reports faster." Do you? Or do you need to stop writing reports that nobody reads? When was the last time someone made a decision based on the monthly operations report? If the answer is "I don't know," the problem is not report speed.

"We need AI to triage customer support tickets faster." Do you? Or do you need to fix the three product issues that generate 60% of your support tickets?

In every case, the impulse is the same: take the existing process, apply AI, make it faster. The question nobody asks is whether the process should exist in its current form at all.

Speed without direction

Speed is not a virtue. Speed in the right direction is a virtue. Speed in the wrong direction is a catastrophe - and the faster you go, the worse the catastrophe gets.

This is not abstract philosophy. It is operational reality.

Consider a company that automates its lead qualification process using AI. The AI scores leads based on historical patterns. Leads that match the profile of previous conversions get prioritised. Sounds sensible.

But what if the historical conversion data is biased toward a market segment that is shrinking? What if the company's future growth depends on entering a new segment where they have almost no historical data? The AI will systematically deprioritise exactly the leads the company needs to pursue. It will do this very fast. It will do this with great confidence. And the sales team will look at the pipeline and say "AI is working - look how quickly we are qualifying leads."

They are doing the wrong thing faster. The efficiency metrics look fantastic. The strategic outcome is a disaster.

This is what Drucker understood sixty years ago. Efficiency and effectiveness are not the same thing. Efficiency is doing things right. Effectiveness is doing the right things. You need both, but the order matters absolutely. An effective process that is inefficient can be improved. An efficient process that is ineffective is just an expensive way to fail.

Why this is Rubicon's contrarian position

The AI industry does not want to have this conversation. Speed sells. "We will make your process 10x faster" is a pitch that opens wallets. "Let us first examine whether your process is worth doing at all" is a pitch that gets you shown the door.

But it is the right pitch. And it is the one Rubicon makes.

Every engagement we take on starts with the same question: what problem are you actually solving? Not what process are you trying to accelerate - what outcome are you trying to achieve? Because the process and the outcome are not the same thing, and the gap between them is where most AI investment goes to die.

This is not a popular position. It means telling potential clients things they do not want to hear. It means saying "you do not need AI for this, you need to redesign your process" when the client has already budgeted for an AI project and the board is expecting a demo next quarter. It means being the consultancy that slows things down before speeding them up.

But it works. The projects that start with "what are we trying to achieve?" consistently deliver more value than the projects that start with "how do we make this faster?" This is not opinion. It is decades of evidence, starting before I was old enough to drive.

The decision before the acceleration

This is why we built Rubicon Anchor.

Rubicon Anchor is a decision intelligence platform. Its purpose is to help organisations make better decisions - not faster decisions, not more decisions, but better ones. It provides structure, challenge, and memory to the decision-making process. It asks whether you have considered the alternatives, whether you understand the second-order effects, whether your confidence is calibrated to the evidence.

The connection to "don't do the wrong thing faster" is direct. Before you accelerate a process, Anchor helps you decide whether it is the right process to accelerate. Before you automate a workflow, Anchor helps you examine whether the workflow produces the outcome you actually need. Before you invest in speed, Anchor helps you invest in direction.

This is not popular with people who want to move fast. But it is essential for people who want to move right. And the organisations that move right - that take the time to decide before they accelerate - are the ones that are still standing when the hype cycle moves on and the bills come due.

The Drucker test

Here is a simple test for any AI initiative. I call it the Drucker test, though Drucker would probably have phrased it more elegantly.

Before you automate any process, ask three questions:

1. If this process did not exist, would we create it?

If the answer is no - if the process exists because it has always existed, because nobody has questioned it, because it was designed for a problem that no longer exists - then automating it is waste. Eliminate it. Or redesign it. But do not make it faster.

2. What outcome does this process serve?

Not what does it produce - what outcome does it serve? A report is not an outcome. A decision is an outcome. A resolved customer issue is an outcome. If you cannot draw a clear line from the process to a meaningful outcome, the process is overhead.

3. Is the process the bottleneck, or is the decision?

Often, the slow part is not the process. The slow part is the decision that follows the process. Producing a report in five minutes instead of five days does not help if the report sits in someone's inbox for two weeks before anyone acts on it. The bottleneck is human judgment, not machine speed.

If a proposed AI initiative fails any of these three questions, it is a candidate for "doing the wrong thing faster." Stop. Rethink. Decide what the right thing is. Then, and only then, figure out how to do it efficiently.

The right order

Effectiveness before efficiency. Direction before speed. Deciding right before moving fast.

This is not a fashionable position in a world that celebrates velocity above all else. But it is the position that produces results. The 74% of AI projects that fail to deliver value - the failure rates are well documented - are not failing because the technology is inadequate. They are failing because the organisations behind them are accelerating processes that should have been questioned, redesigned, or eliminated.

Don't do the wrong thing faster. It sounds like common sense. It is common sense. But it is the kind of common sense that gets drowned out by every vendor pitch, every board presentation, and every breathless article about AI transformation.

At Rubicon, we start with the problem. We decide what is worth doing. And only then do we make it fast.

The order matters.
