Decision Scenarios

Three scenarios from the build-vs-buy decision space - the class of decision where cognitive bias tends to be most costly and least visible. Each one shows how Rubicon Probity surfaces the biases at play, the verdicts that follow, and the challenge questions that change the conversation.

  • 3 scenarios walked through
  • 3 verdicts: CLEAR / CAUTION / STOP
  • 153 biases in the library behind them

In each case, the decision looks clear at the outset. The biases are working quietly underneath.


Scenario 1: The 40% price increase

The situation. A professional services firm has run its operations on a SaaS CRM for four years. The vendor announces a 40% price increase at renewal, citing infrastructure investment and new feature development. The firm uses roughly 60% of the platform's features. The IT director has begun a quiet assessment of alternatives.

The decision being made. Should the firm absorb the increase and renew, negotiate a smaller increase, switch to a cheaper SaaS platform, or commission a bespoke system?

What Rubicon Probity surfaces at the Decide stage:

  • CAUTION: Anchoring (B001). The 40% increase is dominating the conversation. The team is negotiating downward from the new price rather than evaluating whether the current platform represents value at any price. The first number in the room is shaping all subsequent judgement.
  • CAUTION: Sunk cost fallacy (B090). Four years of process build-out around this platform is being cited as a reason to stay. The cost of that integration is sunk - it should carry no weight in the forward-looking decision.
  • CAUTION: Status quo bias (B091). The team is treating the current platform as the default and alternatives as requiring justification. A structurally neutral comparison would reverse the framing: each option should justify itself on its own merits.
  • CAUTION: Loss aversion (B092). The disruption of switching is being weighted more heavily than the quantified opportunity of a better-fit system. The losses are vivid; the gains are abstract.

The challenge questions Probity raises:

  • If you had never used this platform and were evaluating it fresh today, would you select it over the available alternatives at the renewed price?
  • What is the total cost of remaining - including the renewal price, the ongoing opportunity cost of unmet capability, and the continuing integration debt - compared against the total cost of switching?
  • Who in this decision has a stake in the status quo outcome, and have they been balanced by someone with a stake in change?
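The second question is a straightforward horizon comparison. A minimal sketch, using purely illustrative figures (none of these numbers appear in the scenario) and a hypothetical `total_cost` helper:

```python
# Illustrative total-cost comparison: remaining vs switching.
# All figures are assumptions for the sketch, not the firm's actual numbers.
YEARS = 3  # comparison horizon

def total_cost(annual_licence, one_off=0.0, annual_friction=0.0, years=YEARS):
    """Cost over the horizon: one-off costs (migration, build) plus
    recurring licence fees and recurring friction (workarounds, unmet
    capability, integration debt)."""
    return one_off + years * (annual_licence + annual_friction)

# Remain: renew at the 40% increase; friction from the 40% of unused
# or missing capability continues.
remain = total_cost(annual_licence=84_000, annual_friction=20_000)

# Switch: cheaper licence, one-off migration cost, lower ongoing friction.
switch = total_cost(annual_licence=50_000, one_off=60_000, annual_friction=8_000)

print(f"remain: £{remain:,.0f}  switch: £{switch:,.0f}")
print("switch cheaper" if switch < remain else "remain cheaper")
```

The point of the structure, not the numbers: switching costs are one-off, while friction and licence costs recur, so the ranking of options can flip as the horizon lengthens.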

Scenario 2: The build that will not finish

The situation. An operations team commissioned a bespoke replacement for its procurement SaaS platform eighteen months ago. The original estimate was nine months and £180,000. The programme is now at £290,000 spent, fifteen months elapsed, and approximately 65% of the agreed scope delivered. The current estimate to completion is "another three to four months."

The decision being made. Should the firm continue investing to completion, commission an independent technical review before releasing further budget, or establish criteria under which the programme would be stopped and the firm would return to the original SaaS?

What Rubicon Probity surfaces at the Execute-stage review:

  • STOP: Escalation of commitment (B093). The case for continuation is being made by the same people who approved the original budget and scope. The programme has absorbed 161% of its original budget with no structured reforecast. The bias pattern - increasing commitment to a course of action precisely because of prior investment - is the most reliable predictor of avoidable value destruction in technology programmes.
  • CAUTION: Planning fallacy (B049). The original nine-month estimate was developed without reference to comparable programmes. The revised estimate of "three to four months" has been produced by the same team, using the same methods, under higher pressure. There is no reason to believe the calibration has improved.
  • CAUTION: Overconfidence (B106). The team's confidence in the revised estimate is not supported by the track record. A 67% overrun on timeline and a 61% overrun on budget represent significant calibration failure. Confidence intervals have not been shared; point estimates are being presented as reliable.
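The overrun figures cited above follow directly from the scenario's numbers:

```python
# Calibration check using the figures given in the scenario.
original_budget, spent = 180_000, 290_000
original_months, elapsed = 9, 15

budget_overrun = spent / original_budget - 1      # spend beyond the estimate
timeline_overrun = elapsed / original_months - 1  # time beyond the estimate

print(f"budget overrun:   {budget_overrun:.0%}")
print(f"timeline overrun: {timeline_overrun:.0%}")
```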

The challenge questions Probity raises:

  • Were exit criteria defined before this programme started? If so, have they been applied to the current state? If not, why not?
  • Would an independent reviewer, seeing the current delivery record for the first time, recommend continuation at current confidence levels?
  • What is the expected value of completing versus stopping, calculated using the actual delivery rate observed so far rather than the team's estimate?
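The third question's reforecast can be sketched with the figures the scenario gives. This is a simple rate-based extrapolation, not Probity's actual model:

```python
# Reforecast from observed delivery rate, using the scenario's figures.
spent = 290_000     # £ spent to date
elapsed = 15        # months elapsed
scope_done = 0.65   # fraction of agreed scope delivered

scope_rate = scope_done / elapsed   # scope delivered per month
burn_rate = spent / elapsed         # £ spent per month

remaining_scope = 1.0 - scope_done
months_to_finish = remaining_scope / scope_rate
cost_to_finish = months_to_finish * burn_rate

print(f"observed rate implies {months_to_finish:.1f} more months "
      f"and ~£{cost_to_finish:,.0f} more spend")
```

The observed rate implies roughly eight more months, not the "three to four" the team is quoting - which is the gap the challenge question is designed to expose.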

Scenario 3: The platform choice

The situation. A logistics company has run its sales and customer operations on Salesforce for six years. The contract is up for renewal. The commercial team argues Salesforce is essential to scale. The IT director believes the firm is paying for capabilities it does not use and that a purpose-built system would serve the operation better. The CFO wants a decision within 60 days.

The decision being made. Should the firm renew Salesforce at its current configuration and price, migrate to a mid-market SaaS platform at roughly 40% of the current cost, or commission a bespoke system designed around its specific operational model?

What Rubicon Probity surfaces at the Decide stage:

  • CAUTION: Survivorship bias (B055). The commercial team's case for Salesforce references several high-growth firms that scaled on the platform. These are the visible successes. Firms that scaled successfully on cheaper or bespoke systems - and firms that struggled with Salesforce's complexity and implementation cost - are not in the sample.
  • CAUTION: Anchoring (B001). Salesforce's pricing is functioning as the anchor for what "enterprise CRM" costs. The mid-market alternative looks inexpensive by comparison. The bespoke option looks expensive. Neither comparison is derived from first principles: what does this firm's operation actually require, and what should that cost?
  • CAUTION: Opportunity cost neglect (B072). The visible cost is the build price of the bespoke option. The ongoing cost of remaining on a platform that fits 60% of the operation's needs - in licence fees, workarounds, and constrained capability - is not being quantified.
  • CAUTION: Groupthink (B108). The evaluation is being led by a cross-functional team. The commercial team's strong advocacy for Salesforce has not been structurally challenged. No one in the room has a formal brief to argue the case against renewal.

The challenge questions Probity raises:

  • What capability does this firm actually require from its CRM, and which of the three options delivers the closest match - independent of price and brand?
  • Has the total five-year cost of ownership been modelled for all three options, including implementation, training, ongoing licence or maintenance, and the cost of workarounds for unmet capability?
  • Is anyone in this decision making the case for each option with equal rigour, or is one option the default that the others must justify displacing?
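The five-year cost-of-ownership comparison the second question asks for can be sketched as follows. Every figure is an illustrative assumption, not the firm's actual numbers, and the `tco` helper is hypothetical:

```python
# Illustrative five-year total-cost-of-ownership model for the three paths.
# All figures are assumptions for the sketch.
HORIZON = 5  # years

def tco(implementation, annual_licence, annual_workarounds, training=0.0):
    """One-off implementation and training plus recurring licence (or
    maintenance) and workaround costs over the horizon."""
    return implementation + training + HORIZON * (annual_licence + annual_workarounds)

options = {
    "renew Salesforce": tco(implementation=0, annual_licence=120_000,
                            annual_workarounds=25_000),   # cost of unmet capability
    "mid-market SaaS":  tco(implementation=40_000, annual_licence=48_000,
                            annual_workarounds=10_000, training=15_000),
    "bespoke build":    tco(implementation=250_000, annual_licence=30_000,  # maintenance
                            annual_workarounds=0),
}

for name, cost in sorted(options.items(), key=lambda kv: kv[1]):
    print(f"{name:18s} £{cost:,.0f}")
```

The structural point matches the opportunity-cost-neglect finding above: once workarounds and unmet capability are priced in as a recurring line item, the "expensive" one-off build cost and the "cheap" recurring renewal can change places.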