I Want to Speak to a Human Who Can Make Decisions

Note: This one started as a complaint letter that I never sent. It was too philosophical for a complaint. By the time I’d finished writing it, I realised it had lessons for software development too.

Every time I call a call centre, I default to speaking with a human. But last week’s experience made me realise: I don’t want to speak to just ANY human. I want to speak to a human who can make decisions.

An MOT That Revealed a Systemic Problem

Last week, I took my car for its annual service and MOT at my local Dealership branch. I’d reported two minor issues via their online check-in: stuck front sprinklers and a non-functional wheel-rim position light that had passed the MOT for years.

By midday, I received the call: “Your car failed the MOT due to faulty sprinklers and the nearside indication light. It’s £700 for both repairs. I need your verbal approval to continue working on the car.”

What is at play here? The anchor effect and time preference.
Anchor effect: by quoting a price, my interlocutor at the dealership set an anchor for the cost of the repairs.
Time preference: by calling and requesting verbal approval on the spot, they added time pressure to the decision-making process. Several things on my side could push me to decide immediately (I’m busy, I need my car tomorrow for XYZ, I have already paid for the MOT), when my best interest is to sit down, evaluate, and make the decision at my own pace.

Once I realised the cognitive biases at play in the phone call, I asked the representative to:

  1. Send me an email.
  2. Expect a call back from me in 45 minutes.

The Conversation Dance

Before calling back, I checked part prices online: lights were £90-£150, sprinklers £2-£16 each. The £700 quote suddenly looked expensive.

I called back with two goals:

  1. Challenge the lights repair: they’re not a road hazard and have passed the MOT previously at their garage.
  2. Approve the sprinkler repair (quoted at £90) and get the car back, MOT approved, the same day.

The representative’s response: “I can’t make that decision. I can only follow what the mechanics say.”

I’ll call this “corporate accountability bias”: humans trained to defer decisions to “the system.”

The manager offered a callback. His position:

  • He couldn’t verify my claim about previous MOT passes (he claimed no records are kept and quoted GDPR for this – another deferral to “the system”).
  • I needed to decide soon, as the car was taking up garage space.
  • My options: accept the £700 quote today, or go to the market for cheaper repairs and return for a discounted re-test at £22.

What is at play here? The framing effect, loss aversion, and time preference.
Framing: the manager worded my choices to emphasise convenience and urgency (“ready today!”) versus effort and delay (“go to the market, then return”). Even though the second option might be cheaper, the framing makes it feel less attractive due to the added hassle and uncertainty.
Loss aversion: by choosing not to do the repairs, I lose the cost of the MOT and have to pay £22 on my return. In my case, considering the cost of the parts and the severity of the faults, the financial pull of loss aversion is small: I stand to gain £200-£500 by going to the market.
Time preference: we are bound to prefer things solved now rather than later. In fact, I was ready to pay £90 to have it solved on the spot, even though by then I knew the cost of the sprinklers (and how to change them!).

No One Is at Fault: Systems Without Decision-Making

At this point it dawned on me: every person I spoke with was kind, polite, and capable of independent thinking. But none could make a rational decision outside their Standard Operating Procedure.

The first representative couldn’t question the mechanic’s assessment. The manager couldn’t check historical records or adjust the pricing. Each interaction defaulted against my best interest, pushing me toward the expensive, convenient option.

They were all doing their jobs perfectly! That’s precisely the problem.

The system is designed to remove human judgment from the equation. “I follow the mechanic’s indication.” “Charging £22 for re-evaluation is in DVLA guidelines.” “We don’t keep records to avoid GDPR breaches.” These aren’t decisions—they’re automated responses delivered by humans.

The Dangerous Patterns I Identified

Bringing this to software engineering and Enterprise Architecture, these are the patterns I see repeatedly in organisations struggling with productivity and delivery:

SOPs designed to reduce risk end up reducing thinking. Teams follow processes without understanding their purpose. When edge cases arise—and they always do—nobody can incorporate the signals from the environment to make contextual decisions.

Accountability shifts from outcomes to compliance. Success becomes “I followed the procedure” rather than “I solved the customer’s problem.” When delivery fails, everyone points to the process they followed correctly.

Humans become expensive chatbots. If your team can only execute predefined responses to predefined scenarios, you’ve already automated them—you just haven’t installed the software yet.
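
To make that concrete, here is a minimal sketch in Python of a team reduced to predefined responses. The scenario names and scripted lines are hypothetical, loosely drawn from this story, not any real system:

```python
# The "humans as chatbots" pattern, reduced to its essence: every known
# scenario maps to a scripted response, and everything else escalates.
SOP_RESPONSES = {
    "repair_approval": "I can't make that decision. I can only follow what the mechanics say.",
    "record_request": "We don't keep records, so I can't verify that.",
    "price_challenge": "The quote comes from the system. I can't adjust it.",
}

def handle_request(scenario: str) -> str:
    # Predefined input, predefined output. No context, no judgment,
    # no way to incorporate signals from the environment.
    return SOP_RESPONSES.get(scenario, "Let me escalate that to my manager.")

print(handle_request("price_challenge"))    # scripted answer
print(handle_request("unusual_edge_case"))  # the script has no answer
```

The point isn’t the code; it’s that a dictionary lookup doesn’t need a salary.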

This is why so many Agile transformations fail. Teams adopt Scrum ceremonies but can’t make decisions about their own work. Retrospectives identify problems, but nobody has the authority to change anything. Product Owners can’t prioritise because all decisions flow upward through approval chains. The guardrails designed to prevent bad decisions end up preventing all decisions.

What Decision-Making Actually Looks Like

In healthy organisations, humans can navigate nuance:

Context matters more than rules. A mechanic who understands that a cosmetic light fault that passed inspection last year probably isn’t a safety issue today. A team that can adjust their Definition of Done based on release criticality rather than following identical checklists for every story.

Authority lives where information lives. The person closest to the problem should have the authority to solve it (within clear constraints, perhaps). My mechanic could have said: “These lights aren’t safety-critical and passed before. I’ll pass them, but you should fix them soon.” A developer can decide whether to refactor now or take on technical debt with a clear understanding of the consequences.
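
As a sketch of what “authority within clear constraints” could look like if the guardrails were written down explicitly (the £200 threshold and all names here are hypothetical illustrations, not a recommendation):

```python
from dataclasses import dataclass

@dataclass
class Decision:
    description: str
    cost: float            # estimated financial impact, in £
    safety_critical: bool  # does it affect roadworthiness (or production)?

def can_decide_locally(decision: Decision, local_budget: float = 200.0) -> bool:
    """Decide on the spot when the decision fits within explicit guardrails.

    Guardrails should constrain the decision space, not remove it: anything
    safety-critical or above the local budget escalates; everything else is
    decided by the person closest to the information.
    """
    if decision.safety_critical:
        return False  # escalate: this genuinely needs wider review
    return decision.cost <= local_budget

# The mechanic's call from this story: a cosmetic fault that passed before.
light_fix = Decision("cosmetic position light, passed MOT last year", 90.0, False)
print(can_decide_locally(light_fix))  # True: decide here, don't escalate
```

Notice that the constraint is explicit, visible, and open to debate, which is exactly what “I follow the mechanic’s indication” is not.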

Accountability is about outcomes, not compliance. The Dealership system optimises for “we followed procedure” rather than “we served the customer well.” High-performing teams optimise for delivering value, not for perfect adherence to process.

The AI Wake-Up Call

As AI capabilities advance, we obsess over whether machines will replace human jobs. This experience showed me that most reports on AI and jobs are asking the wrong question.

We’re already replacing human decision-making with algorithmic processes. We just call them SOPs and hide them in training manuals and approval workflows. The humans executing these processes are already expensive APIs—they intake requests, execute predefined logic, and return responses.

The organizations that will thrive with AI aren’t the ones with the best automation. They’re the ones that understand which decisions require human judgment and empower people to exercise it.

If your current “humans” can’t make decisions, replacing them with AI will just automate a broken system faster. 

How the Story Ended

I took my car to a friendly neighbourhood mechanic, who advised me on which parts to buy online and charged a reasonable rate for the repair. Then I paid £50 for a new MOT elsewhere. That was financially irrational given the Dealership’s £22 re-test offer, but by that point, they didn’t deserve my business.

The £400 I saved wasn’t the win! The win was finding a human who could make a decision.

The Most Generous Interpretation

I choose to believe that the Dealership’s SOPs resulted from successive decisions by well-intentioned system designers trying to improve the organisation’s bottom line. We all need to make ends meet. Systems optimised for efficiency often create unintended consequences.

But there’s an alternative interpretation that troubles me: what if someone designed these interactions while fully aware of cognitive biases (anchor effects, time preference, loss aversion, framing) and deliberately structured them to benefit the business at the customer’s expense?

I don’t know which is true. But I do know this: the impact is the same either way. Whether your SOPs inadvertently exploit customers or your team inadvertently blocks innovation, the result is a system working against the interests of the people it should serve.

When I work with CTOs and development teams, this is often the invisible constraint killing productivity: layers of approval, rigid processes, and decision-making authority that lives everywhere except where the work happens. Your team knows what needs to change. They can see the inefficiencies. They just can’t do anything about it because the system says no.

Building high-performing teams isn’t about better methodologies or more sophisticated tools. It’s about understanding where decisions should live and trusting humans to make them. That’s where evidence-based coaching makes the difference—identifying which guardrails protect you and which ones are just slowing you down.

If your teams are following processes perfectly but still struggling to deliver, the problem isn’t the people. It’s the system that’s removed their ability to think. Let’s explore how to put decision-making back where it belongs. Reach out today, and let’s build organisations where humans can actually make decisions.

