Steve Pritchard on Dispatch Math and Hidden Capacity
A practical breakdown of Steve Pritchard's viral post on AI dispatch, trapped capacity, and system fixes that beat hiring more techs.
Steve Pritchard recently shared something that caught my attention: "Stop blaming your field techs for slow growth. It's not a people problem. It's a system problem." That framing is refreshing because it challenges the default reaction in field service: when you hit a ceiling, you assume you need more trucks, more technicians, and more dispatchers.
Steve goes further with a real example from a Dallas plumbing company owner who realized, "I thought we needed more trucks. More techs. Turns out, we just needed them in the right spots." That one sentence captures a hard truth many operators do not want to hear: the bottleneck is often not labor effort, it is coordination.
In this post, I want to expand on Steve's point and make it actionable. Because if you run any field service operation (plumbing, HVAC, electrical, appliance repair, pest, garage doors), you have probably felt the same pressure: the phones are ringing, the calendar is full, your team is sprinting, and growth still feels slow.
The myth: slow growth means you need more headcount
Steve described the "classic advice" the owner heard: hire technicians #23 and #24, spend heavily on payroll and trucks, and expand dispatch. That advice is not always wrong, but it is often premature.
Adding capacity by hiring is expensive and slow:
- Recruiting and onboarding take time
- New techs create training load and quality risk
- More trucks and tools mean more capital, maintenance, and scheduling complexity
- Dispatch load increases, which can degrade decisions even further
In other words, you may buy more capacity on paper while losing efficiency in practice.
The reality: dispatch is an optimization problem under pressure
Steve's story highlights why blaming techs is so common. Dispatchers and managers see late arrivals, longer drive times, missed time windows, and uneven workloads. From the outside, it looks like execution problems.
But dispatch is really a live optimization problem with constraints that change minute by minute:
- Real time technician location
- Traffic patterns and time of day
- Skill match and licensing requirements
- Parts availability and job duration uncertainty
- Priority and SLA commitments
- Customer preferences and access constraints
Humans can do this well at small scale. But as Steve put it, with 22 technicians and dozens of jobs, dispatchers are "winging it" because the combinatorics overwhelm any manual approach.
"Nobody's lazy. Nobody's dumb. The math just beats them every time."
That line matters. It reframes dispatch errors as predictable outcomes of cognitive overload, not as personal failure.
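The "math beats them" claim is easy to make concrete. Even a crude lower bound on the assignment space, sketched below with illustrative numbers (the 22 techs are from the post; the 30 open jobs are my assumption), shows why no dispatcher can evaluate the options exhaustively:

```python
from math import log10

# Illustrative scale: 22 techs (from the post) and, say, 30 open jobs.
techs, jobs = 22, 30

# Crude lower bound: each job can go to any tech, ignoring routing
# order, time windows, and skills entirely.
assignments = techs ** jobs
print(f"~1e{int(log10(assignments))} possible job-to-tech mappings")
```

A real dispatch engine prunes this space with constraints and heuristics; the point is only that the raw search space dwarfs human working memory.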
What changed: a narrow AI agent with one job
The most interesting part of Steve's post is not "AI" as a buzzword. It is the scope discipline: the company built an AI dispatch agent with a single responsibility, generating the optimal assignment using location, traffic, skills, and job details.
Notice what they did not do:
- They did not rebuild the entire business
- They did not replace dispatchers
- They did not attempt full automation without oversight
Instead, they created a decision support layer that compresses a complex calculation into a recommendation a dispatcher can approve or override.
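The post does not describe the company's implementation, but a decision-support layer of this shape can be sketched in a few lines. Everything here (the `Tech` fields, the names, the 45-minute skill penalty) is an illustrative assumption, not Steve's system:

```python
from dataclasses import dataclass

@dataclass
class Tech:
    name: str
    eta_minutes: float    # traffic-aware ETA to the job site
    skills: set

def recommend(job_skills, techs, skill_penalty=45.0):
    """Rank techs for one job: ETA plus a heavy penalty per missing skill.

    The dispatcher sees the ranking and approves or overrides the top pick.
    """
    return sorted(techs, key=lambda t: t.eta_minutes
                  + skill_penalty * len(job_skills - t.skills))

roster = [
    Tech("Ana", 12, {"gas", "tankless"}),
    Tech("Bo", 5, {"drain"}),            # closest, but unqualified
    Tech("Cy", 20, {"gas", "tankless"}),
]
best = recommend({"tankless"}, roster)[0]
print(best.name)  # Ana: slightly farther, but skill-matched
```

Note the design choice: the function ranks rather than decides, which is exactly what keeps the dispatcher in the loop.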
Why that matters operationally
In many companies, dispatch becomes a throughput constraint. If each assignment takes 10 to 12 minutes (calling techs, checking maps, searching notes, guessing job length), then scaling jobs per day becomes impossible without scaling dispatch labor.
Steve reported a dramatic shift: "Each job now takes five seconds to assign, not twelve minutes." Even if your numbers vary, the principle is consistent: reduce decision friction, and you unlock capacity you already pay for.
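To see what that shift means in dispatcher time, run the arithmetic on a hypothetical 60-job day (the volume is my assumption; the 12-minute and 5-second figures are from the post):

```python
jobs_per_day = 60                          # hypothetical volume
manual_hours = jobs_per_day * 12 / 60      # 12 minutes per assignment
assisted_hours = jobs_per_day * 5 / 3600   # 5 seconds per assignment
print(manual_hours, round(assisted_hours, 2))  # 12.0 0.08
```

Twelve dispatcher-hours a day is more than one full-time role consumed by assignment alone; five minutes total is a rounding error.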
The results: capacity hiding in plain sight
Steve shared post-build outcomes eight weeks later:
- Each tech saved 28 minutes a day
- Daily jobs up 18%
- Fuel bills down $3,100 a month
- Bad assignments cut from 25% to 6%
Even if you discount these improvements, the direction is exactly what you would expect from better routing and skill matching. In field service, small time savings compound because the day is made of travel blocks and job blocks. If you reduce travel variance, you also reduce the number of days that collapse into overtime, missed appointments, and reschedules.
"That's like hiring four new techs. Without paying four new salaries, onboarding, or adding headaches."
The phrase "like hiring" is doing a lot of work here. It is not just about cost savings. It is about risk reduction. Hiring is a long-term commitment. Optimization is often a reversible, iterative improvement that can be measured week by week.
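The "four techs" figure is also consistent with the numbers above: an 18% jobs-per-day uplift across a 22-tech fleet is roughly four technicians' worth of output.

```python
techs = 22      # fleet size from the post
uplift = 0.18   # reported jobs-per-day increase
print(round(techs * uplift, 1))  # 4.0 -> "like hiring four techs"
```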
Where "trapped capacity" comes from (and how to spot it)
Steve argues most field service companies leave 10 to 20% of real capacity on the table. I agree, and it usually shows up in a few repeatable patterns:
1) Crossed routes and deadhead miles
Two techs pass each other on the highway because dispatch is assigning based on who answered fastest, who complained least, or who "usually" takes that job type.
2) Skill mismatch that forces slowdowns
A tech arrives but lacks a certification, a specialty tool, or the confidence for that specific equipment. The job takes longer, or you roll a second truck.
3) Overly conservative buffers
Dispatchers add extra padding to avoid being late. That sounds safe, but it reduces the number of daily slots and creates idle gaps that never get filled.
4) Decisions made with incomplete information
Steve called out the core issue: rushed decisions. If you do not have real time location, job duration history, and traffic-aware routing, you are guessing.
5) Dispatch throughput limits growth
If dispatch is the bottleneck, adding techs can actually make service worse because assignment quality drops as the queue grows.
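Pattern 1 in particular is mechanically detectable. A minimal sketch, assuming you have coordinates for each tech and their next job (straight-line distance stands in for real drive time):

```python
def dist(a, b):
    """Straight-line distance; a real system would use drive-time ETAs."""
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def routes_crossed(t1_pos, t1_job, t2_pos, t2_job):
    """True if swapping the two techs' next jobs shortens total travel."""
    current = dist(t1_pos, t1_job) + dist(t2_pos, t2_job)
    swapped = dist(t1_pos, t2_job) + dist(t2_pos, t1_job)
    return swapped < current

# Two techs sent past each other across town:
print(routes_crossed((0, 0), (10, 0), (10, 0), (0, 0)))  # True
```

Running this check over tomorrow's schedule each evening is a cheap way to count how often pattern 1 actually happens in your operation.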
A practical playbook before you hire tech #23
Steve's message is not "never hire." It is "squeeze the system before you spend on headcount." If you want a concrete way to act on that, here is a simple sequence that works even without building custom AI.
Step 1: Measure the hidden loss
Track, per tech per day:
- Drive time and miles
- On-site time
- Idle gaps between jobs
- Repeat visits and second truck rolls
- First time fix rate
You are looking for patterns, not perfection.
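Of the metrics above, idle gaps are the easiest place to start, because most scheduling exports already contain on-site timestamps. A minimal sketch with made-up visit times for one tech's day:

```python
from datetime import datetime

# Hypothetical job log for one tech, one day: (start, end) of each visit.
visits = [
    ("08:00", "09:10"),
    ("09:50", "11:00"),   # 40 min gap before this job
    ("13:00", "14:20"),   # 2 h gap: lunch plus idle time
]

def idle_minutes(visits):
    """Sum the gaps between consecutive on-site blocks, in minutes."""
    fmt = "%H:%M"
    times = [(datetime.strptime(s, fmt), datetime.strptime(e, fmt))
             for s, e in visits]
    gaps = (start - prev_end
            for (_, prev_end), (start, _) in zip(times, times[1:]))
    return sum(g.total_seconds() for g in gaps) / 60

print(idle_minutes(visits))  # 160.0
```

Drive time hides inside those gaps, so the next refinement is subtracting estimated travel from each gap to isolate true idle time.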
Step 2: Standardize job data
Better decisions require consistent inputs:
- Job type taxonomy (do not let everything become "service call")
- Required skills and parts per job type
- Target duration ranges based on history
- Priority rules (what truly overrides what)
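A job-type catalog covering all four inputs can be as simple as one frozen record per code. Every code, skill, part, and duration below is illustrative, not an industry standard:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class JobType:
    code: str
    required_skills: frozenset
    required_parts: tuple
    duration_min: int   # lower bound from job history, minutes
    duration_max: int   # upper bound from job history, minutes
    priority: int       # lower number = higher priority

CATALOG = {
    "WH-TANKLESS": JobType("WH-TANKLESS", frozenset({"gas", "tankless"}),
                           ("flush-kit",), 60, 120, 2),
    "DRAIN-MAIN":  JobType("DRAIN-MAIN", frozenset({"drain"}),
                           ("cable-3/4",), 45, 90, 1),
}
```

The point of the structure is that nothing can be booked as a bare "service call": every job code forces a skill set, a parts list, and a duration range into the record.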
Step 3: Upgrade routing logic
Even basic improvements help:
- Stop assigning by dispatcher gut feel alone
- Use traffic-aware ETAs
- Bias toward proximity plus skill match, not just proximity
- Prevent route crossing with simple rules (zones, time windows, clustering)
Step 4: Introduce decision support, not chaos
If you adopt AI, keep the scope tight like Steve described:
- One job: recommend the best assignment
- Human review: approve or override quickly
- Feedback loop: track overrides and outcomes
This approach protects service quality while improving speed.
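The feedback loop in the third bullet reduces to logging (recommendation, final choice) pairs and watching the override rate. The names and log entries here are made up:

```python
from collections import Counter

# Each entry: (AI-recommended tech, tech the dispatcher actually chose).
log = [
    ("Ana", "Ana"), ("Bo", "Cy"), ("Ana", "Ana"),
    ("Cy", "Cy"), ("Bo", "Ana"),
]

overrides = Counter(rec for rec, final in log if rec != final)
override_rate = sum(overrides.values()) / len(log)
print(f"override rate: {override_rate:.0%}")   # 40%
print(overrides.most_common(1))                # Bo gets overridden most
```

A falling override rate tells you the recommendations are earning trust; a tech who is overridden repeatedly usually points to stale skill or availability data rather than a bad dispatcher.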
The question Steve leaves us with
Steve ends with a challenge: "The question isn't 'Can we afford to optimize?' It's 'Can we afford not to?'" That is the right question because field service margins are often won or lost in travel time, schedule stability, and first time fix rate.
If 10 to 20% of your capacity is trapped in traffic, mediocre routing, and rushed dispatch decisions, then "growth" is not mainly a marketing problem or a hiring problem. It is an execution system problem. And system problems can be redesigned.
If you are feeling maxed out, take Steve's perspective seriously: before you add more moving parts, make the current machine smarter.
This blog post expands on a viral LinkedIn post by Steve Pritchard. View the original LinkedIn post →