The 20% pricing test every service business should run this quarter
A low-risk, high-leverage pricing experiment: raise prices by 20% and watch what happens — who leaves, who stays, and what it tells you about your positioning.
Why most service businesses are underpriced
Most service businesses in the UK are somewhere between 20% and 60% below the price they could realistically command. This is almost never because the founder set the price carefully and got it wrong. It’s because the price got set at a moment of low confidence — often years ago — and never got revisited.
The problem with unrevisited prices is that they compound in the wrong direction. The costs of delivery rise with inflation. The quality of your work usually rises with experience. But the headline number on your proposal sits frozen, and every year the margin gets thinner while the service gets better. This is how businesses end up busy and unprofitable at the same time.
The answer isn’t a careful spreadsheet exercise. It’s an experiment.
The 20% test
Raise your prices by 20% on the next five proposals you send out.
Not 5%. Not 10%. Twenty.
Then watch what happens. The result falls into one of three shapes, and each shape tells you something actionable.
Shape 1: Most clients accept, revenue rises proportionally
You were underpriced. The market was already willing to pay more; you were leaving money on the table. Roll the new pricing out to all new enquiries from this point and book the increase as margin. Most service businesses sit here and don’t realise it.
Shape 2: A few clients push back, some drop out, revenue roughly holds
You found the upper edge of your current positioning. The clients who left were the most price-sensitive — the ones who always took longest to pay, scope-crept the most, and treated your time as interchangeable with any cheaper alternative. Their departure is a gift.
Shape 3: Almost everyone drops out, revenue collapses
You raised past what your current positioning can support. Not a failure — just data. Either walk pricing back (no one is tracking you) or invest in the positioning work that would make the new price defensible (case studies, specialisation, a tightened offer). The 20% test is reversible; treat this outcome as market research, not catastrophe.
Most founders assume they’ll land in shape 3. Almost none do. The actual distribution is roughly 50% shape 1, 40% shape 2, 10% shape 3. Which means the expected value of running this experiment is strongly positive.
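That expected-value claim can be sanity-checked with back-of-envelope arithmetic. The 50/40/10 distribution comes from the article; the per-shape revenue multipliers below are illustrative assumptions (shape 1 captures the full 20%, shape 2 holds flat, shape 3 takes a temporary dip before you walk pricing back), not measured data:

```python
# Expected revenue multiplier of running the 20% test.
# Probabilities are the article's rough distribution; the revenue
# multipliers per shape are illustrative assumptions only.
shapes = {
    "shape_1_accepts": {"prob": 0.50, "multiplier": 1.20},  # most accept, full uplift
    "shape_2_holds":   {"prob": 0.40, "multiplier": 1.00},  # some drop out, revenue flat
    "shape_3_drops":   {"prob": 0.10, "multiplier": 0.70},  # assumed dip, then revert
}

expected = sum(s["prob"] * s["multiplier"] for s in shapes.values())
print(f"Expected revenue multiplier: {expected:.2f}")
# 0.5*1.20 + 0.4*1.00 + 0.1*0.70 = 1.07
```

Even with a pessimistic 30% revenue hit in shape 3, the expected outcome is a 7% revenue gain — which is why the test's expected value stays positive across a wide range of assumptions.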

Why 20% is the right increment
The size of the increase matters more than most founders realise.
Below 10%, the increase is too small to learn from. Clients notice, but the signal you’d need to extract — whether your positioning supports a meaningfully higher price — gets lost in normal quote variance. You can run it and conclude nothing.
Above 30%, the increase is large enough that you’re no longer testing your current positioning — you’re testing a new positioning, and the data becomes uninterpretable. Of course people drop out if you jump 50%. That doesn’t mean 20% wouldn’t have stuck.
Twenty percent is the sweet spot: big enough to force the market to respond, small enough that the responses are readable.
The uncomfortable conversation
There’s one difficult moment in running this test — the first time you quote the new price to a returning client or a warm referral who knows your old rate.
You have two options at this moment. One works. One doesn’t.
The option that doesn’t work: apologise. “We’ve had to raise our prices. I hope that’s okay. We could potentially do a one-off discount for you…” You just signalled that the new price isn’t real. Now you’re negotiating down from a number the client can tell you don’t believe in.
The option that works: state the price as the price. “Our rate for this engagement is £X.” Then stop talking. Let the silence exist. If they push back, you can say “I understand — that does reflect a change. Would you like me to walk through what it covers?” But you don’t retreat. You don’t pre-emptively discount.
Founders who fumble the 20% test almost always fumble it in this specific way — by signalling, in their own tone and body language, that the new price is somehow embarrassing. It isn’t. It’s the price. Act like it.

The clients who leave when you raise prices
Here’s what the data consistently shows about which clients leave when a service business increases prices by 20%.
The ones who leave are almost never your best clients. Your best clients — the ones who value the work, refer others, pay on time, and don’t scope-creep — almost never exit on a 20% rise. They’ve done the mental maths on what you produce and know it’s worth more than you charge.
The ones who leave are disproportionately the painful ones. The clients whose profitability, once you honestly count the hours, was near zero. The ones whose scope always seemed to expand. The ones who took four follow-ups to settle an invoice.
Which means the 20% test isn’t only a revenue experiment. It’s a client-portfolio cleanup. The departures it produces are usually net positive for the business, even when they feel painful in the moment.
What to measure
Four things, tracked against your last comparable period (last quarter, last six months — whichever gives you cleaner data).
- Close rate on new proposals at the new price. Don’t panic at a 10-15% drop — that’s expected and usually offset by the higher revenue per closed client.
- Average project value. Should rise in line with the price increase, possibly more if your positioning firms up and clients stop negotiating as hard.
- Time-to-pay. Often improves with the new price — buyers who’d have haggled tend to self-select out, leaving a cleaner cohort.
- Scope-creep events per project. Lower-price clients scope-creep more; higher-price clients scope-creep less. You should see this shift within a quarter.
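The close-rate tolerance in the first metric follows from simple arithmetic: revenue per proposal sent is close rate times price, so at a 20% higher price you break even on any close-rate drop smaller than 1 − 1/1.20 ≈ 16.7%. A minimal sketch (the 40% baseline close rate is a hypothetical example, not from the article):

```python
# Break-even check: at +20% price, revenue per proposal sent holds
# as long as the close rate falls by less than 1 - 1/1.20.
old_price = 1.00
new_price = old_price * 1.20

breakeven_drop = 1 - old_price / new_price
print(f"Break-even close-rate drop: {breakeven_drop:.1%}")  # 16.7%

# Example: a 15% close-rate drop (the top of the expected range)
# still nets positive revenue per proposal.
old_close_rate = 0.40                          # hypothetical baseline
new_close_rate = old_close_rate * (1 - 0.15)   # 15% relative drop
revenue_change = (new_close_rate * new_price) / (old_close_rate * old_price) - 1
print(f"Revenue change per proposal: {revenue_change:+.1%}")  # +2.0%
```

This is why a 10–15% drop in close rate is not a failure signal: only a drop past roughly 17% means the price rise is costing you revenue per proposal.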
When not to run the test
The 20% test isn’t right in every situation.
- If you’re in the middle of a fixed-term contract renewal cycle, wait until the next round. Raising mid-cycle breaks trust.
- If your brand is actively built around being the affordable option, a 20% jump contradicts your positioning and will confuse your market. In this case the test is different — it’s “am I sure the affordable positioning is still the right strategy?”
- If you haven’t delivered a visibly improved result in the last year, the price rise is harder to defend. Do the work to produce at least one strong case study first, then run the test.
Outside those three situations, most service businesses can run the 20% test within the next month.
Where this fits in a bigger pricing system
The 20% test is the primitive — the simplest way to surface information about your pricing power. The full system includes the anchoring structures, the tiered-offer scaffolding, the psychological pricing mechanics, and the value-stacking framework that lets you charge more without resistance. That system is the operational content of the Pricing Power Playbook.
But running the test doesn’t require the full system. It requires quoting five proposals at a 20% higher number, writing down the results, and acting on what you learn.
The mindset the test rewards
Founders who run this test consistently end up with higher margins and better clients over time. Not because they have some secret — because they treat pricing as an ongoing experiment, not a one-time decision made years ago.
The willingness to find out what your market will actually pay is worth more, compounded over a career, than almost any other single skill in running a service business.
Frequently asked questions
Should I notify existing clients before running the test?
The test applies to new proposals, not mid-contract clients. For active engagements, honour the rate you agreed. When it's time to renew or quote new work, quote the new price without special explanation. Treat the new rate as simply your current rate.
What if the first two proposals get rejected?
Don't panic-adjust after two data points. Pricing experiments need at least five proposals to produce a readable result — below that, you're reading noise. Run the full five, then look at the pattern.
Can this work for fixed-fee SaaS or productised services?
Yes — but with modifications. For SaaS, test on new signups only (don't reprice existing subscribers mid-term). For productised services, A/B test the landing page showing the new price against the old price for a fortnight. The principle — meaningful increment, measured result — is the same.
What if I’m genuinely afraid to charge 20% more?
That fear is diagnostic. It usually means your positioning hasn't caught up with the quality of work you're doing. The cure is case studies: write up three recent wins in enough detail that a prospect can see the specific value. Once you can point to evidence, the price feels defensible — to you and to the client.
How often should I run this test?
Once a quarter for the first year, then annually once the new baseline settles. Pricing drift is real — inflation, positioning shifts, competitive movement all push on what you can charge. Treating a price rise as a once-in-five-years event is how businesses end up chronically under-priced.
Go deeper
The Pricing Power Playbook
Build a pricing strategy that maximises revenue, signals premium positioning, and gives you the confidence to charge what your work is actually worth.
See the full course →