
When More Leads Quietly Kill Your B2B Revenue

12 min read · Feb 25, 2026
[Illustration: B2B sales funnel spilling many leads into few deals, with an MQL threshold slider and a worried manager.]

I hear the same argument in almost every sales performance review: do we push for more leads, or hold the line and only pass the best ones to sales? That question sits behind most debates about lowering the lead score threshold to send more MQLs.

In a B2B service business where a single deal can be worth six figures over a few years, the sales pipeline quantity vs quality trade-off is not theoretical. It shapes revenue, CAC, forecast reliability, and how sustainable the work feels for the team.

I’ll keep this practical and numeric: when is it actually safe to push volume, and when will “more MQLs” quietly drag down win rate and the forecast?

Sales Pipeline Quantity vs Quality: Which Comes First?

When I decide whether to prioritize more leads or better leads, I look at two inputs first: win rate (a proxy for quality and execution) and rep capacity (a proxy for whether the team can respond and run clean cycles).

Here’s the simplest way I frame it: if win rates are strong and reps have room, I can test more volume without immediately breaking the system. If win rates are falling, deals are stalling, or reps are overloaded, I tighten quality first, even if the pipeline coverage number looks comforting.

A basic 2×2 view makes the trade-offs visible:

| | Win rate high | Win rate low |
| --- | --- | --- |
| Capacity high | Add more leads, keep score rules tight | Improve quality first, then test volume |
| Capacity low | Protect quality and routing | Stop volume pushes, rebuild qualification |

Pipeline coverage rules (like “3× coverage”) can be useful, but only when the coverage reflects real qualified demand. In many teams, once coverage is enforced without context, behavior changes: weak deals get added, ICP boundaries loosen, and CRM stages start carrying noise. The pipeline looks big, the forecast looks “safe,” and then the quarter closes with a miss that felt predictable in hindsight.

So yes, I want both quantity and quality, but I treat them as sequenced. I protect quality first, then tune volume to real capacity rather than a magic coverage ratio.

Lead Quality To Win Rate

The connection between lead quality and win rate is obvious in day-to-day work, but it helps to name the mechanics clearly.

When I say MQL, I mean a marketing qualified lead that has crossed a scoring threshold based on fit and behavior. An SQL is a lead sales has spoken with and accepted as worth pursuing. An opportunity is an SQL that has entered a structured sales cycle with a defined problem and agreed next steps.

When I evaluate “lead quality,” I’m usually looking at two components:

  • Fit: whether the account and contact align with ICP (company size, industry, role, use case, and any constraints that matter).
  • Intent: whether behavior signals buying motion (for example, a demo request or repeated high-intent page visits).

From there, I use a simple chain to keep myself honest:

Leads → SQL rate → Opportunity rate → Win rate → Average deal size

A small decline at each conversion step can overwhelm a big gain in top-of-funnel volume.

[Figure: standard S-curve showing the relationship between lead quality and win rate.]
As lead quality rises, win rate often follows a non-linear curve - small quality gains can produce outsized win-rate lift.

To show how quickly this stacks, here’s an illustrative model for the same B2B service team.

Baseline (higher quality): 200 MQLs in a quarter; 40% become SQLs; 60% of SQLs become opportunities; 30% win rate; $20,000 average deal size.
That yields: 200 × 0.40 × 0.60 × 0.30 × 20,000 = $288,000 in new bookings.

Now imagine I lower the lead score threshold. Volume rises, but conversion rates slide.

Lower threshold (more MQLs, weaker quality): 320 MQLs; 30% SQL rate; 50% opportunity rate; 22% win rate; same $20,000 average deal size.
That yields: 320 × 0.30 × 0.50 × 0.22 × 20,000 = $211,200.

That’s 60% more MQLs and less revenue, purely because quality eroded and every downstream rate moved against you.
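The two worked examples above reduce to one multiplication chain, which is easy to sanity-check in a few lines. This is a minimal sketch of that arithmetic (the function name and structure are mine, not from any particular tool):

```python
def bookings(mqls, sql_rate, opp_rate, win_rate, avg_deal):
    """Expected new bookings from one quarter's MQL cohort:
    MQLs -> SQLs -> opportunities -> wins -> revenue."""
    return mqls * sql_rate * opp_rate * win_rate * avg_deal

# Baseline: higher-quality MQLs
base = bookings(200, 0.40, 0.60, 0.30, 20_000)   # ~ $288,000

# Lowered threshold: 60% more MQLs, every downstream rate softens
loose = bookings(320, 0.30, 0.50, 0.22, 20_000)  # ~ $211,200
```

Running both lines makes the point concrete: the larger cohort produces less revenue because each small rate decline compounds multiplicatively.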

I sometimes summarize this effect by lead tier to make it easier to discuss internally:

| Lead tier | SQL rate | Opportunity rate | Win rate | CAC payback (months) |
| --- | --- | --- | --- | --- |
| Tier 1: ICP + high intent | 50% | 70% | 35% | 10 |
| Tier 2: ICP + mid intent | 35% | 55% | 25% | 14 |
| Tier 3: loose fit or low intent | 15% | 30% | 10% | 24+ |

Once Tier 3 leads become a big share of what sales receives, the damage isn’t only their low conversion. They also absorb attention that should have gone to Tier 1 follow-up, which is where win rate is most sensitive to speed and focus.

Before I approve any volume push or scoring change, I ask a blunt question: if win rate drops a few points, do the extra MQLs still pay off? If the answer is “only if everything else stays perfect,” it’s not a real plan.
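The blunt question above can be answered with one division: given the new, larger MQL mix, what win rate must hold just to match the old bookings number? A hedged sketch, reusing the baseline $288,000 from the worked example (function name is mine):

```python
def breakeven_win_rate(base_bookings, mqls, sql_rate, opp_rate, avg_deal):
    """Win rate the new lead mix must sustain just to match baseline bookings."""
    return base_bookings / (mqls * sql_rate * opp_rate * avg_deal)

# 320 MQLs at the softened 30% SQL and 50% opportunity rates
wr = breakeven_win_rate(288_000, 320, 0.30, 0.50, 20_000)  # 0.30
```

Here the looser mix must keep a 30% win rate just to break even, while the example observes 22%. If a plan only works when win rate does not move at all, it is not a real plan.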

Sales Effort To Capacity

The spreadsheet view of volume often ignores the human bottleneck: reps have finite time for meetings, discovery, follow-up, and proposals.

In a B2B service motion, I typically assume an AE has a limited block of hours each week that can be spent in real selling conversations, and that each serious opportunity consumes meaningful time across discovery, internal prep, proposal work, and follow-ups. Under those constraints, “more leads” can quickly become more first calls that don’t convert, which is one of the fastest ways to slow the whole engine down.

When low-quality leads flood the funnel, I usually see the same pattern: calendars fill, response times slip, discovery becomes rushed, follow-up gets inconsistent, and sales cycles stretch. The most costly part is that high-intent leads - the ones that could have closed - wait longer for a thoughtful response.

So when I think about capacity, I don’t reduce it to “leads per rep.” I treat it as the hours required to move qualified opportunities from first contact to closed-won, without sacrificing speed on the deals that matter.

Lead Score Threshold

Should you lower the inbound lead score threshold to send more MQLs to AEs? Sometimes. But I only touch the threshold after I’m clear on what it represents today.

In most scoring models, a lead accumulates points based on fit (company size, industry, title), behavior (demo requests, pricing-page activity, webinar signups, content downloads), and sometimes source tags (paid, organic, partner, outbound). At a set score, the lead flips from “nurture” to “MQL” and routes to sales.

When you lower that threshold, a few things predictably happen: more leads qualify as MQLs, average intent declines, reps juggle more outreach and meetings, and MQL→SQL conversion often drops. If that continues, trust erodes - sales starts cherry-picking, and the marketing-to-sales handoff becomes political instead of operational.

Before I change the threshold, I want the basics to be true: routing rules are unambiguous, follow-up expectations are defined and realistically met, disqualification reasons are consistently logged, and there is a real nurture path for mid-score leads instead of a one-off touch. I also want marketing and sales looking at the same funnel definitions on a regular cadence. If you want a solid cross-functional lens on this, Insight Partners’ piece on funnel health is a useful companion read.

If those foundations are missing, lowering the lead score threshold tends to create noise rather than learning.

MQL Volume

There are real situations where sending more MQLs to sales is the right move. I’m most comfortable doing it when I’ve added sales capacity and current volume doesn’t fill it, when I’m entering a new segment and need data on how it converts, or when I’m validating a new channel that shows promise but hasn’t produced enough sample size to trust yet.

Even then, I use guardrails that protect response speed and signal quality. I’ll cap how much each rep receives in a week so follow-up stays fast, define a response-time SLA that the team can actually hit, and set a minimum acceptable MQL→SQL conversion rate. If conversion falls below that floor, I pause the volume push and inspect what changed.

One practical compromise that preserves win rate while still increasing learning is stream-splitting: I route high-score, high-intent leads directly to sales for immediate follow-up, while keeping mid-score leads in nurture until behavior shows stronger intent. That way, I don’t turn the AE calendar into a qualification filter for leads that were never ready.
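The stream-splitting and guardrail ideas above can be expressed as a tiny routing rule. This is only an illustrative sketch; the thresholds, the weekly cap, and the function name are placeholder assumptions, not benchmarks:

```python
def route_lead(score, weekly_assigned, rep_cap=25,
               sales_threshold=80, nurture_floor=40):
    """Guardrailed routing: high-score leads go to sales only while the rep
    has weekly headroom (protecting response speed); mid-score leads stay
    in nurture until behavior shows stronger intent; the rest drop out.
    All numeric thresholds are illustrative placeholders."""
    if score >= sales_threshold and weekly_assigned < rep_cap:
        return "sales"
    if score >= nurture_floor:
        return "nurture"
    return "disqualify"
```

The useful property of encoding it this way is that the cap binds before the calendar does: once a rep hits the weekly limit, even high-score leads wait in nurture rather than diluting follow-up speed.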

If webinars are one of your “mid-intent” sources, it helps to define exactly what counts as sales-ready beyond registrations. (Related: Webinars That Create SQLs: A Blueprint Beyond Registrations.)

Example Scenarios

To ground the trade-offs, here are three simplified numeric cases for a B2B service company. These are illustrative numbers to show how the conversion math compounds.

Assumptions: average deal size is $25,000; sales cycle is ~60 days; three AEs; marketing cost per MQL is $150.

Scenario A: Keep Threshold, Protect Quality

If I run 300 MQLs per quarter with a 40% MQL→SQL rate, 60% SQL→opportunity rate, and a 30% win rate, the funnel produces 120 SQLs, 72 opportunities, and about 22 wins. At $25,000 per deal, that’s $550,000 in revenue.

Marketing spend is 300 × $150 = $45,000, which works out to roughly $2,045 in marketing spend per closed deal (just on that MQL cost).

Scenario B: Lower Threshold, Push Volume

If I lower the threshold and jump to 450 MQLs, but conversion softens to 30% MQL→SQL, 55% SQL→opportunity, and 24% win rate, I get 135 SQLs, about 74 opportunities, and about 18 wins. Revenue becomes $450,000.

Marketing spend rises to 450 × $150 = $67,500, or about $3,750 in marketing spend per closed deal. I spend more and book less because the conversion deterioration outweighs the volume gain.

Scenario C: Improve Scoring, Add Nurture

If I refine scoring and nurture instead of dropping the threshold aggressively, I might land at 360 MQLs with better conversion: 42% MQL→SQL, 62% SQL→opportunity, and a 30% win rate. That yields about 151 SQLs, 94 opportunities, and about 28 wins - roughly $700,000 in revenue.

Marketing spend is 360 × $150 = $54,000. Compared with the volume-first move, this scenario pays because the funnel stays healthy while volume grows.

This is why I treat “quantity vs quality” as a math problem, not a philosophy debate. A small slide in conversion rates can erase the upside of sending extra MQLs to sales.
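The three scenarios above can be reproduced with one helper, rounding to whole deals the way the prose does. A minimal sketch under the stated assumptions ($25,000 deals, $150 per MQL; the function name is mine):

```python
def scenario(mqls, sql_rate, opp_rate, win_rate,
             avg_deal=25_000, cost_per_mql=150):
    """Return (wins, revenue, marketing spend) for one quarter's funnel."""
    wins = round(mqls * sql_rate * opp_rate * win_rate)  # whole deals
    return wins, wins * avg_deal, mqls * cost_per_mql

a = scenario(300, 0.40, 0.60, 0.30)  # protect quality
b = scenario(450, 0.30, 0.55, 0.24)  # push volume
c = scenario(360, 0.42, 0.62, 0.30)  # improve scoring + nurture
```

Printing the three tuples recovers the article's numbers: Scenario B spends the most per closed deal and books the least, while Scenario C grows volume without paying the conversion penalty.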

Growth Acceleration Framework

To keep this from becoming a recurring argument, I treat it as a quarterly decision loop.

First, I diagnose the funnel with three questions: (1) are AEs and SDRs operating at healthy capacity (meetings, open opportunities, response times)? (2) where does conversion drop most (MQL→SQL, SQL→opportunity, or opportunity→closed-won)? (3) which sources deliver the best mix of SQL rate, win rate, and payback behavior?

Second, I pick one primary lever for the period. If capacity is available and quality is stable, I test more volume from the strongest sources. If win rate or conversion is weak, I focus on qualification, ICP alignment, and sales process clarity before I add more leads. If the source mix is skewed toward low-intent channels, I shift effort toward sources that historically produce better downstream performance.

Third, I sequence improvements so learning isn’t polluted. I want tracking and definitions to be reliable before I interpret outcomes; I want scoring rules tied to real conversion behavior rather than guesswork; I want routing and response expectations to protect high-intent leads; and I want nurture to do its job for good-fit prospects who simply aren’t ready yet.

[Figure: S-curve of lead quality vs win rate with ladder markers.]
Not all scoring improvements create linear gains - you often “climb” from one win-rate band to the next.

Always Test Before “Going Live”

Any change to lead scoring thresholds or routing works best as a test, not a permanent flip of the switch.

When I design a test, I focus on three elements:

  • A true comparison: a control group that keeps the old rules and a test group that receives the new rules (by territory, segment, or rep assignment).
  • A defined window: usually a short early read (2-4 weeks) for leading indicators, followed by a longer view that spans at least one full sales cycle if early signals are positive.
  • Clear success metrics: MQL→SQL rate, SQL→opportunity rate, win rate, revenue per lead, sales cycle length, and indicators of capacity strain (especially response time).

I also set stop conditions in advance. If high-intent win rate materially drops, if response time degrades enough to jeopardize fast follow-up, or if AE feedback signals a clear quality collapse, I pause and review rather than pushing through.
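Setting stop conditions in advance means they can be written down as a mechanical check rather than argued about mid-test. A hedged sketch of that idea; the tolerances (5-point win-rate drop, 4-hour response SLA) are illustrative placeholders, not recommendations:

```python
def should_pause(win_rate, baseline_win_rate, median_response_hrs,
                 sla_hrs=4, max_win_rate_drop=0.05):
    """Pre-agreed stop conditions for a threshold test: pause when
    high-intent win rate drops materially versus baseline, or when
    median response time blows through the follow-up SLA.
    Thresholds are illustrative, agreed before the test starts."""
    return (baseline_win_rate - win_rate > max_win_rate_drop
            or median_response_hrs > sla_hrs)
```

The point is not the code itself but the discipline: if the pause rule is agreed and unambiguous before launch, "push through one more week" stops being a negotiation.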

The goal is to turn opinions about quantity vs quality into small, low-risk experiments that produce clean learning. For a broader set of GTM decision loops in this style, see the SaaS Growth Acceleration Framework.

SEO Lead Generation

Lead source is often the missing piece in this debate. The quantity vs quality trade-off looks very different when a larger share of demand comes from SEO, because organic traffic can range from high-intent buyers to early-stage researchers.

In B2B services, I generally expect higher intent from searches that include strong qualifiers (industry, specific service need, pricing, outsourcing language, or explicit vendor-type terms). Mid-intent queries often signal evaluation and comparison. Top-of-funnel queries tend to be definitions and broad education. None of these are “bad” - they just carry different expected conversion profiles and should be handled differently in scoring and nurture.

If you want a clean way to operationalize that, use a simple intent model and tie it directly to scoring and nurture tracks. (Related: Search intent taxonomy for B2B: a practical model beyond TOFU, MOFU, BOFU.)

The practical implication is that I measure SEO on quality, not traffic. I look at how organic sessions translate into form fills or requests, how SEO leads convert from MQL to SQL relative to other sources, and how win rate and payback behavior compare once those leads reach opportunity. When setting targets, it also helps to separate expectations for brand vs non-brand demand (see Brand vs Non-Brand in B2B Search: How to Set Targets That Don’t Lie).

Often, bottom-of-funnel pages produce fewer total leads than broad guides, but materially better SQL rates and win rates. Pages built for vendor evaluation tend to do especially well here (see From Website to Shortlist: Designing Pages for Vendor Evaluation). When SEO is aligned to ICP and buying stage, it becomes easier to manage the quantity vs quality tension because volume growth is less likely to come with a hidden conversion penalty. A revenue-first internal linking model can help concentrate that intent where it converts (see B2B SEO Internal Linking: A Revenue-First Model for Service Sites).

If you want a follow-on perspective on fixing an unhealthy pipeline at the rep and team level, this post is a useful complement to the qualification and capacity lens above.

Andrii Daniv
Andrii Daniv is the founder and owner of Etavrian, a performance-driven agency specializing in PPC and SEO services for B2B and e‑commerce businesses.