
What Your 'Junk' SaaS Leads Are Really Telling You

11 min read · Dec 28, 2025

You wake up to dashboards full of new trials, demo requests, and newsletter signups. Marketing points to record inbound volume. Yet the pipeline barely moves, CAC creeps up, and reps complain that “these leads are junk.”

When I see that pattern, I rarely assume there’s a top-of-funnel problem. I assume there’s a SaaS lead scoring problem.

Done well, lead scoring turns a noisy pile of signups into a clear order of operations: who deserves a human follow-up now, who should be nurtured, and who should be disqualified. The value isn’t the math - it’s that the scoring rules tie directly to outcomes like pipeline created, win rate, and payback.

SaaS lead scoring in the AI era

At its core, SaaS lead scoring means assigning points (or a probability) to each lead or account based on two things: fit (how closely they match the ideal customer profile) and intent (how strongly they behave like a buyer). High scores mean “work this now.” Low scores mean “nurture” or “not a priority.”

I treat scoring as a decision system, not a reporting metric. If the score does not change what sales does today - routing, sequencing, SLAs, and prioritization - then it’s not really a scoring model. It’s just a number in a CRM field.

Lead scoring also works best when it’s explicit about what it cannot solve. It won’t fix weak positioning, a broken pricing model, or a poor handoff. What it does do is stop skilled reps from spending prime hours chasing low-value signups while high-intent accounts sit in a generic queue.

Why lead scoring matters in B2B SaaS

In most B2B SaaS funnels, only a minority of inbound leads are truly sales-ready at the moment they appear. That gap creates predictable friction: marketing celebrates volume, sales experiences drag, and leadership responds by pushing more spend - often making the noise worse.

Without a clear SaaS lead scoring model, I typically see the same failure modes repeat: reps cherry-pick based on gut feel, good leads go stale because they “look small” at first glance, and the organization ends up debating definitions instead of improving conversion.

Scoring becomes even more critical because SaaS buying journeys create “false positives” and “false negatives” constantly. Trials that never activate look like demand. Buying committees move slowly and touch many assets before they’re ready. Product-led motions create lots of early users who may or may not have budget. A shared scoring model gives everyone one language for “hot,” “warm,” and “not now,” so follow-up is consistent and measurable - and it supports the kind of Sales and marketing SLA that makes follow-up happen.

Traditional vs modern SaaS lead scoring

Classic B2B lead scoring was designed for simpler funnels: one form fill, one buyer, one purchase. Modern B2B SaaS lead scoring has to handle recurring revenue, multiple stakeholders, and product usage signals that often predict revenue better than email clicks.

The terminology also expands:

  • MQL (marketing qualified lead): fits the ICP and shows early engagement.
  • SQL (sales qualified lead): ready for a direct sales conversation, based on stronger intent and/or discovery.
  • PQL (product qualified lead): shows value in-product (especially in PLG), often after hitting usage milestones.
  • Account-level scoring: scoring the company, not just a single contact, because buying is collaborative.

Here’s the difference I keep in mind when designing the model:

| Aspect | Traditional lead scoring | Modern SaaS lead scoring |
| --- | --- | --- |
| Data used | Firmographics + email engagement | Firmographics + role + web behavior + product usage + (optional) intent |
| Sales motion | Single decision maker, one-time deal | Committees, trials, renewals, expansions |
| Strengths | Easy to set up | Aligns better with real buying signals |
| Main limitation | Misses product + account context | Requires cleaner data and cross-team alignment |

One of the most common mistakes I see is copying a generic scoring setup from a CRM and assuming it fits. Scoring has to match the reality of the business: ICP, ACV, sales capacity, and motion (sales-led, PLG, or hybrid). If you need a tighter definition of ICP inputs, start with b2b saas keyword-to-icp alignment and work backward from closed-won.

A simple lead scoring model I use as a starting point

I don’t need a data science team to get a useful first version. I need clarity, a small set of signals, and a way to validate against outcomes.

Here’s the sequence I follow:

  1. Define ICP and the buying committee (based on wins)
    I start with closed-won deals, not opinions. I document target industries, company-size ranges, regions, and deal profiles I actually want more of. Then I map who tends to show up: user, champion, economic buyer, and common blockers.
  2. List signals that correlate with revenue (not clicks)
    I look for patterns across recent wins and losses: which titles show up, what firmographic ranges close fastest, which pages appear before a sales conversation, and which product actions predict conversion or expansion.
  3. Assign points and set clear thresholds
    I use positive points for fit/intent and negative points for red flags (like personal email domains, tiny orgs if that’s outside ICP, or clear disqualifiers). Then I set bands that map to action - routing to sales, nurture, or deprioritize.

A small example scoring table might look like this:

| Attribute or behavior | Score |
| --- | --- |
| Target industry | +15 |
| Company size 50-250 employees | +10 |
| Director/VP title in target function | +10 |
| Visited pricing page twice in 7 days | +20 |
| Started free trial | +25 |
| Invited 3+ teammates | +15 |
| Personal email domain | -15 |
| Company size under 5 employees | -20 |

In practice, I care less about “perfect” weights and more about consistency. If the rules are clear, I can test them quickly against reality and iterate. If you want a spreadsheet-ready starting point, download our free lead scoring template now.
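To show what “clear rules” can look like in practice, here is a minimal sketch that encodes the example table above as a rule-based scorer with action bands. The field names, the personal-domain list, and the band thresholds are illustrative assumptions, not a real CRM schema.

```python
# Minimal rule-based scorer encoding the example table above.
# Field names, the personal-domain list, and thresholds are illustrative assumptions.

TARGET_INDUSTRIES = {"fintech", "logistics"}      # stand-ins for your ICP industries
TARGET_TITLE_KEYWORDS = {"director", "vp"}        # seniority keywords in target functions
PERSONAL_DOMAINS = {"gmail.com", "yahoo.com", "outlook.com"}

def score_lead(lead: dict) -> int:
    score = 0
    if lead.get("industry") in TARGET_INDUSTRIES:
        score += 15
    if 50 <= lead.get("employees", 0) <= 250:
        score += 10
    if any(k in lead.get("title", "").lower() for k in TARGET_TITLE_KEYWORDS):
        score += 10
    if lead.get("pricing_visits_7d", 0) >= 2:
        score += 20
    if lead.get("trial_started"):
        score += 25
    if lead.get("teammates_invited", 0) >= 3:
        score += 15
    if lead.get("email", "").split("@")[-1].lower() in PERSONAL_DOMAINS:
        score -= 15
    if 0 < lead.get("employees", 0) < 5:
        score -= 20
    return score

def band(score: int) -> str:
    # Arbitrary starting thresholds; tune them against closed-won and closed-lost data.
    if score >= 50:
        return "route to sales"
    if score >= 25:
        return "nurture"
    return "deprioritize"
```

The exact numbers matter less than the fact that anyone on the team can read this and predict how a given lead will be treated.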

What a practical scoring template looks like (with examples)

When I build the first version, I usually separate scoring into four buckets and roll them into one total: ICP fit, engagement behavior, product usage (if applicable), and negative signals. That structure keeps the model readable and makes it easier to debug when sales disagrees with the output.

To make the impact tangible, I like pressure-testing the model with two extremes.

Example 1: strong fit + strong intent
150-employee company in the target region. A VP in the target function requests a demo. The account returns to pricing multiple times and invites teammates during a trial. This type of lead should naturally rise to the top because the model rewards both fit and coordinated intent.

Example 2: low fit + weak intent
A solo consultant signs up with a personal email address, downloads one early-stage asset, and shows no product activation or pricing interest. This should fall into nurture or disqualification, without consuming rep time.

If I can’t explain, in plain language, why the first example scores high and the second scores low, I don’t consider the model ready to implement.
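To make that explanation concrete, the two examples can be run through the rule-based sketch from earlier. Every field value below is invented to match the narrative; it only demonstrates that the model separates the extremes for explainable reasons.

```python
# Hypothetical leads mirroring the two examples; all values are invented for illustration.
# Uses score_lead() and band() from the sketch above.
strong_fit = {
    "industry": "fintech", "employees": 150, "title": "VP Operations",
    "email": "vp@target-co.example", "pricing_visits_7d": 3,
    "trial_started": True, "teammates_invited": 4,
}
low_fit = {
    "industry": "consulting", "employees": 1, "title": "Independent consultant",
    "email": "someone@gmail.com", "pricing_visits_7d": 0,
    "trial_started": False, "teammates_invited": 0,
}

for label, lead in [("strong fit + strong intent", strong_fit),
                    ("low fit + weak intent", low_fit)]:
    s = score_lead(lead)
    print(f"{label}: score={s}, action={band(s)}")
# Expected: the first scores well above the sales threshold; the second lands below zero.
```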

The lead scoring attributes that usually matter most

I can track hundreds of signals, but I’ve learned to start with a small set that tends to matter across many B2B SaaS motions. I aim for roughly 8-12 high-signal inputs and expand only when I can prove incremental value.

Here’s a compact view of the categories I prioritize and what I typically measure:

| Category | What I track | Why it matters |
| --- | --- | --- |
| Firmographics | Industry, company size, region, funding stage, basic tech-stack fit | Predicts fit, ACV potential, and supportability |
| Role & seniority | Function, decision authority, proximity to budget | Predicts buying power and deal velocity |
| Web intent | Pricing/comparison visits, integrations page, case studies, repeat sessions | Captures active evaluation behavior |
| Product usage (PLG/trials) | Onboarding completion, “aha” event, teammate invites, sustained activity | Often the strongest predictor of conversion |
| Negative signals | Personal email domains, off-ICP segments, inactivity, spam complaints | Saves rep time and reduces false positives |

I also keep a strict rule: if a signal doesn’t help me predict pipeline or revenue within a quarter or two, I remove it. A smaller model that the team trusts will beat a complicated model that nobody follows. If you’re formalizing intent signals, an explicit taxonomy helps - see b2b search intent taxonomy.
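One rough way to enforce that rule is a lift check: compare conversion rates for leads with and without a given signal over a recent cohort. The sketch below assumes a list of lead dicts exported from the CRM with a boolean converted flag; the field names and that flag are assumptions, not a specific tool’s schema.

```python
from collections import defaultdict

def signal_lift(leads: list[dict], signal_field: str) -> float | None:
    """Rough lift check: conversion rate with the signal present vs. absent.
    Assumes each lead dict carries a boolean `converted` flag plus the signal field."""
    tally = defaultdict(lambda: [0, 0])          # present? -> [conversions, total]
    for lead in leads:
        present = bool(lead.get(signal_field))
        tally[present][0] += 1 if lead.get("converted") else 0
        tally[present][1] += 1
    if not tally[True][1] or not tally[False][1]:
        return None                              # not enough data on one side
    with_rate = tally[True][0] / tally[True][1]
    without_rate = tally[False][0] / tally[False][1]
    if without_rate == 0:
        return None                              # avoid dividing by zero
    return with_rate / without_rate              # e.g. 2.0 = signal doubles conversion rate

# Example (hypothetical cohort): if signal_lift(last_quarter_leads, "pricing_visits_7d")
# hovers around 1.0, that signal is not earning its place in the model.
```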

Rule-based scoring vs AI (predictive) scoring

Rule-based scoring is transparent: “VP Operations at 200-500 employees” gets a defined score, “pricing page return visits” add points, and obvious red flags subtract points. I like it because it’s explainable, quick to implement, and easy to tune when strategy changes.

Predictive (AI) lead scoring estimates the likelihood to close using historical CRM and behavioral data. In the best cases, it finds patterns I wouldn’t think to encode manually - like a specific product feature usage sequence that predicts purchase better than pricing-page traffic. A deeper walk-through of how predictive models typically work (and when they fail) is worth reading before you buy a tool.

Screenshot: Factors, an AI-powered account intelligence platform. AI-based scoring can add signal, but only if data quality and explainability are strong enough for sales to trust.

In practice, I see predictive scoring work best when three conditions are true: there’s enough historical volume to learn from, the ICP hasn’t been changing every quarter, and tracking across CRM, website, and product is reliable. The main failure modes are also predictable: messy data creates noisy predictions, “black box” scores reduce rep trust, and past patterns can pull the business back toward yesterday’s ICP when leadership is trying to move upmarket.

The most reliable approach I’ve seen is hybrid: I keep a clear rule-based baseline, test predictive scoring as an additional input, and make sure the model can surface “why” (top contributing signals) so sales can sanity-check it.
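As a sketch of that hybrid idea (not any vendor’s implementation), a plain logistic regression over the same features as the rule-based model can act as the predictive layer, with per-lead contributions surfaced as the “why.” It assumes pandas and scikit-learn plus a historical table with a 0/1 won label; all column names are illustrative.

```python
# Hybrid sketch: keep the rule-based score, add a predictive probability with visible drivers.
# Assumes pandas + scikit-learn and a `history` DataFrame with numeric feature columns
# and a 0/1 `won` label; all column names are illustrative.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

FEATURES = ["employees", "pricing_visits_7d", "trial_started", "teammates_invited"]

def fit_predictive(history: pd.DataFrame):
    model = make_pipeline(StandardScaler(), LogisticRegression())
    model.fit(history[FEATURES], history["won"])
    return model

def explain(model, lead_row: pd.Series) -> list[tuple[str, float]]:
    """Top contributing signals for one lead: standardized feature value x coefficient."""
    scaler = model.named_steps["standardscaler"]
    logreg = model.named_steps["logisticregression"]
    z = (lead_row[FEATURES].to_numpy(dtype=float) - scaler.mean_) / scaler.scale_
    contributions = z * logreg.coef_[0]
    return sorted(zip(FEATURES, contributions), key=lambda kv: abs(kv[1]), reverse=True)

# Usage sketch: probability = model.predict_proba(lead_df[FEATURES])[:, 1]
# and explain(model, lead_df.iloc[0]) gives the "why" reps can sanity-check.
```

Keeping the rule-based baseline alongside this output also makes it obvious when the historical model starts pulling the pipeline back toward yesterday’s ICP.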

Where lead scoring usually lives (without creating tool sprawl)

I don’t think “best software” is the right framing. The practical question is: where can the score live so sales actually uses it, and where can the data reliably flow?

Most teams land in one of these setups:

  • CRM-native scoring when the funnel is simple-to-moderate and most signals are already in the CRM.
  • Marketing automation scoring when email and web behavior drive long nurture cycles and the CRM needs clean handoff rules.
  • Product analytics-driven scoring when in-app events define readiness (common in PLG), with the resulting score pushed into the CRM for routing and reporting.
Screenshot: ActiveCampaign, an end-to-end marketing automation platform. Many teams run scoring in marketing automation, then push the score into the CRM for routing and reporting.

Whatever the stack, I focus on one operational outcome: high-score leads must be routed correctly, quickly, and consistently. If the score doesn’t change follow-up behavior, the implementation is incomplete.
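A minimal sketch of that operational outcome, independent of which tool computes the score: map score bands to a queue and a response-time SLA. The queue names, thresholds, and SLA hours below are placeholders to tune against your own capacity.

```python
from dataclasses import dataclass

@dataclass
class Routing:
    queue: str
    sla_hours: float

# Placeholder bands, queues, and SLAs; tune against sales capacity and ACV.
ROUTING_RULES = [
    (50, Routing(queue="AE fast lane", sla_hours=1)),
    (25, Routing(queue="SDR follow-up", sla_hours=24)),
    (float("-inf"), Routing(queue="marketing nurture", sla_hours=72)),
]

def route(score: int) -> Routing:
    for threshold, routing in ROUTING_RULES:
        if score >= threshold:
            return routing
    return ROUTING_RULES[-1][1]

# Example: route(95) -> "AE fast lane" within 1 hour; route(-35) -> "marketing nurture".
```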

Mistakes I avoid (and how I keep the model healthy)

Lead scoring usually fails for organizational reasons, not technical ones. The biggest issues I watch for are: overcomplicated models that nobody can explain, “set and forget” scoring that drifts as the market changes, and building the model without real sales feedback.

To keep scoring healthy, I rely on a small set of habits:

  • I review the model at least quarterly (and immediately after major changes like entering a new segment, launching a new tier, or shifting upmarket).
  • I backtest score bands against outcomes: pipeline created, win rate, cycle length, and retention/expansion where relevant (see the backtest sketch after this list).
  • I tie score thresholds to clear service levels so the team knows what “high priority” actually means in response-time terms.
  • I enforce negative scoring and disqualification rules so low-fit leads don’t quietly consume rep capacity.
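For the backtest habit, a rough sketch assuming an export of historical leads with their assigned band and outcome fields; the column names are illustrative, not a specific CRM’s schema.

```python
import pandas as pd

def backtest_bands(leads: pd.DataFrame) -> pd.DataFrame:
    """Outcomes by score band. Assumes columns: band, created_pipeline (bool),
    won (bool), cycle_days (float, NaN while the deal is still open)."""
    return (
        leads.groupby("band")
        .agg(
            lead_count=("band", "size"),
            pipeline_rate=("created_pipeline", "mean"),
            win_rate=("won", "mean"),
            median_cycle_days=("cycle_days", "median"),
        )
        .sort_values("win_rate", ascending=False)
    )

# If the "high" bands don't clearly beat the "low" bands here, the weights or thresholds need work.
```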

On timeline, I don’t expect miracles overnight, but I also don’t treat this as a year-long science project. A clear first version can improve focus within weeks by prioritizing follow-up and reducing wasted outreach. The deeper gains - better thresholds, better weights, better alignment between MQL/PQL/SQL and revenue - typically show up over one to three quarters as the model is tuned against real closed-won and closed-lost data.

Two external data points underline why this work pays off. Sales teams can lose up to 40% of their time to low-value activities, and 70% of leads are lost when follow-up is inconsistent or late. If you want a practical lever that typically improves conversion fast, start with response-time discipline - see Shipping and returns speed in your ads: when it helps for a framework you can adapt to lead SLAs.

Andrii Daniv
Andrii Daniv is the founder and owner of Etavrian, a performance-driven agency specializing in PPC and SEO services for B2B and e‑commerce businesses.