Most lead scoring models fail not because the math is wrong, but because they score too many things, never decay, and set MQL thresholds that nobody trusts. Start with fit scoring, layer in engagement carefully, and audit ruthlessly every quarter.

A lead scoring model is a system that assigns numerical values to leads based on who they are (fit) and what they do (engagement), used to prioritize sales outreach and define marketing-qualified leads (MQLs). See our full lead scoring glossary entry.

Why most lead scoring models don't work

Here's a stat that should bother you: according to SiriusDecisions, fewer than 25% of leads passed to sales are actually followed up on. And companies with lead scoring? Many report no improvement in conversion rates after implementation.

The problem isn't the concept. Lead scoring works. The problem is that most models are built once, tuned never, and trusted by nobody. Sales ignores MQLs because they've been burned too many times by leads with high scores and zero intent. Marketing blames sales for not following up. RevOps sits in the middle wondering why they spent six weeks building something everyone hates.

Let's fix that.

Fit scoring vs engagement scoring: two different jobs

Every lead scoring model has two components, and confusing them is the first mistake.

Fit scoring (firmographic)

Fit scoring answers: "Is this the type of company and person we sell to?" It's based on static attributes.

  • Company size: Revenue range, employee count, or both. Be specific. "50-500 employees" is useful. "SMB" is not.
  • Industry: Your ICP should define 3-5 target industries. Score them differently. A SaaS company selling to healthcare is a different lead than one selling to retail.
  • Title and role: VP of Sales is a decision maker. Marketing coordinator is not. Score accordingly, but don't over-index. Titles vary wildly across companies.
  • Geography: If you only sell in North America, a UK lead gets zero fit points regardless of everything else.
  • Tech stack: If you integrate with Salesforce and they run HubSpot, that's a fit signal (positive or negative depending on your product).

Fit scoring should be binary or tiered, not granular. A lead either fits your ICP or doesn't. Three tiers work: strong fit, partial fit, no fit. If you're assigning 47 different fit score values, you're overcomplicating it.
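The tiered approach can be sketched as a small function. This is a minimal sketch, not a prescribed schema: the attribute names, ICP criteria, and the specific industries, titles, and size range are all illustrative placeholders you'd replace with your own ICP definition.

```python
# Hypothetical ICP criteria -- replace with your own.
TARGET_INDUSTRIES = {"healthcare", "fintech", "logistics"}
DECISION_TITLES = {"vp", "director", "head"}

def fit_tier(lead: dict) -> str:
    """Return 'A' (strong fit), 'B' (partial fit), or 'C' (no fit)."""
    # Hard gates first: geography and company size knock a lead
    # out entirely, regardless of everything else.
    if lead.get("region") != "NA":
        return "C"
    if not 50 <= lead.get("employees", 0) <= 500:
        return "C"
    in_industry = lead.get("industry") in TARGET_INDUSTRIES
    is_decision_maker = any(t in lead.get("title", "").lower()
                            for t in DECISION_TITLES)
    if in_industry and is_decision_maker:
        return "A"
    if in_industry or is_decision_maker:
        return "B"
    return "C"
```

Note the design choice: geography and size act as hard gates before tiering, which keeps the logic at three outcomes instead of 47 score values.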

Engagement scoring (behavioral)

Engagement scoring answers: "Is this person showing buying intent?" It's based on actions.

  • High-intent actions: Pricing page visits (5-10 points), demo requests (25+ points), case study downloads (10 points), product comparison pages (8 points)
  • Medium-intent actions: Blog visits (1-2 points), email opens (0.5 points), webinar registration (5 points)
  • Low-intent actions: Social media engagement (0-1 point), unsubscribe page visits (negative points)

The key insight: not all engagement is created equal. Someone who visits your pricing page three times in a week is showing more intent than someone who's opened 40 newsletters. Your model should reflect that.
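The point ranges above reduce to a weighted lookup. In this sketch the event names are illustrative, and the exact weights are starting points to calibrate against your own closed-won data, not fixed values:

```python
# Weights drawn from the ranges above -- calibrate, don't copy.
ENGAGEMENT_POINTS = {
    "pricing_page_visit": 8,
    "demo_request": 25,
    "case_study_download": 10,
    "comparison_page_visit": 8,
    "webinar_registration": 5,
    "blog_visit": 1,
    "email_open": 0.5,
    "unsubscribe_page_visit": -10,  # negative signal
}

def engagement_score(events: list[str]) -> float:
    # Unknown events score zero rather than raising.
    return sum(ENGAGEMENT_POINTS.get(e, 0) for e in events)
```

With these weights, three pricing page visits (24 points) outscore forty newsletter opens (20 points), which is exactly the asymmetry the model should encode.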

The 5 mistakes that kill lead scoring models

1. Scoring everything

If every action adds points and nothing subtracts them, every lead eventually becomes an MQL through sheer accumulation. A newsletter subscriber who's opened 100 emails over two years is not a hot lead. They're a reader. Score only actions that correlate with closed-won deals.
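One way to check which actions actually correlate with closed-won deals is a simple lift calculation: how often does an action appear among won leads versus the baseline win rate? The lead record shape here is an assumption for illustration.

```python
def action_win_lift(leads: list[dict]) -> dict[str, float]:
    """For each action, win rate among leads who did it divided by
    the overall win rate. Lift near 1.0 means the action carries no
    signal and is a candidate to stop scoring."""
    # Assumes each lead is {"actions": set_or_list, "won": bool}
    # and that at least one lead in the sample was won.
    base_rate = sum(l["won"] for l in leads) / len(leads)
    lifts = {}
    all_actions = {a for l in leads for a in l["actions"]}
    for action in all_actions:
        with_action = [l for l in leads if action in l["actions"]]
        rate = sum(l["won"] for l in with_action) / len(with_action)
        lifts[action] = rate / base_rate
    return lifts
```

Actions with lift well above 1.0 earn points; actions hovering at or below 1.0 (email opens, usually) get cut.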

2. Never decaying scores

A lead who was active 6 months ago and has gone silent is not the same as a lead who's active now. Implement time decay. Common approach: reduce engagement scores by 25-50% every 30 days of inactivity. Marketo and HubSpot both support score decay natively. Salesforce requires custom automation.
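The decay rule above is a one-liner if you're rolling your own automation. A minimal sketch, assuming step-wise decay per full period of inactivity (the 25%/30-day defaults are the common approach mentioned above, not a standard):

```python
def decayed_score(score: float, days_inactive: int,
                  decay_rate: float = 0.25, period_days: int = 30) -> float:
    """Reduce an engagement score by decay_rate for each full
    period_days of inactivity (default: 25% every 30 days)."""
    periods = days_inactive // period_days
    return score * (1 - decay_rate) ** periods
```

A lead at 100 points who's been silent for 60 days drops to 56.25; crank `decay_rate` to 0.5 for the aggressive end of the range.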

3. MQL thresholds that nobody trusts

If you set the MQL threshold at 50 points and sales rejects 70% of MQLs, your threshold is wrong. This isn't a sales problem. It's a model calibration problem. Work backwards from SQLs: what score did leads have when sales accepted them? That's your real threshold.
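Working backwards from accepted leads can be sketched as a percentile pick over the scores sales actually said yes to. The 25th-percentile choice here is a judgment call, not a rule: it sets the bar low enough that most historically accepted leads would have cleared it.

```python
def calibrated_threshold(accepted_scores: list[float],
                         pct: float = 0.25) -> float:
    """Return the score at the given percentile of leads sales
    accepted as SQLs -- a data-driven MQL threshold candidate."""
    ranked = sorted(accepted_scores)
    idx = int(len(ranked) * pct)
    return ranked[min(idx, len(ranked) - 1)]
```

If sales accepted leads scoring 60-100 and the 25th percentile lands at 70, a 50-point threshold is provably too low.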

4. Building in isolation

Marketing builds the model. Sales ignores it. The classic pattern. The fix is simple: build the model with sales input. Have sales rank 20 recent leads from "best" to "worst." See if your model produces the same ranking. If not, your model doesn't reflect reality.

5. Set it and forget it

Your ICP shifts. Your product evolves. Your market changes. A scoring model from 12 months ago is probably wrong today. Quarterly audits aren't optional. They're the entire point.

How to audit an existing lead scoring model

If you've inherited a scoring model (and based on 455 current RevOps postings, many of you have), here's the audit framework.

  1. Pull the conversion data. Export every MQL from the last 6 months. Track which ones became SQLs, opportunities, and closed-won deals. Calculate conversion rates at each stage.
  2. Find the score sweet spot. What score range has the highest MQL-to-SQL conversion rate? That's your real threshold. If MQLs at 50 points convert at 15% but MQLs at 80+ convert at 45%, you know where the line should be.
  3. Identify scoring inflation. Look at leads with high scores (100+) that never converted. What inflated their scores? Usually it's email opens, repeat blog visits, or events that don't correlate with buying intent. Remove or reduce those scoring rules.
  4. Check for fit scoring gaps. Are bad-fit companies reaching MQL status through engagement alone? A 10-person startup that downloads every piece of content you publish isn't a qualified lead if your product costs $50K/year. Fit should be a gate, not just a component.
  5. Validate with sales. Show sales your top 20 MQLs by score. Ask them to rank them. If the ranking diverges significantly from the score, your model needs recalibration.
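Step 2's sweet-spot analysis is straightforward once the export is in hand. A sketch, assuming each MQL row carries a score and an SQL flag (field names are illustrative):

```python
from collections import defaultdict

def conversion_by_band(mqls: list[dict], band_width: int = 20) -> dict:
    """Group MQLs into score bands and compute the MQL-to-SQL
    conversion rate per band, to see where the real threshold sits."""
    counts = defaultdict(lambda: [0, 0])  # band -> [sqls, total]
    for lead in mqls:
        band = (int(lead["score"]) // band_width) * band_width
        counts[band][1] += 1
        if lead["became_sql"]:
            counts[band][0] += 1
    return {band: sqls / total
            for band, (sqls, total) in sorted(counts.items())}
```

If the 40-59 band converts at 15% and the 80+ band at 45%, the output makes the threshold argument for you.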

When to rebuild vs patch

Patch when:

  • The model mostly works but a few scoring rules are off
  • MQL-to-SQL conversion is above 30%
  • Sales trusts the model conceptually but wants adjustments
  • You've been auditing regularly and caught the drift early

Rebuild when:

  • MQL-to-SQL conversion is below 15%
  • Sales actively ignores MQLs ("I just work my own list")
  • Nobody can explain why the thresholds are what they are
  • Your ICP has fundamentally changed (new market, new product, acquisition)
  • The model hasn't been audited in 6+ months and there's significant scoring inflation

Implementation: the minimum viable scoring model

If you're starting from scratch, don't build a complex model on day one. Start with this:

  1. Define 3 fit tiers. A = strong ICP match (right industry, right size, right title). B = partial match. C = outside ICP.
  2. Pick 5 engagement signals. Only five. Choose the ones that historically correlate with closed deals. Pricing page, demo request, and 3 others specific to your business.
  3. Set a simple threshold. A-fit + 2 high-intent actions = MQL. B-fit + 3 high-intent actions = MQL. C-fit = never MQL.
  4. Run it for 30 days. Measure MQL-to-SQL conversion. Adjust.
  5. Add complexity only when the simple model's gaps become clear. Not before.
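The threshold rule in step 3 fits in a few lines. In this sketch the five high-intent signal names are illustrative (the source names pricing page and demo request; the rest are placeholders for your business-specific picks):

```python
# Five signals only -- the last three are hypothetical placeholders.
HIGH_INTENT = {"pricing_page_visit", "demo_request",
               "case_study_download", "comparison_page_visit",
               "trial_signup"}

def is_mql(fit_tier: str, events: list[str]) -> bool:
    """A-fit needs 2 high-intent actions, B-fit needs 3,
    C-fit never qualifies."""
    high_intent_count = sum(1 for e in events if e in HIGH_INTENT)
    required = {"A": 2, "B": 3}.get(fit_tier)
    return required is not None and high_intent_count >= required
```

Note that C-fit returns False no matter how many actions accumulate, which makes fit a gate rather than just a component.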

You can build this in HubSpot, Salesforce, or Marketo in a day. The value isn't in the tool. It's in the scoring logic, the quarterly audits, and the alignment with sales.

What the data tells us

Based on 455 open RevOps roles, "lead scoring" appears in roughly 30% of job descriptions. It's not a nice-to-have anymore. It's an expected competency. If you're in RevOps and you haven't built or audited a scoring model, this is the quarter to start.

For more on the skills that define RevOps careers, see our career path guide and the salary data that backs it up.

