Signal Analytics – The Definitive Guide (Part II)

To briefly review from Part I: being able to accurately predict when a lead is in-market and "sales-ready" is a key part of Marketing's function and is central to managing an effective Sales Team.

We'd all like a magical Crystal Ball that can precisely predict the future of each prospect. Unfortunately, that's a fantasy.

Marketers have instead relied on "Lead Scoring" to make wild guesses about when a lead should be declared an "MQL" and be tossed over the wall to Sales.

Traditional "Lead Scoring" is nothing more than numerology and guessing using made-up "scores" and often provides WORSE predictions than flipping a coin.

Guessing is a terrible way to predict the future!


The MQL is not Dead...the Problem is How They're Scored

People complain that MQLs are flawed or dead. This could NOT be further from the truth! Sales still needs Pipeline, and Marketing remains the most effective and efficient source for that.

The problem is NOT with the MQLs...the problem is with the completely failed Lead Scoring process based on guessing and made-up point-based scoring nonsense.


The Marketing Metrics Newsletter is Sponsored by Proof Analytics


Predictive Models Are The Solution

What works are Predictive Models that use historical data, a deep understanding of marketing processes, and knowledge of how buyers make buying decisions.

Something that looks a bit more like this:

But the problem is that there are no perfect predictions; the level of unknowns and randomness in the world prevents us from ever knowing the future perfectly.

An imperfect model can still be perfectly suitable!

What's critical to understand when evaluating the use of such Predictive Models (even if it's just flipping a coin or, worse still, doing "Lead Scoring") is just how imperfect the results are.


How Often Does the Scoring System Make Mistakes?

Sadly, the real world is filled with uncertainty and randomness; even with perfect data, we can never make perfect decisions.

It's important to understand how frequently our system mistakes someone who's not in-market and not ready to buy for a sales-ready lead, and how often we miss good prospects because our scoring system fails to identify them as sales-ready.

This is the so-called problem of "false positives" and "false negatives".

So, let's start with a few important definitions:

  • True Positive – correctly identifying a "high intent" buyer

  • True Negative – correctly identifying a "low intent" buyer

  • False Positive – mistakenly identifying a "low intent" buyer as high intent

  • False Negative – mistakenly identifying a "high intent" buyer as low intent
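
To make the four buckets concrete, here's a minimal sketch (in Python, purely illustrative; the function name and flags are mine, not from the article) of how every scored prospect lands in exactly one bucket:

    def bucket(scored_high_intent: bool, actually_high_intent: bool) -> str:
        """Map a model's score plus the (normally unknowable) ground truth
        to one of the four outcomes defined above."""
        if scored_high_intent and actually_high_intent:
            return "True Positive"
        if scored_high_intent and not actually_high_intent:
            return "False Positive"
        if not scored_high_intent and actually_high_intent:
            return "False Negative"
        return "True Negative"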


MQLs via Traditional "Lead Scoring"

This isn't just an academic exercise! The issue of false-positives and false-negatives has serious business consequences.


A high rate of "false positive" signals means Marketing's MQLs contain an overwhelming fraction of "leads" that are not in-market and not ready for a sales conversation.

This is typically between 90% and 95% of all MQLs! This dramatically reduces the effectiveness of the Sales Team (the most expensive asset a B2B company has) because they have to sift through a giant pile of worthless leads to find the few that are actually Sales-Ready Opportunities.

Sales teams have created entire new departments (the Sales Development Reps or SDRs) just to sift through the garbage heap of MQLs.

The extremely low MQL to Close-Won rate is an indictment of the traditional Lead Scoring process and its made-up "points system".

What's needed isn't something to replace the MQL. What's needed is a better way to evaluate when a prospect is sales-ready and worthy of being declared an MQL and passed over directly to a Sales Rep without having to go through an expensive SDR-based lead-qualification process.


Likewise, a high rate of false-negatives (which is seldom noticed but often also an outcome of traditional Lead Scoring practices) is even more damaging to the business.

If we're throwing away leads because we don't recognize that they're in-market and ready to buy, this directly affects the business's ability to grow and be profitable.

We need ACCURATE models that can reliably score prospects into the correct buckets of high-intent vs. low-intent.


What is Accuracy?

So understanding how Predictive Models perform in terms of their scoring Accuracy is a critical starting point.

It turns out that "Accuracy" isn't some vague, hand-wavy term; it has a very precise mathematical definition.


To Explain, Let's Start with a Very Simple Model

We have prospects with "High Buying Intent" and ones with "Low Buying Intent", but since we don't have a magical Crystal Ball, we don't really know who is who.

So, our Predictive Model will take its best shot and will correctly identify some and incorrectly identify others. We can put this into a 2×2 Matrix like this:
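
                          Actually High Intent    Actually Low Intent
    Scored High Intent    True Positive (TP)      False Positive (FP)
    Scored Low Intent     False Negative (FN)     True Negative (TN)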

In our examples going forward, let's say that Marketing has 100 Prospects total that need to be scored. After running them through our Predictive Model, we get the following:
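
    Scored High Intent:  30 prospects
    Scored Low Intent:   70 prospects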

But we know this isn't completely true! Some of those 30 high-intent prospects are "false-positives" (prospects not ready to buy) and some of those 70 low-intent prospects are "false-negatives" (prospects hot to buy, but they got missed).


For the sake of all the following explanations, let's assume we can have a God's Eye View of reality and know exactly which of these leads have been correctly and incorrectly scored.

With our omniscient eye, we can see the following:
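
                          Actually High Intent    Actually Low Intent
    Scored High Intent    TP = 10                 FP = 20
    Scored Low Intent     FN = 10                 TN = 60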

Only 10 of those 30 high-intent prospects are actually "True Positives" (TP), meaning they really do have high buying intent. The remaining 20 are "False Positives" (FP): prospects with low buying intent that were incorrectly scored as high intent.

And only 60 of those 70 low-intent prospects are actually low-intent "True Negatives" (TN); the remaining 10 have high buying intent but were incorrectly scored and are "False Negatives" (FN).


What is "Accuracy"?

Accuracy is simply a measure of how often the Predictive Model gets it right! How often did the model make a correct decision about a prospect actually being high-intent or low-intent?
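
In terms of our four buckets, Accuracy has the standard definition:

    Accuracy = (TP + TN) / (TP + TN + FP + FN)

For our example model, that's (10 + 60) / (10 + 60 + 20 + 10) = 70 / 100 = 70%.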

So, we can immediately use "Accuracy" to compare against other ways to score the same prospects. For instance, just flipping a coin to determine whether a prospect should be declared an MQL (still better than "Lead Scoring"!).

With coin-flipping, we get this:
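
Keeping the same ground truth as before (20 prospects actually high-intent, 80 actually low-intent), a fair coin flags half of each group as high-intent, so on average:

                          Actually High Intent    Actually Low Intent
    Scored High Intent    TP = 10                 FP = 40
    Scored Low Intent     FN = 10                 TN = 40

    Accuracy = (10 + 40) / 100 = 50%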

As expected, flipping a coin should be 50:50 heads or tails, and the "Accuracy" metric correctly identifies that. So, we know we're on the right track!

Accuracy is a good starting point, but it doesn't tell the whole story. We need a few more concepts and a few more terms for that.


But Wait...There's More!

Accuracy provides an initial estimate of how good our scoring system is, but it fails to answer a number of critical questions about how often we are mis-scoring leads, such as:

  • What proportion of high-intent buyers are mis-scored?

  • What proportion of low-intent buyers are mis-scored?

  • How often are we sending a bad lead over to the Sales Team?

  • How often are we missing prospects who are ready to buy?

  • And is there a better overall way to determine how "good" this Predictive Model is, especially in terms of comparing it against 1) just flipping a coin, or 2) other potentially better models?

Each of these questions has a precise answer that we can estimate! In Part III of this series, we will examine the answers to each of these questions.
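
As a quick preview (using the standard statistical names, which Part III will unpack properly), each of those questions reduces to a simple ratio of the four counts. A minimal Python sketch using our example numbers:

    # Confusion-matrix counts from the worked example above
    TP, FP, FN, TN = 10, 20, 10, 60
    total = TP + FP + FN + TN

    accuracy    = (TP + TN) / total  # how often the model is right: 70%
    recall      = TP / (TP + FN)     # high-intent buyers we actually catch: 50%
    miss_rate   = FN / (TP + FN)     # high-intent buyers we lose: 50%
    specificity = TN / (TN + FP)     # low-intent buyers correctly screened out: 75%
    precision   = TP / (TP + FP)     # MQLs sent to Sales that are genuinely hot: ~33%

    print(f"accuracy={accuracy:.0%}  recall={recall:.0%}  precision={precision:.0%}")

Notice how a model can look respectable on Accuracy (70%) while only about a third of the MQLs it hands to Sales are genuinely sales-ready; that gap is exactly the false-positive problem described above.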


The Marketing Metrics Newsletter is Sponsored by Proof Analytics


Michael Newman

Growth Marketer | GTM Advisor | Marketing Consultant


I think the only other profession where a low success rate (a 1.6% MQL-to-Win conversion counting as success) has been accepted is meteorology. And if we somehow exceed 2%, thereby beating the "benchmark", we have achieved greatness. Dale W. Harrison - do you have any historical data showing the success of this approach compared to standard MQL scoring?

Excellent write-up, as usual. I can't wait to read the next chapter. While done by well-intentioned folks, lead scoring misses nuances separated by degrees. There are many times the digital marketing team will present the MQLs achieved, and I cringe because it's clear, based on the prior period's performance, that these numbers will never convert at scale to bookings. And prospects who do buy travel through the funnel's content in various ways; some should lose their MQL status in the next hour. I like the idea of predictive analytics, perhaps applying something like ML decision trees, but I think you need a great deal of data for it to be meaningful. The data is hardly static given ever-evolving product-market fit and the sheer speed of SaaS releases amongst competitors. So, how do we introduce better predictive capabilities given the variability in data set sizes and fast-changing markets that keep changing the paths prospects take? For what it's worth, I find it valuable to tune into SDR feedback constantly as an approximate analog for which triggers are working.

Pavel Cahlík

I'm all about brands and not ashamed of it! 🤘😉 So I'm a brand consultant and brand strategist ✅ You can hear me, for example, on the BrandHunters podcast 🎙️💪


Very informative and helpful. Thanks for sharing this!

Marc Binkley

Proven Fractional CMO | Executive Advisor | Speaker | Evidence-Based Training for Marketers


Great article, Dale. I agree that we may not need to reinvent the MQL, but isn't a true positive another way of saying Sales would approve of the lead?

Shabahat Ali 🗣️

Marketing & Import Export


Great read, reminds me of the Rory Sutherland quote: "the problem with all data is that it comes from the past".
