
Product-Led Sales (PLS) AMA: Clearbit's Lead Qualification Engine

Featuring Julie Beynon, Head of Analytics, and Colin White, Head of Demand Gen, at Clearbit.

Alexa Grabell
April 26, 2022

Alexa, CEO of Pocus, hosts Product-Led Sales (PLS) AMAs with PLS experts to share best practices, frameworks, and insights on this emerging category. These AMAs are an opportunity to ask PLS leaders any question — ranging from hiring to sales compensation to tech stack — in a low-key, casual environment.

The PLS AMAs are for members of the Product-Led Sales community, the go-to place to learn, discuss, and connect with go-to-market (GTM) leaders at product-led companies. The goal of the community is to bring together the most thoughtful and innovative GTM leaders to build the next generation of sales together.

Interested in joining? Request an invite here.

Now keep reading for a recap of what we discussed in this week's AMA chat.

Meet Clearbit's Julie and Colin 👋

Julie Beynon is Head of Analytics, and Colin White is Head of Demand Generation at Clearbit, a data activation platform that helps companies grow faster and smarter with real-time marketing intelligence. 

Julie was originally in a marketing ops role when she started dabbling in analytics, teaching herself to get the data she needed to do her job. And then she fell in love. Now Julie focuses mostly on analytics engineering at Clearbit, spending her days thinking about how to operationalize all the data that was previously stuck in their warehouse.

Colin began his career in software engineering but gradually moved on to analytics, marketing ops, and performance marketing. Now, his role heading up demand gen at Clearbit has him working closely with the data Julie produces to gather insights and put those insights into action across the marketing, SDR, and BDR GTM teams.

Together, they've worked to build a best-in-class lead qualification system at Clearbit — and written the definitive playbook on the topic. 

In this recap of our AMA with Julie and Colin, we talk about: 

  • How Clearbit's lead qualification engine has matured, yet simplified, over time 
  • The process Clearbit uses to operationalize lead scoring
  • Where Clearbit has created a failsafe to keep quality leads from slipping away
  • Why getting buy-in from GTM teams is just as important as using data to develop ICPs 

Building the Lead Qualification Engine at Clearbit 👷

The lead qualification process that Clearbit uses today didn't just pop up overnight. There were three distinct phases of its development. 

Phase 1: Building the Initial Scoring Model with Limited Data

With so much data at hand (Clearbit is a data enrichment tool, after all), you would think Julie's first pass at the lead scoring model would have been a massive data project. It was quite the opposite.

Julie and Colin walked us through their first attempt at a lead scoring model, which required surprisingly little data. The initial goal of the model was to identify key actions that could inform sales, not to prioritize leads, so a smaller data set was still useful.

The lesson: a scoring process can be built around as little or as much data as you have access to. What matters most is using what you have to identify ICPs as accurately as possible.

For Clearbit, their early lead scoring system focused on three basic elements:

  • User data they had on intent, behavior, and firmographics
  • Feedback from sales on what was converting
  • What had historically performed well

They used MadKudu (and still do) to handle firmographic scoring, which told them whether a lead was good quality or not. Their dbt + Census models looked for activity. The first version of the system was binary: either an event was passed through, or it wasn't.

As you'll see, what they eventually learned from this original iteration was that, in phase two, they needed to simplify the model.

Julie: "We built our scoring model on intent, behavioral data, and firmographic data. We started by scoring from zero to 100. There was no actual data science in our first pass because we didn't have any historical data. So, we just went off of what offers were converting at a higher rate and which ones the sales team told us were converting at a higher rate. 

"It was a mix of a little bit of data, a little bit of feedback from the team, and an understanding of what had historically performed well. We scored, somewhat arbitrarily, based on that. 

"The first pass was just about using as much data as we had, which was not a ton, and then relying on insight from the sales team. We used MadKudu for our firmographic scoring, so we basically had these 'medium,' 'good,' and 'very good' rankings. We looked for a combo of a medium+ ranking and a score of 100. It was binary — there were four activities that hit 100." 

Phase 2: Using Feedback to Simplify and Make Improvements

After much time spent building and perfecting their lead qualification engine, Julie and Colin eventually realized they'd built a program that used too many attributes and too many events, and that didn't perform any better than a model with fewer of each.

So they simplified the model, moving to a binary scale and a "good enough" firmographic rating.

Julie: "In our first version, we built in all this logic to score and triage leads, but then the sales team was like, 'We just need all of them.' So we actually reverted back to whether a lead was good enough on our binary scale and good enough when it came to firmographics. This is where I'd recommend most people start.

"It's so tempting to pick seven to eight attributes and run this incredibly complex model. But what we found was A. no one used it, and B. it was far above where we actually needed to be. We just needed some kind of 'yes' or 'no' on if a lead was good enough and if they had done enough for our sales team to be able to engage." 

There was only one thing missing — a feature that would make sure this simplified approach wasn't automatically tossing out leads who didn't score well but really were interested in the product. That realization helped define the model they use today. 

Phase 3: How Lead Scoring Looks Today

Today, their process is streamlined but powerful: MadKudu plus a few failsafes help make sure they're engaging all the qualified leads that come through their digital doors. 

Colin: "Originally, it was very binary. We had maybe one piece of firmographic data that we were looking at to determine a good fit or bad fit. But now that we use MadKudu, we know they're looking at such a breadth of different scoring fields and different engagements that now we feel fairly confident in that score. That is actually now our main scoring system."

Failsafe processes — questionnaires, surveys, etc. — supplement automation and help Clearbit re-engage with high-quality leads that may not have scored well initially. 

"Because our scoring model is automatic and based on machine learning, we also have triggers for folks who don't get scored well. They can raise their hands and showcase that they're a good fit. To do that, we use things like questionnaires or surveys or responses to specific emails. All of these are potential triggers to get them back in the door if they're a good fit."

Creating Failsafes and Operationalizing Lead Scores

Now for something we're all curious about: what does it look like to operationalize lead scoring?

At Clearbit, they employ a three-step system for making sure no great leads get left behind:

  1. A suite of integrated tools gets high-scoring leads in front of sales quickly.
  2. A secondary lead scoring system runs in the background as a failsafe that makes sure no quality leads slip through the cracks. 
  3. A nurture campaign enables leads to self-select back into the workflow and raise their hands if they want to be contacted by sales.

Julie: "All of our events are tracked through Segment, then they move through Tray. Within minutes of leads coming through, they're in Salesforce with all the data on what they've done. From there, MadKudu almost immediately updates their score, and then our routing checks 'Have they done what they need to do, and is there firmographic data where it needs to be?'

"Secondarily, as Colin mentioned, we also have a lot of fail-safes because it's hard to admit as someone in the data world, but data's not perfect. If you're relying on data as the only way to route leads, you're missing so many quality leads.

"So we have the ability to prioritize leads that maybe didn't necessarily reach a score of 100. There's additional background scoring that's happening on every single one of those leads to surface higher quality leads that weren't scored properly on the first pass.

"And we've added in email nurturing campaigns to allow people to self-select back in. We have an engagement scoring model that runs using events and segments. Then we model it in dbt. For each contact that we have in our database, we're keeping a score of what they've done, and that accumulates over time.

"So, there's the first pass to 'speed to lead' the obvious fits. Then the failsafe and the system for the hand-raisers make sure we're catching everybody. If we relied on one system, we'd likely miss some leads that our sales team could follow up with."

Data Isn't Perfect: The Art and Science of Scoring 🎭

At Clearbit, they're careful about making sure to get insight from folks on the frontlines of GTM (sales and customer success) before changing how the ICP is defined. 

Why? Because, as Julie mentioned — data isn't perfect. There's both art and science to scoring leads. In fact, selling B2B enterprise SaaS is much more of an art than Julie originally expected. Data is powerful for identifying attributes and trends, but it's easy to over-engineer and still get it wrong when data is all you're thinking about. 

The hard truth is that when art is at play in sales, there are always elements of the process that simply can't be scaled.

Clearbit has found the best lead scoring approach contains both art and science: 

  • The science: collecting and analyzing data, codifying the sales process where possible to better identify outliers
  • The art: getting into the mix and working directly with sales to understand their processes

Julie: "Once I did an outbound list for our sales team, and they sent it back and basically said, 'This is crap.' So we took that list, and I watched them go through it, go to each website, and show me why it wasn't a good lead. They were able to show me all the things I couldn't track, and I was able to update our attributes with that insight — insight that we weren't getting from the data conversation we were having. This helped us realize we needed to continue to add failsafe.

"The truth is, it's manual. I tried so hard to automate it. I used every different attribute and every filter I could to get that perfect calculation of ICP. But every time, sales would say, 'No, not touching this one.' So I finally had to ask them to show me.

"The way you score leads becomes different when you start to understand how sales actually handle them. Buy-in is a step you can't miss. It's probably the most important step. The tech behind lead qualification is obviously becoming better, but getting the alignment with sales and marketing — that's core."

Colin: "From the inbound side, I think this is why we have gone to more broad criteria. We send more people to our sales team because a lot of the things that make a good customer, we can't identify automatically.

"It's a lot of art. You can do as much as you want on the data side, but you can also over-engineer it if you just look at the data and kind of end up shooting yourself in the foot."

Time to Talk Tactics

We love hosting AMAs because we get a mix of strategic and tactical questions from the community. Let's wrap up this recap on a more tactical note to give you some things you can walk away with and try at your own organization.

Use Surveys to Allow Leads to "Self-Select" Into the Sales Process 

Following a question asking for more details on how Clearbit uses surveys to qualify intent, Julie and Colin broke down this core failsafe measure. 

Initially, leads are scored as low, medium, good, or very good as soon as they enter their email into one of Clearbit's website forms. Those that end up in the low category aren't considered qualified but are automatically served a survey that asks a few questions to help determine whether they'd be a good fit for Clearbit's products. If their answers indicate that yes, they'd be a good fit, their information is surfaced to Clearbit's SDR team. 
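Here's a rough sketch of what that survey failsafe could look like in code. The questions, thresholds, and qualifying answers are all invented; only the routing shape (first-pass qualified leads go straight to the SDR queue, while low-scored leads can qualify themselves via the survey) comes from Julie and Colin's description.

```python
# Hypothetical survey checks; Clearbit's actual questions aren't public.
SURVEY_CHECKS = {
    "team_size": lambda a: a >= 10,
    "has_crm": lambda a: a is True,
    "use_case": lambda a: a in {"enrichment", "intent", "forms"},
}

def survey_qualifies(answers: dict) -> bool:
    """A lead 'raises their hand' if every answered question passes its check."""
    if not answers:
        return False
    return all(check(answers[q]) for q, check in SURVEY_CHECKS.items() if q in answers)

def route(score: str, answers: dict = None) -> str:
    if score in {"medium", "good", "very good"}:
        return "sdr_queue"                      # qualified on the first pass
    if answers and survey_qualifies(answers):
        return "sdr_queue"                      # failsafe: surfaced via survey
    return "nurture"

print(route("low", {"team_size": 25, "has_crm": True, "use_case": "intent"}))  # sdr_queue
print(route("low", {"team_size": 3}))                                          # nurture
```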

The survey process is another example of art mingling with science in the sales motion. It gives sales the survey data they need to have a personalized conversation with leads, rather than defaulting to automation where a human could do a better job.

However, that doesn't mean that data collected during this survey process is exempt from the data operationalization efforts at Clearbit. 

Colin: "At the end of the quarter, we look at our sales metrics and think about where these 'mis-scores' fit in our scoring groups. If we're seeing 15% of our overall sales came from leads we scored as low, then we're going to take a deeper look at our scoring there and use those insights to update our model."

To Filter Personal Emails, or Not to Filter Personal Emails? 📨

This next piece of advice from Colin is in response to a PLS community member asking our guests their thoughts on using email domains to qualify leads.

Colin: "Webinar is a big channel for us, so we ran a test on our webinar forms specifically that restricted attendees to those using a business email only.

"What we ended up seeing was that the overall quality of lead was higher, but we actually got fewer qualified leads because we weren't letting personal emails in. So, even though the conversion rate from total lead pool to quality lead was higher, the absolute number of qualified leads was becoming lower.

"It's tough, but I think there are better mechanisms for identifying qualified folks than email addresses. Of course, it very much depends on your business and the type of content that leads are engaging with. Webinars, ebooks, and anything that you're gating, maybe you don't want to restrict emails. But for demo requests or free trials, it's possible you could consider restricting email addresses there."

We Hope to See You at Our Next PLS AMA 🔮

To attend an AMA discussion and get your burning questions answered, request to join Pocus' PLS community! There, you'll be able to enjoy daily PLS discussions, job postings, and Q&A sessions between AMAs.
