Daniel Saks
Chief Executive Officer
Every enterprise outbound team faces the same problem at scale. The database returns hundreds of contacts per target account. The SDR needs to reach the person who actually makes the purchasing decision. Title filters are the default solution, and they fail in predictable ways.
According to Gartner research on B2B buying behavior, the average enterprise purchase involves six to ten decision-makers. Each stakeholder evaluates the purchase through a different lens. Reaching the right ones is the difference between a deal that progresses and a deal that stalls in committee. According to Harvard Business Review research on enterprise selling, deals with three or more engaged stakeholders close at significantly higher rates. The challenge is identifying those stakeholders across thousands of accounts without manual research per company.
A 'Director of Operations' at a 5,000-person company manages facility logistics. A 'Director of Operations' at a 200-person company makes technology purchasing decisions. The title is identical. The purchasing authority is completely different. According to Forrester research on B2B revenue operations, misrouted contacts are one of the top three causes of pipeline leakage in enterprise sales.
B2B contact data decays at 20-30% annually according to research on CRM data hygiene. At enterprise scale, that means at least one in five contacts in your database has changed roles, left the company, or retired since the data was last updated. Title filters do not distinguish between a current VP of Sales and someone who held that title eight months ago. The rep discovers this on the call, after spending time preparing and dialing.
At trades companies, the buyer is the owner or general manager. At healthcare companies, the buyer might be the Chief Medical Information Officer. At financial services firms, the buyer could be a Managing Director with no standard functional title. These contacts never appear in a filter for 'VP of IT' or 'Head of Procurement' because their titles do not match the filter. They are invisible to title-based queries and visible to AI-powered scoring that evaluates the full profile.
The first scoring dimension, title match, evaluates whether the contact's current title corresponds to known buyer roles for your product. This goes beyond exact match to include variants, abbreviations, and industry-specific titles. A strong title match model maintains a library of 100+ title variants organized by buyer tier, updated as new patterns emerge from call outcome data.
Seniority determines purchasing authority. C-level contacts score highest, followed by VP, Director, Senior Manager, and individual contributor. The scoring weights should reflect your actual deal patterns. If Directors close more deals than VPs in your market, the model should reflect that.
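A rough sketch of how these first two dimensions might work together in code. The variant sets and point values below are illustrative placeholders, not Landbase's actual model; a production library would hold 100+ variants per tier and weights calibrated to your closed-won data.

```python
# Hypothetical title-variant library, organized by buyer tier.
# A real model maintains far more variants, updated from call outcomes.
TITLE_VARIANTS = {
    "tier_1": {"vp of sales", "vp sales", "head of sales", "cro"},
    "tier_2": {"director of sales", "sales director", "director, sales"},
}

# Illustrative seniority weights; if Directors close more deals than VPs
# in your market, raise the Director weight accordingly.
SENIORITY_WEIGHTS = {
    "c_level": 20, "vp": 18, "director": 15, "senior_manager": 10, "ic": 3,
}

def title_tier(title):
    """Return the buyer tier whose variant set contains the normalized title."""
    normalized = title.lower().strip()
    for tier, variants in TITLE_VARIANTS.items():
        if normalized in variants:
            return tier
    return None

print(title_tier("VP of Sales"))  # tier_1
print(title_tier("Office Manager"))  # None
```

The normalization step (lowercasing and trimming) is what lets "VP of Sales" and "vp sales" resolve to the same tier without a brittle exact-string filter.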
The role-evidence dimension evaluates profile text, headline, skills, and employment history for evidence that the contact actually performs the role the title suggests. A 'VP of Operations' with a headline mentioning 'supply chain optimization' and skills listing 'procurement' and 'vendor management' scores differently from a 'VP of Operations' with a headline mentioning 'people operations' and skills listing 'employee engagement.' According to McKinsey research on B2B digital selling, the precision of contact targeting is one of the strongest predictors of outbound conversion rates.
Contacts located near the company's headquarters are more likely to hold company-wide authority than contacts in branch offices or regional roles. A VP of Sales at corporate headquarters has different purchasing authority than a Regional VP of Sales in a field office. Location scoring adds a proximity signal that improves targeting precision.
Removing the wrong contacts is as valuable as finding the right ones. Every excluded contact that would have been a wasted dial saves the rep five to ten minutes of preparation and calling time. At scale, exclusion rules have a measurable impact on SDR productivity. Common exclusion criteria include: retired executives still listed in databases, contacts in HR, legal, or finance roles with no purchasing path, branch-level administrators, seasonal or contract workers, and contacts in geographic regions outside the target market.
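Exclusion rules like these are straightforward to express as a pre-scoring filter. A minimal sketch, assuming illustrative keyword lists and a hypothetical two-region target market; the field names (`title`, `region`) are assumptions, not a real schema.

```python
# Hypothetical exclusion rules; keyword lists are illustrative, not exhaustive.
EXCLUDED_TITLE_KEYWORDS = (
    "retired", "benefits coordinator", "talent acquisition",
    "paralegal", "seasonal", "contractor",
)
TARGET_REGIONS = {"US", "CA"}  # assumed target market

def is_excluded(contact):
    """Apply exclusion rules before scoring: wrong role or wrong region."""
    title = contact.get("title", "").lower()
    if any(kw in title for kw in EXCLUDED_TITLE_KEYWORDS):
        return True
    if contact.get("region") not in TARGET_REGIONS:
        return True
    return False

contacts = [
    {"name": "A", "title": "Retired VP of Sales", "region": "US"},
    {"name": "B", "title": "Director of Operations", "region": "US"},
    {"name": "C", "title": "Director of IT", "region": "DE"},
]
qualified = [c for c in contacts if not is_excluded(c)]
# Only contact B survives: A is retired, C is outside the target market.
```

Running exclusions before scoring means every downstream dimension only ever evaluates contacts a rep could plausibly dial.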
Pull the contact record for every closed-won deal in the last 12 months. What titles did these contacts hold? What seniority level? What did their profiles say? These patterns become the baseline for the scoring model. If 60% of your closed-won contacts were Directors and only 15% were VPs, the model should weight Directors higher.
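Deriving that baseline is a simple frequency count over closed-won contact records. The sample data below is invented for illustration; the point is the shape of the calculation, not the numbers.

```python
from collections import Counter

# Hypothetical closed-won contact records from the last 12 months.
closed_won = [
    {"seniority": "director"}, {"seniority": "director"},
    {"seniority": "director"}, {"seniority": "vp"},
    {"seniority": "c_level"},
]

counts = Counter(c["seniority"] for c in closed_won)
total = sum(counts.values())
baseline = {level: count / total for level, count in counts.items()}
# Here Directors account for 60% of closed-won contacts, so the
# scoring model should weight Director-level seniority accordingly.
print(baseline)
```

The same counting pattern applies to titles and profile keywords: whatever appears disproportionately in closed-won records becomes a positive signal in the rubric.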
Assign point values to each dimension based on its predictive power. A typical starting rubric might allocate 40 points to title match, 20 points to seniority, 25 points to role evidence, 10 points to location, and 5 points to bonus signals. The weights should be calibrated against your actual conversion data and adjusted quarterly.
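The rubric above reduces to a weighted sum. A minimal sketch using the starting weights from the text, where each dimension's signal is assumed to be pre-normalized to a 0.0-1.0 range (that normalization convention is an assumption for illustration):

```python
# Starting rubric from the text: 40 title, 20 seniority, 25 role
# evidence, 10 location, 5 bonus. Recalibrate quarterly.
WEIGHTS = {"title": 40, "seniority": 20, "role_evidence": 25,
           "location": 10, "bonus": 5}

def score_contact(signals):
    """Weighted sum of per-dimension signals, each normalized to 0.0-1.0."""
    return sum(WEIGHTS[dim] * signals.get(dim, 0.0) for dim in WEIGHTS)

# Example: strong title match, Director seniority, moderate role
# evidence, HQ location, no bonus signals.
print(score_contact({"title": 1.0, "seniority": 0.75,
                     "role_evidence": 0.5, "location": 1.0,
                     "bonus": 0.0}))  # 77.5
```

Because the weights sum to 100, the output reads directly as a 0-100 score, which makes tier cutoffs (for example, dial anything above 70) easy to reason about.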
List every contact type that wastes rep time. Be specific. Do not just exclude 'HR'; specify which HR roles to exclude (benefits coordinators, talent acquisition specialists, HR generalists) and which to keep (Chief People Officers, HR Directors at companies where HR owns the technology budget). According to Salesforce research on sales performance, the highest-performing SDR teams spend 65% or more of their time on active selling rather than research and verification.
Run the rubric against a sample of 50-100 accounts. Compare the AI-scored contact list against what your reps would have pulled from a database using title filters. Measure the overlap and the delta. The contacts in the AI list but missing from the title-filter list are the uplift. The contacts in the title-filter list but excluded by AI are the wasted dials you are preventing.
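The overlap and delta are set operations. A sketch with invented contact IDs, assuming each list is reduced to a set of unique identifiers:

```python
# Contacts surfaced by each method for the same sample accounts.
# IDs are illustrative.
ai_list = {"c1", "c2", "c3", "c5"}
title_filter_list = {"c1", "c2", "c4", "c6"}

overlap = ai_list & title_filter_list          # both methods agree
uplift = ai_list - title_filter_list           # found only by AI scoring
prevented = title_filter_list - ai_list        # wasted dials avoided

print(len(overlap), len(uplift), len(prevented))  # 2 2 2
```

Tracking these three numbers per validation run gives a simple, repeatable way to show whether the model is adding contacts title filters miss, cutting contacts they wrongly include, or both.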
Landbase applies a multi-dimensional contact scoring rubric to every contact at every target account. Contacts are classified into buyer tiers, exclusion rules are applied automatically, and the output is a clean CSV with scored, qualified contacts ready for CRM import. The scoring model can be calibrated against your closed-won data and refined with each outreach cycle. For more on how this fits into the broader outbound workflow, see the guide on scaling outbound at 50+ SDRs.
Lead scoring evaluates inbound leads based on engagement behavior (page visits, downloads, form fills). Contact scoring evaluates outbound contacts based on their role, seniority, and purchasing authority at a target account. Lead scoring answers 'how interested is this person.' Contact scoring answers 'is this the right person to call.' For a deeper comparison, see account scoring vs. lead scoring.
For mid-market accounts, three to five contacts per company is sufficient. For enterprise accounts with complex buying committees, five to eight contacts across different buyer roles provides the multi-threading coverage that improves close rates. The scoring model should surface the right number of contacts per account tier automatically.
You can build the logic, but the data access is the constraint. Evaluating role evidence requires parsing profile text, headline, skills, and employment history across every contact at every target account. At 1,000+ accounts with 10+ contacts each, that is 10,000+ profiles to evaluate. AI-powered scoring handles this in hours. Manual review would take weeks.
After every major outreach cycle. Call outcomes reveal which contact types actually convert. If Directors are converting at 3x the rate of VPs in a specific vertical, the model should increase the weight on Director-level seniority for that segment. Quarterly recalibration is the minimum. Cycle-level recalibration is ideal.
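One way to mechanize that recalibration is to nudge each seniority weight toward its observed share of conversions, with a learning rate so a single cycle cannot swing the model. This is an illustrative approach, not a prescribed one; the numbers are invented.

```python
def recalibrate(weights, conversions, lr=0.5):
    """Move each weight part-way toward its conversion-rate share.

    lr=0.5 means each cycle closes half the gap between the current
    weight and the weight implied by observed conversions.
    """
    total_weight = sum(weights.values())
    total_conv = sum(conversions.values())
    new_weights = {}
    for level, w in weights.items():
        share = conversions.get(level, 0) / total_conv
        target = share * total_weight  # weight implied by this cycle's data
        new_weights[level] = w + lr * (target - w)
    return new_weights

# Directors converted at 3x the VP rate this cycle, so the Director
# weight rises and the VP weight falls, each by half the gap.
old = {"vp": 18, "director": 15}
print(recalibrate(old, {"vp": 1.0, "director": 3.0}))
```

Keeping the total weight constant means recalibration reshuffles emphasis between seniority levels rather than inflating everyone's scores.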