How AI Candidate Matching Actually Works: A Two-Layer Approach
You’ve seen the pitch a hundred times: “AI-powered candidate matching.” But what does that actually mean? Most tools give you a percentage — “78% match” — and expect you to trust it. No evidence. No explanation. No way to know if the AI is brilliant or hallucinating.
That’s not matching. That’s a magic number.
Real candidate matching needs two things: speed and explainability. You need to evaluate 20+ candidates per day without spending 10 minutes on each one — but you also need to understand why a candidate scored the way they did, so you can make informed decisions and have meaningful screening conversations.
Here’s how a two-layer approach solves both problems.
The Problem with Single-Layer Matching
Most matching systems use one of two approaches, and both have fatal flaws.
Keyword Matching Only
The simplest approach: extract keywords from the job description, check if they appear in the candidate’s profile. “React” in the JD? Search for “React” in the CV. Found it? Point scored.
This breaks down fast:
- Synonyms are invisible. A candidate who lists “React.js” won’t match a search for “ReactJS.” Someone with “frontend development” won’t match “front-end engineering.”
- Context is lost. “5 years of Python” in a candidate’s profile might mean 5 years of scripting small automation tools — not 5 years of building production backend systems.
- Seniority is a guess. Keywords can’t tell you if someone’s career trajectory is heading in the right direction. A Senior Engineer at a 5-person startup and a Senior Engineer at Google have the same title but very different experience.
- Transferable skills don’t exist. A Scala data engineer with 6 years of distributed systems experience won’t match a “Python data engineer” search — even though they could likely do the job.
Keyword matching catches the obvious. It misses everything interesting.
AI-Only Matching
The opposite approach: send the entire candidate profile and job description to an LLM, ask “how well does this person fit?”, and parse the response.
This is better at nuance — AI understands that “Series A company” implies “startup” — but it introduces new problems:
- Cost adds up. If you’re sending full profiles to a large language model for every candidate, you’re spending $0.05-0.15 per evaluation. At 20 candidates per day, that’s $1-3/day just for matching.
- Latency is noticeable. A full AI analysis takes 5-15 seconds. When you’re reviewing a pipeline of 30 candidates, those seconds compound.
- Overkill for obvious mismatches. If a JD requires 8+ years of experience and the candidate has 2 years, you don’t need AI to tell you it’s not a fit. You need a simple number comparison.
- Inconsistency. Ask the same LLM the same question twice, and you might get different scores. Temperature, prompt variations, and model updates all introduce variance.
AI matching is powerful but expensive and sometimes unreliable for basic checks.
The Two-Layer Solution
The answer isn’t choosing between keywords and AI. It’s using both — in the right order.
Layer 1: Deterministic Matching (Free, Instant)
The first layer handles everything that can be evaluated with rules and comparisons. No AI involved. Zero cost. Instant results.
Here’s what Layer 1 checks:
Keyword overlap. Extract skills and technologies from the JD requirements. Check each one against the candidate’s parsed CV data. Calculate a percentage overlap. This isn’t semantic — it’s literal string matching with some normalization (React = React.js = ReactJS).
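The normalization step can be sketched in a few lines of Python. The alias table, names, and scoring here are illustrative placeholders, not any real product's implementation:

```python
import re

# Hypothetical alias table — a real system would maintain a much larger one.
ALIASES = {
    "react.js": "react",
    "reactjs": "react",
    "front-end": "frontend",
}

def normalize(skill: str) -> str:
    """Lowercase, collapse whitespace, and map known aliases to a canonical form."""
    s = re.sub(r"\s+", " ", skill.strip().lower())
    return ALIASES.get(s, s)

def keyword_overlap(jd_skills: list[str], cv_skills: list[str]) -> float:
    """Fraction of JD skills found (after normalization) in the candidate's CV."""
    jd = {normalize(s) for s in jd_skills}
    cv = {normalize(s) for s in cv_skills}
    return len(jd & cv) / len(jd) if jd else 0.0

# "React.js" and "ReactJS" count as the same skill; "GraphQL" is simply missing.
score = keyword_overlap(["React", "TypeScript", "GraphQL"],
                        ["ReactJS", "typescript", "Node.js"])
```

Literal matching plus a lookup table stays deterministic and free; anything the table can't catch is exactly what Layer 2 exists for.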
Years of experience. The JD says “5+ years of backend development.” The candidate’s CV shows they started their first backend role in 2021. Simple math: 5 years. Check.
Location fit. The role is “Berlin, hybrid.” The candidate is in “Munich, open to relocation.” Layer 1 flags this as a partial match — same country, different city, relocation mentioned.
Salary alignment. The budget is €70-85k. The candidate’s expectation (from their profile or previous notes) is €80k. Green flag — within range.
Must-have requirements. The JD has 3 must-haves: React, TypeScript, and 3+ years. Layer 1 checks each one independently and gives a binary pass/fail.
The result is a baseline assessment that catches obvious mismatches instantly. If a candidate fails two of the three must-haves in Layer 1, you probably don’t need Layer 2 to tell you it’s a weak match. But Layer 1 also knows its limits — it marks anything it can’t evaluate as “needs AI review.”
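A minimal sketch of these Layer 1 checks, assuming candidate and JD data arrive as dicts with hypothetical keys (`first_relevant_role_start`, `min_years`, `salary_range`, `must_haves`):

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Layer1Result:
    passed: dict[str, bool] = field(default_factory=dict)     # binary rule checks
    needs_ai_review: list[str] = field(default_factory=list)  # items rules can't settle

def layer1_check(candidate: dict, jd: dict, today: date) -> Layer1Result:
    r = Layer1Result()

    # Years of experience: plain date arithmetic, no AI needed.
    start = candidate.get("first_relevant_role_start")
    if start:
        years = (today - start).days / 365.25
        r.passed["experience"] = years >= jd["min_years"]
    else:
        r.needs_ai_review.append("experience")

    # Salary alignment: green if the expectation sits inside the budget band.
    expected = candidate.get("salary_expectation")
    if expected is not None:
        lo, hi = jd["salary_range"]
        r.passed["salary"] = lo <= expected <= hi
    else:
        r.needs_ai_review.append("salary")

    # Must-haves: literal membership (normalization happened upstream).
    skills = set(candidate.get("skills", []))
    for must in jd["must_haves"]:
        found = must in skills
        r.passed[must] = found
        if not found:
            # An absent keyword is not an absent skill: hand it to Layer 2.
            r.needs_ai_review.append(must)
    return r
```

Note the design choice on the last check: a missing must-have both fails the binary check and lands in `needs_ai_review`, so Layer 2 can still look for transferable skills.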
Layer 2: AI Nuance Analysis (Fast, Contextual)
Layer 2 takes over where rules end and judgment begins. This is where the language model earns its keep.
Career trajectory analysis. Is the candidate growing in the right direction? Someone who went from Junior Frontend → Mid Frontend → Senior Frontend → Lead Frontend is on a clear IC track. Someone who went from Developer → Team Lead → Engineering Manager is on a management track. If the JD is for a Staff Engineer, the AI recognizes which trajectory is a better fit.
Domain relevance. The JD is for a fintech company. The candidate spent 4 years at a payments startup. Layer 1 wouldn’t catch “payments” as relevant to “fintech” — but Layer 2 understands that payments is a fintech subdomain and scores accordingly.
Transferable skills. A candidate with 6 years of Scala and Apache Spark experience applies for a “Python + PySpark” role. Layer 1 sees zero keyword overlap for the language requirement. Layer 2 recognizes that Scala → Python is a common and relatively easy transition for data engineers, and that Apache Spark experience transfers directly regardless of the language wrapper.
Gap identification with screening questions. This is where Layer 2 really shines. Instead of just flagging “missing: Kubernetes experience,” it generates a clarifying question: “The candidate’s CV doesn’t mention Kubernetes, but they have 3 years of Docker and AWS ECS experience. Ask: Have you worked with Kubernetes in any capacity, even for personal projects or certifications?”
The AI doesn’t just score. It tells you what to ask next.
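One way to keep Layer 2 fast and cheap is to send the model only what Layer 1 couldn't resolve. Here is a hedged sketch of such a prompt builder; the wording and the JSON contract are assumptions for illustration, not a real product's prompt:

```python
import json

def build_layer2_prompt(candidate_profile: dict, jd: dict, unresolved: list[str]) -> str:
    """Assemble the nuance-analysis prompt. Only items Layer 1 couldn't
    settle are included, which keeps the prompt (and the bill) small."""
    return (
        "You are screening a candidate for a role. Assess ONLY the items "
        "listed below, and cite evidence from the profile for every judgment.\n"
        f"Unresolved items: {json.dumps(unresolved)}\n"
        f"Job description: {json.dumps(jd)}\n"
        f"Candidate profile: {json.dumps(candidate_profile)}\n"
        "For each item, return JSON with: status (green/yellow/red), "
        "evidence (a quote from the profile), and, for any gap, a "
        "screening_question the recruiter can ask."
    )
```

Asking for evidence and a `screening_question` in the response format is what turns the model's output into the scorecard rows and questions shown below, rather than an unexplained number.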
What the Scorecard Looks Like
When both layers finish — typically in under 5 seconds — you get a structured scorecard:
Overall Assessment
| Field | Value |
|---|---|
| Score | 82 / 100 |
| Verdict | Strong Match |
| Recommendation | Screen this candidate |
Requirement Breakdown
| Requirement | Status | Source | Evidence |
|---|---|---|---|
| React, 3+ years | ✅ Green | Layer 1 | “React” found in 3 roles, earliest 2022 (4 years) |
| TypeScript | ✅ Green | Layer 1 | Listed in skills, used in 2 recent roles |
| Node.js backend | 🟡 Yellow | Layer 2 | No Node.js listed, but has Express.js in one role — likely has Node experience |
| Fintech domain | ✅ Green | Layer 2 | 4 years at PayTech (payments startup) — strong fintech relevance |
| Team leadership | 🟡 Yellow | Layer 2 | Led 2-person team on a project, but no formal lead role. Ask about leadership aspirations |
| Kubernetes | 🔴 Red | Layer 1 | Not found in CV. Docker + ECS experience present — may have transferable knowledge |
Flags
Green flags:
- Strong frontend trajectory (Junior → Mid → Senior in 5 years)
- Fintech domain experience aligns with role
- Salary expectation (€78k) within budget (€70-85k)
Yellow flags:
- Node.js backend experience unclear — needs screening
- Leadership experience is informal — clarify expectations
Red flags:
- No Kubernetes mentioned — if this is a hard requirement, may be a blocker
Screening Questions
- “Your CV shows extensive Docker and AWS ECS experience. Have you had any exposure to Kubernetes — at work, in side projects, or through certifications?”
- “I see you led a 2-person team on the payments migration project. Is moving into a more formal technical lead role something you’re interested in?”
- “You have Express.js listed in your Acme Corp role. How much backend Node.js work were you doing day-to-day versus frontend React work?”
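The scorecard above maps naturally onto a small data structure. The point values and verdict threshold below are illustrative assumptions; a real system would tune weights per role and count must-haves more heavily, which is why this naive average does not reproduce the example's 82:

```python
from dataclasses import dataclass

@dataclass
class RequirementRow:
    name: str
    status: str    # "green" | "yellow" | "red"
    source: str    # "layer1" | "layer2"
    evidence: str  # the specific CV snippet backing the status

# Illustrative point values; a real system would tune these per role.
STATUS_POINTS = {"green": 100, "yellow": 60, "red": 0}

def overall_score(rows: list[RequirementRow]) -> int:
    """Naive unweighted average of per-requirement points, 0-100."""
    return round(sum(STATUS_POINTS[r.status] for r in rows) / len(rows))

def verdict(score: int) -> str:
    """Map a score to a verdict label (thresholds are assumptions)."""
    if score >= 80:
        return "Strong Match"
    if score >= 60:
        return "Possible Match"
    return "Weak Match"
```

The important part is not the arithmetic but the `evidence` field: every row carries the CV snippet that justifies its status, so the score stays auditable.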
Why Neither Layer Works Alone
Here’s a quick comparison to make the case:
| Scenario | Layer 1 Only | Layer 2 Only | Both Layers |
|---|---|---|---|
| Candidate missing a must-have keyword | ✅ Catches it | ✅ Catches it | ✅ Catches it |
| Scala engineer for Python role | ❌ No keyword match → rejected | ✅ Recognizes transferable skills | ✅ L1 flags gap, L2 adds nuance |
| Candidate has 2 years, JD needs 5 | ✅ Simple math | ✅ But slower and more expensive | ✅ L1 catches instantly, no AI needed |
| “Payments startup” for fintech role | ❌ Keywords don’t match | ✅ Understands domain | ✅ L2 adds domain relevance |
| Cost per evaluation | $0.00 | $0.05-0.15 (full profile to the model) | ~$0.01 (L1 is free; L2 gets a smaller, pre-filtered prompt) |
| Speed | Instant | 3-5 seconds | 3-5 seconds total |
Layer 1 is fast, free, and reliable for structured checks. Layer 2 is nuanced, contextual, and explains its reasoning. Together, they produce a scorecard that’s both trustworthy and actionable.
The Explainability Advantage
Here’s what separates a good matching system from a magic number generator: evidence.
Every score in the two-layer scorecard is backed by a specific piece of evidence from the candidate’s profile. “82/100” doesn’t mean the AI “felt” like it was an 82. It means:
- 5 out of 6 requirements passed Layer 1 checks ✅
- Layer 2 found strong domain relevance (+points)
- Layer 2 identified 2 gaps that need screening (−points, but recoverable)
- Salary and location are both green flags (+points)
When you walk into a screening call, you don’t just know the score. You know exactly what to ask, exactly what to validate, and exactly what the hiring manager cares about.
That’s the difference between AI matching that’s useful and AI matching that’s just a number.
What This Means for Your Workflow
With two-layer matching, your daily candidate review changes dramatically:
- Open your pipeline — see 15 candidates assigned to a role
- Glance at scores — 3 greens (85+), 7 yellows (60-84), 5 reds (below 60)
- Start with greens — open the scorecard, scan the flags, note the screening questions
- Check yellows — the scorecard tells you exactly what’s missing and whether it’s recoverable
- Skip reds with confidence — Layer 1 caught hard requirement mismatches, no guessing involved
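The triage in the steps above reduces to a simple bucketing function over scorecard scores, using the score bands from step 2:

```python
def bucket_pipeline(scores: list[int]) -> dict[str, list[int]]:
    """Sort a pipeline of scorecard scores into review buckets:
    green 85+, yellow 60-84, red below 60."""
    buckets: dict[str, list[int]] = {"green": [], "yellow": [], "red": []}
    for s in scores:
        key = "green" if s >= 85 else "yellow" if s >= 60 else "red"
        buckets[key].append(s)
    return buckets
```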
What used to take 5-15 minutes per candidate now takes 30 seconds of scorecard review. At 20 candidates per day, that’s 2+ hours saved — not on AI hype, but on a structured, evidence-based workflow.
Want to see two-layer matching in action? Try Inga CRM free — assign a candidate to a job and get your first scorecard in under 5 seconds.
Ready to stop copy-pasting?
Join recruiters who save 3+ hours daily with an AI-powered workflow.
Start Free