LinkedIn Hiring Lead Enricher — emails verified + buying committee inferred
Takes the partial hiring contacts a scraper produces and turns them into a sequenceable pipeline: verified emails for every lead, plus inferred buying committees for the hiring companies where LinkedIn returned no hiring team. Hourly cron. Pair upstream with a hiring-signal scraper, downstream with a sequencer.
Build this with agnt_
Skip the copy-paste. We'll spin up a builder session prepopulated with this blueprint's spec — providers, schedule, database schema, and the questions the agent should ask you to personalize it for your product.
Sign up free · no credit card
The motion
Hiring scrapers produce contacts that are almost-but-not-quite ready for outreach: profile URLs without verified emails, and a non-trivial fraction of job posts where LinkedIn's Get_Hiring_Team returns empty. This agent closes both gaps in one hourly run. Job A drains pending_enrichment leads through a multi-provider email-finder waterfall (Hunter → Prospeo → LeadMagic via the agntdata orchestrator), then verifies each email and tags the row as verified / risky / invalid. Job B handles orphaned hiring companies: for every job_post stuck in 'no_hiring_team_found', it runs people_search with seniority + department filters, classifies each result by title (decision_maker / function_owner / recruiter / other), and inserts qualifying stakeholders as new pending_enrichment leads — which Job A picks up on the next run. The buying committee is reconstructed from the ground up. Output is two state changes: lead rows go from pending_enrichment to verified; orphaned job_posts move to hiring_team_inferred. Pair with `linkedin-hiring-signal-scraper` upstream and `gtm-email-sequencer` downstream for the full motion.
people_email_finder waterfalls Hunter → Prospeo → LeadMagic on a single call. people_email_verifier gates on `deliverable`. You pay one credit per lead instead of stacking 3 vendor subscriptions.
When a job post has no hiring team, people_search finds C-suite, VPs, directors, and founders in sales / marketing / BD at the company. Each is classified by title — only decision_maker and function_owner roles get inserted.
Up to 50 leads in Job A + 10 job_posts in Job B per run. Atomic claim (UPDATE to in_progress) means concurrent runs never double-process. Predictable cost per hour.
This agent writes to the workspace DB only. The downstream sequencer (e.g. gtm-email-sequencer) picks up verified rows. Separation of concerns: enrichment and activation are different agents.
The bridge between a scraper and a sequencer. Scrapers produce profile URLs without emails, and a chunk of job posts where the hiring team is hidden. This agent finishes both: a multi-vendor email waterfall on the pending leads, plus a people_search-driven inference of buying committees for orphaned jobs. By the time the data lands in the sequencer, every lead has a verified email and every hiring company has at least one decision_maker or function_owner contact.
Build this with agnt_
Skip the copy-paste. We'll spin up a builder session prepopulated with this blueprint's spec — providers, schedule, database schema, and the questions the agent should ask you to personalize it for your product.
Sign up free · no credit card
Or copy a prompt into another platform
Prefer to build with OpenClaw, Hermes, or Claude Code? Drop this prompt into your agent of choice — it seeds the goal, the agntdata endpoints to use, and a step-by-step plan.
You are helping me build a LinkedIn Hiring Lead Enricher agent. This is the enrichment layer that sits between a hiring-signal scraper (which produces partial contacts) and an outbound sequencer. It does two jobs every run:
JOB A — Enrich existing leads. Take rows from `hiring_leads` with `enrichment_status='pending_enrichment'`, fetch their full LinkedIn profile, find their email via the agntdata multi-provider waterfall, verify the email, and update the row.
JOB B — Multi-stakeholder discovery. For job posts where `Get_Hiring_Team` returned nothing (`job_posts.status='no_hiring_team_found'`), fall back to a people-search by company + role-leadership filters. Insert every qualifying stakeholder into `hiring_leads` as new pending_enrichment rows.
The agent runs hourly and processes up to 50 leads in Job A + 10 job posts in Job B per run.
REFERENCE DOCS (read these before writing code)
- Full agntdata API documentation: https://agnt.mintlify.app/apis/overview
- agntdata orchestrator endpoints (used here):
- `people_email_finder` — multi-provider waterfall (Hunter → Prospeo → LeadMagic etc.); one call, one credit charge.
- `people_email_verifier` — returns `deliverable` / `risky` / `unknown` / `undeliverable`.
- `people_search` — company + filters → stakeholders. Better-curated than calling LinkedIn directly.
- `people_enrich`, `people_bulk_enrich` — single-call enrichment for cases where you have a linkedin_url and want everything (profile + email + verify) in one shot.
- LinkedIn endpoints used as fallbacks: `Get_Profile_Data_By_URL`, `Search_People`.
ABOUT MY MOTION
- Product name: <YOUR PRODUCT>
- One-line description: <WHAT IT DOES>
- Who I sell to: <BUYER ROLE FAMILY — same as the upstream scraper>
- Upstream scraper: <linkedin-hiring-signal-scraper or your own equivalent — produces `hiring_leads` + `job_posts` tables>
ROLE CLASSIFICATION
Job B classifies discovered stakeholders into 4 buckets and inserts only two of them. Define your title→role mapping:
- `decision_maker` — final-call buyer. Default titles: Founder, Co-founder, CEO, President, COO, CRO.
- `function_owner` — functional owner of the area you sell into. Default titles for selling to GTM: VP Sales, Head of Sales, Director of Sales, VP Revenue, CMO, etc. Customize for your buyer.
- `recruiter` — Talent / Recruiting / HR. Inserted only if you sell to recruiters; otherwise SKIPPED.
- `other` — anything else. SKIPPED.
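The taxonomy above can be sketched as a naive keyword matcher. This is illustrative only: the title lists come from the defaults above, but the substring-matching logic is an assumption (the deployed agent classifies with the LLM, which avoids pitfalls like "Vice President" matching "president").

```python
# Illustrative title -> role classifier using the default lists above.
# Substring matching is a simplification; the real agent classifies via LLM.
DECISION_MAKER_TITLES = ["founder", "ceo", "president", "coo", "cro"]
FUNCTION_OWNER_TITLES = ["vp sales", "head of sales", "director of sales",
                         "vp revenue", "cmo"]
RECRUITER_TITLES = ["recruiter", "recruiting", "talent", "hr"]

def classify_title(title: str) -> str:
    t = title.lower()
    if any(kw in t for kw in DECISION_MAKER_TITLES):
        return "decision_maker"
    if any(kw in t for kw in FUNCTION_OWNER_TITLES):
        return "function_owner"
    if any(kw in t for kw in RECRUITER_TITLES):
        return "recruiter"
    return "other"

print(classify_title("Co-Founder & CEO"))  # decision_maker
print(classify_title("VP Sales, EMEA"))    # function_owner
print(classify_title("Senior Recruiter"))  # recruiter
print(classify_title("Staff Engineer"))    # other
```

Only `decision_maker` and `function_owner` results get inserted by default; flip the recruiter branch to an insert only if you sell to recruiters.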
WHAT TO BUILD
- A scheduled agent on agntdata that runs hourly (claude-sonnet-4-6 — role classification benefits from the smarter model).
- Job A: claim pending_enrichment leads atomically (UPDATE to 'in_progress' first), then waterfall through profile fetch → email finder → email verifier. Update the row's enrichment_status to 'verified' / 'risky' / 'invalid' / 'email_not_found'.
- Job B: scan `job_posts.status='no_hiring_team_found'`, call people_search per company with seniority + department filters, classify each result, dedupe against `hiring_leads.linkedin_profile_url`, insert qualifying rows as new pending_enrichment leads. Update the job_post status to 'hiring_team_inferred' on success.
- DO NOT push to Instantly or any external CRM. That is the downstream sequencer's job.
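The atomic-claim pattern in Job A can be sketched with a single conditional UPDATE (shown here against an in-memory SQLite stand-in; the workspace DB's exact SQL dialect may differ). Because the status check and the flip to 'in_progress' happen in one statement, a concurrent run can never claim the same row.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE hiring_leads (
    id INTEGER PRIMARY KEY,
    enrichment_status TEXT NOT NULL DEFAULT 'pending_enrichment')""")
conn.executemany(
    "INSERT INTO hiring_leads (enrichment_status) VALUES (?)",
    [("pending_enrichment",)] * 60 + [("verified",)] * 5)

# Atomic claim: flip up to 50 pending rows to 'in_progress' in one UPDATE.
# The WHERE re-checks the status, so concurrent runs never double-process.
cur = conn.execute("""
    UPDATE hiring_leads
       SET enrichment_status = 'in_progress'
     WHERE enrichment_status = 'pending_enrichment'
       AND id IN (SELECT id FROM hiring_leads
                   WHERE enrichment_status = 'pending_enrichment'
                   ORDER BY id LIMIT 50)
""")
claimed = cur.rowcount
print(claimed)  # 50 -- 10 pending rows remain for the next hourly run
```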
DATABASE (must already exist — owned by the upstream scraper)
- `hiring_leads` — has at minimum: id, linkedin_profile_url, full_name, company_name, title, job_post_id, raw_hiring_team_data, enrichment_status. This blueprint adds: first_name, last_name, company_domain, email, email_verified, email_status, stakeholder_role.
- `job_posts` — read `company_name` and `status`; update `status` to 'hiring_team_inferred' when Job B succeeds.
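As a sketch, the columns this blueprint adds to the scraper-owned table look like the following migration (SQLite flavor; the column types are assumptions, so match them to your workspace DB):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Minimal stand-in for the scraper-owned table (base columns from above).
conn.execute("""CREATE TABLE hiring_leads (
    id INTEGER PRIMARY KEY,
    linkedin_profile_url TEXT,
    full_name TEXT,
    company_name TEXT,
    title TEXT,
    job_post_id INTEGER,
    raw_hiring_team_data TEXT,
    enrichment_status TEXT)""")

# Columns this blueprint adds; types here are assumptions.
for col, typ in [("first_name", "TEXT"), ("last_name", "TEXT"),
                 ("company_domain", "TEXT"), ("email", "TEXT"),
                 ("email_verified", "INTEGER"), ("email_status", "TEXT"),
                 ("stakeholder_role", "TEXT")]:
    conn.execute(f"ALTER TABLE hiring_leads ADD COLUMN {col} {typ}")

cols = [row[1] for row in conn.execute("PRAGMA table_info(hiring_leads)")]
print(len(cols))  # 15 -- 8 base columns + 7 added by the enricher
```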
DELIVERY
- Compact JSON summary at the end of each run with Job A + Job B counts (see WORKFLOW below for the schema).
GUARDRAILS
- Max 50 leads per Job A run, max 10 job posts per Job B run — keep runs bounded.
- Never re-process where `enrichment_status != 'pending_enrichment'`. The atomic UPDATE to 'in_progress' is the lock.
- Wrap each lead in try/catch — on error, set `enrichment_status='error'` and continue.
- Multiple contacts per company is expected and correct. Do not skip a company because another lead from there exists.
- Never push to Instantly here. The sequencer handles activation.
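The per-lead try/catch guardrail can be sketched as below. `enrich_one` is a hypothetical stand-in for the real profile-fetch → email-finder → email-verifier waterfall; the verdict-to-status mapping follows the blueprint (`deliverable` → verified, `risky` → risky, `undeliverable` → invalid, no email → email_not_found).

```python
# Per-lead error isolation: one failing lead never aborts the run.
STATUS_BY_VERDICT = {"deliverable": "verified",
                     "risky": "risky",
                     "undeliverable": "invalid"}

def enrich_one(lead):
    """Hypothetical waterfall stand-in; raises on provider failure."""
    if lead["id"] == 2:
        raise RuntimeError("provider timeout")  # simulate a mid-run failure
    return lead["verdict"]

leads = [{"id": 1, "verdict": "deliverable"},
         {"id": 2, "verdict": "deliverable"},
         {"id": 3, "verdict": "undeliverable"},
         {"id": 4, "verdict": None}]  # waterfall found no email

results = {}
for lead in leads:
    try:
        verdict = enrich_one(lead)
        results[lead["id"]] = STATUS_BY_VERDICT.get(verdict, "email_not_found")
    except Exception:
        results[lead["id"]] = "error"  # record the failure, keep going
print(results)
# {1: 'verified', 2: 'error', 3: 'invalid', 4: 'email_not_found'}
```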
When you're ready, start by asking me the questions in the ABOUT MY MOTION and ROLE CLASSIFICATION blocks.
Paste into OpenClaw to scaffold this agent. Tweak the inputs and goal at the top of the prompt.
How to build it
8 steps. Each one links to the underlying agntdata endpoints — open them in a new tab to inspect parameters and pricing as you build.
One key gives you the agntdata orchestrator endpoints (people_email_finder, people_email_verifier, people_search) and the LinkedIn fallbacks. Credit-based pricing — one credit per successful enrichment.
This blueprint assumes a scraper is populating hiring_leads with rows where enrichment_status='pending_enrichment' and job_posts with status='no_hiring_team_found' for orphans. The `linkedin-hiring-signal-scraper` blueprint does both; if you have your own scraper, point this enricher at its tables instead.
Decide which titles count as decision_maker and which as function_owner. Default decision_maker: Founder/CEO/President/COO/CRO. Default function_owner (selling to GTM teams): VP Sales, Head of Sales, Director of Sales, VP Revenue, CMO, etc. Customize for your buyer — selling to data teams? function_owner becomes VP Data / Head of Analytics / etc.
Default behavior is to skip recruiter results. If you sell to recruiters specifically (talent tools, ATSes, etc.), flip this so Job B inserts recruiter rows too. Only matters in Job B classification.
Click 'Build with agnt_' to scaffold the agent with the 7 data tools, the workspace_db allowlist for hiring_leads + job_posts, and the two-job workflow in the system prompt. The meta-agent asks you the personalization questions — product, buyer, taxonomy — then deploys.
Default cron: `0 * * * *` (hourly). Dry-run on a single pending_enrichment lead — verify the profile fetch + email finder + verifier waterfall lands the row at 'verified'. Then dry-run Job B on a single no_hiring_team_found post — verify at least one decision_maker or function_owner gets inserted.
Once leads are landing at enrichment_status='verified', point your sequencer at them. The `gtm-email-sequencer` blueprint reads verified rows from hiring_leads, generates personalized variants by role + company size, and pushes to Instantly. Together: scraper → enricher → sequencer is the full motion.
Pull 20 random Job B inserts and read their `title`. Are they actually your buyer? If you see too many false positives (e.g. CMOs at agencies, not in-house), tighten the title list. If you see too many qualifying contacts ending up as 'other', loosen it.
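The spot-check above is a one-liner against the workspace DB. A minimal sketch (SQLite's `ORDER BY RANDOM()`; use your engine's equivalent):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE hiring_leads (
    id INTEGER PRIMARY KEY, title TEXT, stakeholder_role TEXT)""")
conn.executemany(
    "INSERT INTO hiring_leads (title, stakeholder_role) VALUES (?, ?)",
    [("CEO", "decision_maker"), ("VP Sales", "function_owner"),
     ("CMO", "function_owner"), ("Recruiter", "recruiter")] * 10)

# Sample 20 random Job B inserts for title QA.
sample = conn.execute("""
    SELECT title, stakeholder_role FROM hiring_leads
     WHERE stakeholder_role IN ('decision_maker', 'function_owner')
     ORDER BY RANDOM() LIMIT 20
""").fetchall()
print(len(sample))  # 20
```

Read the sampled titles and ask whether each is actually your buyer; tighten or loosen the title lists accordingly.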
Endpoints used
The agntdata endpoints this blueprint depends on. All available with one API key.
Find a person's professional email
/people/email-finder
Multi-vendor email-finder waterfall (Hunter → Prospeo → LeadMagic). One call, one credit charge. Caps at max_cost_cents=10 per lead.
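For orientation, a request body might look like the sketch below. Only `max_cost_cents` appears on this page; every other field name is an assumption, so check the endpoint docs for the real schema before wiring this up.

```python
import json

# Hypothetical request body for /people/email-finder.
# max_cost_cents is from this page; the other field names are guesses.
payload = {
    "first_name": "Ada",              # hypothetical lead
    "last_name": "Lovelace",
    "company_domain": "example.com",
    "max_cost_cents": 10,             # cap waterfall spend per lead
}
print(json.dumps(payload, indent=2))
```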
View endpoint docs
Verify a professional email
/people/email-verifier
Returns deliverable / risky / unknown / undeliverable. Maps to enrichment_status: 'verified' / 'risky' / 'invalid'. Only 'verified' leads should reach the sequencer.
View endpoint docs
Search people
/people/search
Job B core: given a company name + seniority + department filters, returns up to 10 candidate stakeholders. Better-curated than calling LinkedIn Search_People directly.
View endpoint docs
Enrich a person
/people/enrich
Single-call enrichment combining profile fetch + email finder + verifier for cases where you have a linkedin_url and want everything in one shot. Available as a tool but not required.
View endpoint docs
Get Profile Data By URL
/get-profile-data-by-url
Used in Job A to backfill first_name, last_name, and company_domain on hiring_leads rows that came in with only a linkedin_profile_url.
View endpoint docs
Search People
/search-people
Job B fallback when people_search returns 0 results. Same keywordTitle + company filtering pattern.
View endpoint docs
Ship this blueprint today
One click spins up a builder session prefilled with this blueprint's spec. We'll ask you a handful of personalization questions, then generate the agent.
Related blueprints
Browse all →
Hand any X username to this agent and get back a qualified, ICP-scored lead with a verified email and a resolved LinkedIn profile.
Hand any LinkedIn profile URL to this agent and get back a qualified, ICP-scored lead with a verified email and a website summary attached.
Hiring for a role is the loudest buying signal LinkedIn gives away for free. This agent watches it daily — captures every company posting jobs in your buyer's role family, plus the hiring team behind each post — and writes them to a deduped pipeline of accounts + decision-makers ready for outreach.