intermediate · 30 min · LinkedIn

LinkedIn Hiring-Signal Scraper — companies investing in your buyer’s role

Hiring for a role is the loudest buying signal LinkedIn gives away for free. This agent watches it several times a day — capturing every company posting jobs in your buyer's role family, plus the hiring team behind each post — and writes them to a deduped pipeline of accounts and decision-makers ready for outreach.

One-click build

Build this with agnt_

Skip the copy-paste. We'll spin up a builder session prepopulated with this blueprint's spec — providers, schedule, database schema, and the questions the agent should ask you to personalize it for your product.

Build with agnt_

Sign up free · no credit card

The motion

When a company posts a job for the role you sell into, three things become true at once: they have budget allocated to that function, they have an active need that role is meant to solve, and the hiring manager is the person who decides what tools that role uses. This agent harvests that signal at scale. It rotates through keyword × geo batches several times a day, captures every new job post in your target role family, deduplicates against what's already been scraped, and — for each new post — pulls the hiring team. The output is two tables: companies that just started investing, and the names of the decision-makers on each. Pair with the `linkedin-hiring-lead-enricher` blueprint to verify emails and score by ICP fit, and `gtm-email-sequencer` to push the top band into Instantly. Works for any vendor selling to a specific role — GTM Engineers, Data Engineers, CS Managers, Founding AEs, anyone with a hiring footprint on LinkedIn.

Companies with budget allocated, right now

A job posting is proof of allocated budget for the function. Every row in `job_posts` is an account that just signed off on spending money on the role you sell into.

The decision-maker comes with the signal

Get_Hiring_Team returns the people attached to each posting — usually the hiring manager + 1–2 collaborators. Those are the buyers, not a list of every random employee at the company.

Coverage rotation, no double-scraping

Keyword × geo batches rotate across runs, so the agent never hits the same posts twice. Indices are persisted in `agent_config` BEFORE searches run — crash-safe.
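The advance-then-run ordering can be sketched like this — a minimal TypeScript illustration of the rotation logic, where the field names mirror the blueprint's `agent_config` columns and the batch counts are the shipped defaults (5 keyword batches, 6 geo batches); the persistence call itself is left out:

```typescript
// Sketch of the crash-safe rotation. The agent persists `next` to
// agent_config BEFORE searching with `current`, so a mid-run crash
// still advances the rotation on the following run.
type RotationState = { keywordIdx: number; geoIdx: number };

const NUM_KEYWORD_BATCHES = 5; // default: 5 batches × 3 keywords
const NUM_GEO_BATCHES = 6;     // default: 6 batches × 4 locationIds

function advanceRotation(state: RotationState): {
  current: RotationState; // the batches this run searches
  next: RotationState;    // what gets written to agent_config first
} {
  const wrapped = state.keywordIdx + 1 === NUM_KEYWORD_BATCHES;
  return {
    current: { ...state },
    next: {
      keywordIdx: (state.keywordIdx + 1) % NUM_KEYWORD_BATCHES,
      // Step the geo index only when the keyword rotation wraps, so every
      // keyword × geo combination is eventually covered exactly once per cycle.
      geoIdx: wrapped ? (state.geoIdx + 1) % NUM_GEO_BATCHES : state.geoIdx,
    },
  };
}
```

With these defaults the rotation covers all 30 keyword × geo combinations before repeating.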

Hands-off pipeline feeder

Writes to DB only. Pair with a downstream enricher and sequencer for the full motion; the scraper is the data layer that feeds the rest.

Hiring for a role is the loudest free buying signal LinkedIn gives you — budget is allocated, the function matters, and the hiring manager is the buyer. Most teams ignore this signal because checking LinkedIn jobs daily across 4 metros and 3 keyword variants is brutal manual work. This agent automates it: rotation-batch search, dedupe, title-filter, hiring-team pull. Two tables on a daily cron.

Hiring signals harvested 2–4x daily across rotating keyword × geo batches.

Or copy a prompt into another platform

Prefer to build with OpenClaw, Hermes, or Claude Code? Drop this prompt into your agent of choice — it seeds the goal, the agntdata endpoints to use, and a step-by-step plan.

Prefer the manual walkthrough? ↓
You are helping me build a LinkedIn hiring-signal scraper. The agent runs a few times a day, searches LinkedIn for job postings in a specific role family that signal my buyer is investing, captures the company and job details, and pulls the hiring team for each post — every name on the hiring team is a potential decision-maker for the tools I sell into that function.

This blueprint is general — works for any vendor that sells to a specific role. Replace <TARGET_ROLE_FAMILY> with whatever role family your ICP hires for (GTM Engineer, Data Engineer, Customer Success Manager, Founding AE, etc.).

REFERENCE DOCS (read these before writing code)
- Full agntdata API documentation: https://agnt.mintlify.app/apis/overview
- LinkedIn endpoints used (all behind one agntdata key):
  - `Search_Jobs_V2` — keyword + location job search with pagination
  - `Get_Job_Details` — full job posting payload by job id
  - `Get_Hiring_Team` — decision-makers attached to a specific job post
  - `Search_People` — fallback for posts where Get_Hiring_Team returns empty

ABOUT MY MOTION
- Product name: <YOUR PRODUCT>
- One-line description: <WHAT IT DOES>
- Who I sell to: <BUYER ROLE FAMILY — e.g. "GTM/RevOps leaders at B2B SaaS companies">
- The signal: <TARGET_ROLE_FAMILY> — the role family whose hiring indicates a company is investing in the function you sell into

KEYWORD ROTATION
Give me 3–5 batches of 3 keywords each (15 total). The agent rotates through batches across runs so it doesn't keep hitting the same posts. Example for selling to GTM teams:
- Batch 0: "GTM Engineer", "Go-To-Market Engineer", "Revenue Engineer"
- Batch 1: "GTM Lead", "Growth Engineer", "Pipeline Engineer"
- Batch 2: "Sales Engineer GTM", "Marketing Engineer", "Demand Generation Engineer"

For each batch, I'll generalize the keywords to my own buyer's role family.

GEO COVERAGE
The blueprint ships with 6 geo batches covering 24 LinkedIn locationIds (US metros, EMEA, APAC, remote). I can shrink or extend this — for a US-only motion, 2 batches × 4 metros is plenty.

WHAT TO BUILD
- A scheduled agent on agntdata that runs 2–4x daily (use claude-sonnet-4-6 — the title-filtering judgment calls benefit from the smarter model).
- The agent rotates through keyword × geo batches using indices stored in `agent_config`. Every run hits a fresh combination; the same job is never re-scraped.
- Per run: search → dedupe against the database → fetch job details → fetch hiring team → write everything to two tables.
- Pair with a downstream lead-enrichment agent (see the `linkedin-hiring-lead-enricher` blueprint).

DATABASE (the blueprint creates these on first run)
- `job_posts` — one row per LinkedIn job post we've scraped. PK on `linkedin_job_id`. Stores company, title, location, snippet, raw payload, and a `status` lifecycle: `pending_hiring_team → hiring_team_fetched | no_hiring_team_found`.
- `hiring_leads` — one row per decision-maker found on a hiring team. Links to `job_posts` via `job_post_id`. Stores `linkedin_profile_url`, `full_name`, `title`, `company_name`. Status `pending_enrichment` so the downstream enricher knows to pick it up.
- `agent_config` — single-row config table. Holds `scraper_keyword_batch_idx`, `scraper_geo_batch_idx`, and `scraper_last_run_searches` (the rotation state).

DELIVERY
- After each run, the agent prints a JSON summary: batches used, raw results, after-dedup, after-title-filter, new jobs inserted, hiring leads added, no_hiring_team count, next batch indices.
- DO NOT push leads to any external CRM or sequencer — that is handled by downstream agents.

GUARDRAILS
- Never re-insert a job or lead that already exists in the DB (PK on `linkedin_job_id` + UPSERT on `linkedin_profile_url`).
- Title filter: after Search_Jobs_V2, only keep jobs whose title contains at least one of your target keywords (case-insensitive). LinkedIn's keyword search is loose; this gate trims the noise.
- Pace API calls — work through searches sequentially, not simultaneously.
- If a single search or page errors, log it and continue. Don't abort the whole run.
- Cap `description_snippet` to first 500 chars to keep the DB compact.
- Persist the NEXT batch indices BEFORE running searches (Step 2 below). If the run crashes mid-way, the next run still advances correctly.

When you're ready, start by asking me the ABOUT MY MOTION questions and the KEYWORD ROTATION block.

Paste this prompt into your agent of choice to scaffold the blueprint. Tweak the inputs and goal at the top of the prompt.

How to build it

9 steps. Each one links to the underlying agntdata endpoints — open them in a new tab to inspect parameters and pricing as you build.

One key gives you LinkedIn search, job details, hiring team, and Search_People — plus credit-based pricing with no monthly minimum.

The role whose hiring indicates a buying signal for your product. Vendors selling to GTM teams pick "GTM Engineer / Revenue Engineer / Growth Engineer". Vendors selling to data teams pick "Data Engineer / Analytics Engineer". The agent's coverage = whatever you list here.

Each batch is 3 keyword variants the same role goes by. The agent rotates through batches across runs so it doesn't keep hitting the same posts. Default ships with 5 batches covering 15 keywords; trim to your role family.
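The title gate that trims loose search results (see the GUARDRAILS section of the prompt) is a simple case-insensitive substring check against the active batch. A minimal sketch — the job-result shape is an assumption, not the real Search_Jobs_V2 payload:

```typescript
// Keep only jobs whose title contains at least one keyword from the
// active batch, case-insensitively. LinkedIn's keyword search is loose;
// this gate trims the noise before any paid detail/hiring-team calls.
interface JobResult {
  linkedin_job_id: string;
  title: string;
}

function titleFilter(jobs: JobResult[], keywords: string[]): JobResult[] {
  const needles = keywords.map((k) => k.toLowerCase());
  return jobs.filter((job) => {
    const title = job.title.toLowerCase();
    return needles.some((needle) => title.includes(needle));
  });
}
```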

Each batch is 4 LinkedIn `locationId` values. Default ships with 6 batches covering 24 metros (US, EMEA, APAC, remote). For a US-only motion, 2 batches × 4 metros is fine. The blueprint includes the full default geo list.

The blueprint creates `job_posts`, `hiring_leads`, and `agent_config` in your workspace database. Partial unique index on `hiring_leads.linkedin_profile_url WHERE NOT NULL` so the same contact is never inserted twice across different jobs at the same company. Seed `agent_config` with starting indices (0, 0, []).
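The dedupe step that sits in front of those tables can be sketched as below. In the real agent `existingIds` would come from a `SELECT linkedin_job_id FROM job_posts` against the workspace DB (with the PK and the partial unique index as the final backstop); here it's just an in-memory Set:

```typescript
// Sketch: drop any job id already in job_posts (and any in-page duplicate)
// before spending Get_Job_Details / Get_Hiring_Team calls on it.
function dedupeJobs<T extends { linkedin_job_id: string }>(
  results: T[],
  existingIds: Set<string>,
): T[] {
  const seen = new Set(existingIds);
  const fresh: T[] = [];
  for (const job of results) {
    if (seen.has(job.linkedin_job_id)) continue; // already scraped
    seen.add(job.linkedin_job_id);
    fresh.push(job);
  }
  return fresh;
}

// Guardrail from the prompt: cap description_snippet at the first 500 chars.
const snippet = (description: string): string => description.slice(0, 500);
```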

Click "Build with agnt_" to spin up a builder session prefilled with the schedule, workspace_db allowlist, the four LinkedIn data tools, and the rotation logic in the system prompt. The meta-agent asks you the personalization questions — role family, keyword batches, geos — then deploys.

Some job posts return an empty result from Get_Hiring_Team. The blueprint ships with an optional TypeScript skill that retries up to 2 times, then falls back to Search_People filtered by company slug + role-leadership titles. It processes up to 30 stuck posts per run; schedule it as a one-off when your backlog gets noisy.
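The retry-then-fallback shape looks roughly like this. The fetcher signatures are hypothetical stand-ins for the agntdata calls, injected here so the control flow is the only thing the sketch commits to:

```typescript
// Sketch of the retry skill: try Get_Hiring_Team for a total of
// 1 + retries attempts, then fall back to Search_People.
type Lead = { linkedin_profile_url: string; full_name: string; title: string };

async function resolveHiringTeam(
  jobId: string,
  getHiringTeam: (jobId: string) => Promise<Lead[]>,     // hypothetical wrapper
  searchPeopleFallback: (jobId: string) => Promise<Lead[]>, // hypothetical wrapper
  retries = 2,
): Promise<{ leads: Lead[]; source: "hiring_team" | "search_people" }> {
  for (let attempt = 0; attempt <= retries; attempt++) {
    const leads = await getHiringTeam(jobId);
    if (leads.length > 0) return { leads, source: "hiring_team" };
  }
  // Still empty after all attempts: company slug + role-leadership titles
  // via Search_People, so the post ends up with at least plausible buyers.
  return { leads: await searchPeopleFallback(jobId), source: "search_people" };
}
```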

Default: 4 runs per day at 06:00 / 11:00 / 16:00 / 21:00 in your timezone. Do a dry run on a single keyword batch first — verify that roughly 20 jobs come back, each with a few hiring leads. After 24 hours you'll have a full first-day backlog.

This blueprint is the scraper layer. Pair with `linkedin-hiring-lead-enricher` to verify emails and score by ICP fit, then `gtm-email-sequencer` to push the top band into Instantly. The three together form a complete hiring-signal → personalized outbound pipeline.

Endpoints used

The agntdata endpoints this blueprint depends on. All available with one API key.

LinkedIn · GET

Search Jobs V2

/search-jobs-v2

The core search call, filtered by keyword + locationId + datePosted=pastWeek + sort=mostRecent. Paginated up to 4 pages × 25 results.

View endpoint docs
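The pagination loop per keyword × geo combination can be sketched as follows — `searchJobsV2` is a hypothetical client wrapper, not the real agntdata SDK:

```typescript
// Collect up to 4 pages × 25 results for one keyword + locationId search,
// stopping early when a short page signals the end of results.
interface SearchPage {
  jobs: { linkedin_job_id: string }[];
}

async function searchAllPages(
  searchJobsV2: (params: {
    keyword: string;
    locationId: string;
    page: number;
  }) => Promise<SearchPage>,
  keyword: string,
  locationId: string,
  maxPages = 4,
  pageSize = 25,
): Promise<{ linkedin_job_id: string }[]> {
  const all: { linkedin_job_id: string }[] = [];
  for (let page = 0; page < maxPages; page++) {
    const { jobs } = await searchJobsV2({ keyword, locationId, page });
    all.push(...jobs);
    if (jobs.length < pageSize) break; // short page = last page
  }
  return all;
}
```

Running searches sequentially through a helper like this also satisfies the pacing guardrail from the prompt.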
LinkedIn · GET

Get Job Details

/get-job-details

One call per new job_id to extract title, company, location, employment type, and the first 500 chars of the description.

View endpoint docs
LinkedIn · GET

Get Hiring Team

/get-hiring-team

The decision-maker list attached to each posting. Usually the hiring manager + 1–2 collaborators — your actual buyer.

View endpoint docs
LinkedIn · GET

Search People

/search-people

Fallback used by the retry skill when Get_Hiring_Team returns empty. Filtered by company slug + role-leadership titles.

View endpoint docs

Ship this blueprint today

One click spins up a builder session prefilled with this blueprint's spec. We'll ask you a handful of personalization questions, then generate the agent.

Related blueprints

Browse all →
LinkedIn · intermediate · 20 min

Turn the people quietly liking and commenting on LinkedIn posts about your space into a deduped pipeline of warm leads — refreshed every day, no manual scrolling.

Signal Detection · Cold Outbound · ICP Discovery · Founder · Growth Marketer
LinkedIn · X (Twitter) · agntdata Lead APIs · intermediate · 25 min

Every day, find creators posting about your space on LinkedIn + X — filter for 1k+ followers and topic-relevance, enrich with verified emails, save to a deduped partnerships table. Pair with the creator outreach writer to actually pitch them.

ICP Discovery · Cold Outbound · Signal Detection · Founder · Growth Marketer
Reddit · intermediate · 15 min

Monitor 15+ subreddits twice a day for prospects describing the exact pain your product solves. AI-scored, deduplicated, and pushed to Slack — for around $4 a month.

Signal Detection · Cold Outbound · ICP Discovery · Founder · Growth Marketer