Introduction
A single WHOIS lookup is straightforward. One domain, one API call, one JSON response. But real-world workflows rarely operate at that scale. A registrar auditing a client portfolio, a security analyst enriching an IOC list, or a brand team sweeping for typosquats all share the same constraint: they need WHOIS data for dozens, hundreds, or thousands of domains — fast and reliably.
Querying them sequentially is the obvious but wrong approach. Processed one at a time, with each lookup costing a few seconds of network round-trip and parsing, a list of 1,000 domains takes the better part of an hour even on a fast connection. The solution is controlled parallelisation: sending multiple requests concurrently within the rate limit of your API plan, collecting structured JSON results, and handling the edge cases (timeouts, unsupported TLDs, privacy-shielded records) gracefully.
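As a back-of-envelope check, assuming roughly 3 seconds per sequential lookup (an illustrative figure, not a measured one):

```python
# Back-of-envelope timing: sequential vs. concurrent processing.
# The 3-second per-lookup figure is an assumption for illustration.
SECONDS_PER_LOOKUP = 3
DOMAINS = 1_000

sequential_minutes = DOMAINS * SECONDS_PER_LOOKUP / 60
print(f"Sequential: ~{sequential_minutes:.0f} min")

# With N requests in flight at once, wall-clock time divides by N
# (until you hit your plan's rate limit).
CONCURRENCY = 30
concurrent_minutes = sequential_minutes / CONCURRENCY
print(f"Concurrent (x{CONCURRENCY}): ~{concurrent_minutes:.1f} min")
```

The same list drops from about 50 minutes to under 2 minutes with 30 requests in flight, which is why concurrency, not raw bandwidth, is the lever that matters.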
This guide explains how to do exactly that. We cover the principles behind bulk WHOIS via API, the concrete code patterns in Node.js and Python, error handling strategies for production pipelines, and how to match your API plan to your throughput requirements.
What Is Bulk WHOIS Lookup?
Bulk WHOIS lookup means querying registration data — registrar, creation date, expiry, nameservers, contacts — for a list of domain names rather than a single domain. The output is a structured dataset covering every domain in the input list, normalised into a consistent format regardless of the TLD or registrar involved.
The WHOIS protocol itself has no bulk mode. It is a per-query TCP protocol: one connection per domain, one plain-text response, parsed differently by every registry. RDAP (the modern replacement) is also per-query. No major registry offers a native batch endpoint. Developers who try to query raw WHOIS or RDAP servers directly at scale hit hard per-IP rate limits — Verisign caps anonymous .com/.net queries; many ccTLD registries apply even stricter throttles — and quickly end up IP-banned.
A managed WHOIS API solves this. It pools requests across a distributed infrastructure, handles WHOIS/RDAP routing per TLD automatically, caches responses to avoid redundant upstream queries, and exposes a single authenticated REST endpoint. The client sends one HTTPS request per domain; the API handles everything behind it. Bulk processing becomes a client-side concurrency problem: how many requests to send simultaneously without exceeding the plan's rate limit.
Common Use Cases
The five scenarios below represent the most common reasons developers reach for bulk WHOIS:
Domain Portfolio Management
Registrars and resellers maintain portfolios of hundreds to tens of thousands of domains. Bulk WHOIS is used to audit expiry dates, detect nameserver changes, verify registrar accuracy, and surface domains approaching renewal deadlines — typically as a nightly or weekly batch job.
Cybersecurity & Threat Intelligence
IOC enrichment pipelines receive domain indicators from threat feeds, SIEM alerts, or email gateway logs. Each IOC must be enriched with registration age, registrar, and nameserver data to calculate risk. Bulk WHOIS enables processing thousands of indicators per day within a single API plan.
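The age-based part of such a risk check can be sketched in a few lines; the function name, the 30-day threshold, and the heuristic itself are illustrative choices, while the date format matches the created field in the normalised output shown later in this guide:

```python
from datetime import datetime, timezone

def is_newly_registered(created: str, threshold_days: int = 30) -> bool:
    """Flag domains registered within the last `threshold_days` days,
    a common heuristic in IOC scoring: phishing domains are often
    only days old when they appear in feeds."""
    created_dt = datetime.strptime(created, "%Y-%m-%d %H:%M:%S")
    created_dt = created_dt.replace(tzinfo=timezone.utc)
    age_days = (datetime.now(timezone.utc) - created_dt).days
    return age_days < threshold_days

print(is_newly_registered("2007-10-09 18:20:50"))  # False: registered in 2007
```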
Brand Protection
Brand monitoring teams generate lists of typosquatting candidates (character substitution, hyphenation, IDN homoglyphs) across multiple TLDs and check which are already registered. A sweep of 500 permutations across 10 TLDs is 5,000 WHOIS queries — a job for bulk, not interactive, lookup.
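A character-substitution generator along these lines takes only a few lines of Python; the substitution map and TLD list below are small illustrative subsets, not a production ruleset:

```python
from itertools import product

# Small illustrative subset of common look-alike substitutions.
SUBSTITUTIONS = {"o": "0", "l": "1", "e": "3", "i": "1"}
TLDS = ["com", "net", "org", "co", "io"]

def typosquat_candidates(name: str) -> list[str]:
    """Generate single-character substitution variants of `name`,
    expanded across each TLD. Real sweeps add hyphenation, character
    omission, and IDN homoglyphs on top of this."""
    variants = set()
    for i, ch in enumerate(name):
        if ch in SUBSTITUTIONS:
            variants.add(name[:i] + SUBSTITUTIONS[ch] + name[i + 1:])
    return [f"{v}.{tld}" for v, tld in product(sorted(variants), TLDS)]

candidates = typosquat_candidates("example")
print(len(candidates))  # 3 variants x 5 TLDs = 15 candidates
```

Each candidate then becomes one WHOIS query in the bulk batch, which is how permutation counts multiply into thousands of lookups.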
Expired Domain Hunting
SEO practitioners and domain investors monitor curated lists for domains entering the expiry lifecycle. Bulk WHOIS lets them check expiration.daysLeft and EPP status codes (pendingDelete, redemptionPeriod) across hundreds of targets simultaneously.
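Once the bulk results are collected, filtering for domains in the deletion pipeline is a one-liner; this sketch assumes each result dict carries the status list from the normalised output shown later in this guide:

```python
DROP_STATUSES = {"pendingDelete", "redemptionPeriod"}

def dropping_soon(results: list[dict]) -> list[dict]:
    """Keep only domains whose EPP status codes indicate the
    expiry/deletion lifecycle."""
    return [r for r in results if DROP_STATUSES & set(r.get("status", []))]

results = [
    {"domain": "stale-blog.com", "status": ["redemptionPeriod"]},
    {"domain": "github.com", "status": ["clientDeleteProhibited"]},
]
print([r["domain"] for r in dropping_soon(results)])  # ['stale-blog.com']
```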
Compliance & Due Diligence
Legal and compliance teams vet third-party vendor domains, affiliate networks, or acquisition targets by cross-checking registration data against disclosed information. Bulk lookup accelerates the data-gathering phase from days to minutes.
How to Process Domains in Bulk with WhoisJSON
The WhoisJSON API exposes a single per-domain endpoint: GET https://whoisjson.com/api/v1/whois?domain=DOMAIN. There is no dedicated bulk endpoint — and that is by design. A per-domain endpoint with controlled client-side concurrency is more flexible and more resilient than a monolithic batch call: it lets you retry individual failures, process results as they arrive, and tune concurrency to exactly match your plan's rate limit.
The correct pattern is to send N requests concurrently, where N is chosen so that the sustained throughput stays within your plan's requests-per-minute (RPM) ceiling. Every API response also includes a Remaining-Requests header showing your live monthly quota balance, so your code can back off gracefully before hitting the quota wall.
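A quota guard using that header can be as simple as the sketch below; the function name check_quota and the 50-request safety floor are illustrative choices, while the Remaining-Requests header name comes from the API responses described above:

```python
def check_quota(headers: dict, floor: int = 50) -> bool:
    """Return True if it is safe to keep sending requests.
    `floor` is a safety margin: pause the batch before the
    monthly quota actually reaches zero."""
    remaining = int(headers.get("Remaining-Requests", 0))
    return remaining > floor

# Example: headers as returned alongside an API response
print(check_quota({"Remaining-Requests": "840"}))  # True
print(check_quota({"Remaining-Requests": "12"}))   # False
```

Call it between batches (or every N responses) and pause or abort the run when it returns False.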
The table below shows estimated processing time for a list of 1,000 domains at each plan's RPM ceiling:
| Plan | Rate Limit (RPM) | Est. time — 1,000 domains | Monthly quota |
|---|---|---|---|
| Basic | 20 RPM | ~50 min | 1,000 |
| Pro | 40 RPM | ~25 min | 30,000 |
| Ultra | 60 RPM | ~17 min | 150,000 |
| Mega | 80 RPM | ~13 min | Unlimited |
| Giga | 200 RPM | ~5 min | Unlimited |
| Tera | 300 RPM | ~3.5 min | Unlimited |
| Atlas | 900 RPM | ~1.5 min | Unlimited |
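The estimates above are simple division (minutes ≈ domains ÷ RPM), which you can reproduce for any list size:

```python
def batch_minutes(domains: int, rpm: int) -> float:
    """Estimated wall-clock time for a batch, assuming the client
    sustains the plan's full RPM ceiling."""
    return domains / rpm

for plan, rpm in [("Basic", 20), ("Pro", 40), ("Ultra", 60), ("Giga", 200)]:
    print(f"{plan}: ~{batch_minutes(1_000, rpm):.0f} min")
# Basic: ~50 min, Pro: ~25 min, Ultra: ~17 min, Giga: ~5 min
```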
Code Examples
Node.js — Promise.all with p-limit
The p-limit package provides a concurrency-limited wrapper around any async function. Set the limit to roughly 80% of your plan's RPM and process the domain list with Promise.all. Results are collected as they resolve; failures are caught per-domain and logged separately.
// npm install p-limit
import pLimit from 'p-limit';

const API_KEY = 'YOUR_API_KEY';
const BASE_URL = 'https://whoisjson.com/api/v1/whois';

/**
 * Fetch WHOIS data for a single domain.
 * Returns { domain, data } on success or { domain, error } on failure.
 */
async function fetchWhois(domain, retried = false) {
  const url = `${BASE_URL}?domain=${encodeURIComponent(domain)}`;
  const controller = new AbortController();
  const timeout = setTimeout(() => controller.abort(), 15_000);
  try {
    const res = await fetch(url, {
      headers: { Authorization: `TOKEN=${API_KEY}` },
      signal: controller.signal,
    });
    clearTimeout(timeout);
    if (res.status === 429 && !retried) {
      // Rate limit hit — back off 60 s and retry once
      await new Promise(r => setTimeout(r, 60_000));
      return fetchWhois(domain, true);
    }
    if (!res.ok) {
      return { domain, error: `HTTP ${res.status}` };
    }
    const data = await res.json();
    return { domain, data };
  } catch (err) {
    clearTimeout(timeout);
    return { domain, error: err.message };
  }
}

/**
 * Bulk WHOIS lookup with controlled concurrency.
 * @param {string[]} domains - list of domain names
 * @param {number} rpm - concurrency limit; set to ~80% of your plan's RPM
 * @returns {{ results: object[], errors: object[] }}
 */
async function bulkWhois(domains, rpm = 32) {
  const limit = pLimit(rpm);
  const results = [];
  const errors = [];

  const tasks = domains.map(domain =>
    limit(() => fetchWhois(domain))
  );
  const settled = await Promise.all(tasks);

  for (const item of settled) {
    if (item.error) {
      errors.push(item);
    } else {
      results.push({
        domain: item.domain,
        registrar: item.data.registrar?.name ?? null,
        created: item.data.created ?? null,
        expires: item.data.expires ?? null,
        status: item.data.status ?? [],
        source: item.data.source ?? null,
      });
    }
  }
  return { results, errors };
}

// --- Usage ---
const domains = [
  'github.com',
  'cloudflare.com',
  'stripe.com',
  // ... up to thousands of entries
];

const { results, errors } = await bulkWhois(domains, 32);
console.log(JSON.stringify(results, null, 2));
if (errors.length) {
  console.error('Failed domains:', errors);
}
Python — asyncio + aiohttp + Semaphore
Python's asyncio combined with aiohttp is the standard approach for high-concurrency HTTP in Python. An asyncio.Semaphore caps the number of in-flight requests to match your plan's RPM.
# pip install aiohttp
import asyncio
import json

import aiohttp

API_KEY = "YOUR_API_KEY"
BASE_URL = "https://whoisjson.com/api/v1/whois"
HEADERS = {"Authorization": f"TOKEN={API_KEY}"}


async def fetch_whois(
    session: aiohttp.ClientSession,
    semaphore: asyncio.Semaphore,
    domain: str,
    retried: bool = False,
) -> dict:
    """
    Fetch WHOIS data for a single domain with concurrency control.
    Returns a dict with 'domain' and either 'data' or 'error'.
    """
    async with semaphore:
        try:
            async with session.get(
                BASE_URL,
                params={"domain": domain},
                headers=HEADERS,
                timeout=aiohttp.ClientTimeout(total=15),
            ) as resp:
                if resp.status != 429:
                    if resp.status != 200:
                        return {"domain": domain, "error": f"HTTP {resp.status}"}
                    data = await resp.json()
                    return {"domain": domain, "data": data}
        except asyncio.TimeoutError:
            return {"domain": domain, "error": "timeout"}
        except Exception as e:
            return {"domain": domain, "error": str(e)}
    # Only a 429 reaches this point. Back off outside the semaphore
    # block so the slot is freed for other tasks, then retry once.
    if retried:
        return {"domain": domain, "error": "HTTP 429"}
    await asyncio.sleep(60)
    return await fetch_whois(session, semaphore, domain, retried=True)


def normalise(item: dict) -> dict:
    """Extract the key fields for the output dataset."""
    d = item.get("data", {})
    return {
        "domain": item["domain"],
        "registrar": (d.get("registrar") or {}).get("name"),
        "created": d.get("created"),
        "expires": d.get("expires"),
        "status": d.get("status", []),
        "source": d.get("source"),
    }


async def bulk_whois(domains: list[str], rpm: int = 32) -> dict:
    """
    Bulk WHOIS lookup with controlled concurrency.

    :param domains: list of domain names
    :param rpm: concurrency limit — set to ~80% of your plan's RPM
    :returns: {'results': [...], 'errors': [...]}
    """
    semaphore = asyncio.Semaphore(rpm)
    results, errors = [], []
    async with aiohttp.ClientSession() as session:
        tasks = [
            fetch_whois(session, semaphore, domain)
            for domain in domains
        ]
        settled = await asyncio.gather(*tasks)
    for item in settled:
        if "error" in item:
            errors.append(item)
        else:
            results.append(normalise(item))
    return {"results": results, "errors": errors}


# --- Usage ---
if __name__ == "__main__":
    domains = [
        "github.com",
        "cloudflare.com",
        "stripe.com",
        # ... up to thousands of entries
    ]
    output = asyncio.run(bulk_whois(domains, rpm=32))
    print(json.dumps(output["results"], indent=2))
    if output["errors"]:
        print("Failed:", output["errors"])
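For downstream tooling (spreadsheets, SIEM ingestion), the normalised results convert to CSV with the standard library alone; this sketch assumes the list-of-dicts shape produced by the normalise() helper above, and the pipe-delimited status cell is an illustrative choice:

```python
import csv
import io

def results_to_csv(results: list[dict]) -> str:
    """Serialise normalised WHOIS results to CSV. The status
    list is joined into a single pipe-delimited cell."""
    fields = ["domain", "registrar", "created", "expires", "status", "source"]
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fields)
    writer.writeheader()
    for row in results:
        writer.writerow(dict(row, status="|".join(row.get("status") or [])))
    return buf.getvalue()

sample = [{
    "domain": "github.com",
    "registrar": "MarkMonitor, Inc.",
    "created": "2007-10-09 18:20:50",
    "expires": "2026-10-09 18:20:50",
    "status": ["clientDeleteProhibited"],
    "source": "rdap",
}]
print(results_to_csv(sample))
```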
Normalised JSON output
The sample below shows the fields extracted by the normalise() helper used in the examples above. This is a minimised subset — the actual API response contains many more fields (nameserver, contacts, age, expiration, nsAnalysis, dnssec, and more). Adapt the function to keep any additional fields your pipeline needs; the complete response schema is documented in the API documentation.
[
  {
    "domain": "github.com",
    "registrar": "MarkMonitor, Inc.",
    "created": "2007-10-09 18:20:50",
    "expires": "2026-10-09 18:20:50",
    "status": [
      "clientDeleteProhibited",
      "clientTransferProhibited",
      "clientUpdateProhibited"
    ],
    "source": "rdap"
    // Full response also includes: nameserver, contacts, age,
    // expiration, nsAnalysis, statusAnalysis, dnssec, registrar
    // (object), rawdata, ips, whoisserver, parsedContacts …
  },
  {
    "domain": "cloudflare.com",
    "registrar": "SafeNames Ltd.",
    "created": "2009-02-17 20:43:57",
    "expires": "2027-02-17 20:43:57",
    "status": ["clientDeleteProhibited"],
    "source": "rdap"
  }
]
Handling Errors & Edge Cases
Bulk WHOIS processing will encounter failures — that is expected at scale. The key is to handle each domain independently so that one failure does not abort the entire batch.
- Privacy shield and redacted contacts: Many registrants use WHOIS privacy services (Domains By Proxy, Withheld for Privacy). The API returns a valid response with redacted or absent contacts fields. This is not an error — treat it as a data characteristic and record it.
- Unsupported or restricted TLDs: A small number of ccTLD registries operate closed WHOIS/RDAP servers. The API returns an HTTP 400 or an empty registered field. Log these domains in a separate unsupported list for manual review.
- Timeouts: WHOIS lookups can be slow for certain TLDs (some ccTLD servers have response times of 5–10 seconds). Set a per-request timeout of 15 seconds and mark timed-out domains as retriable.
- Rate limit (HTTP 429): If you exceed your plan's RPM, the API returns a 429. Back off for at least 60 seconds before retrying. In practice, tuning concurrency to 80% of your RPM ceiling prevents 429s in the first place.
- Retry logic: Collect all failed domains (timeouts, 5xx, network errors) in a separate array and run a second pass after the main batch completes. Two passes cover the vast majority of transient failures without complicating the main loop.
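The two-pass pattern is a few lines on top of any bulk function. The sketch below uses a synchronous stand-in fetch to show the flow in isolation; in practice you would pass the failed domains back into the async bulk_whois() from the Python example above:

```python
def two_pass(domains: list[str], fetch) -> tuple[list[dict], list[dict]]:
    """Run a batch, collect failures, then retry them once.
    `fetch` is any callable returning {'domain': ..., 'data': ...}
    or {'domain': ..., 'error': ...}."""
    results, failed = [], []
    for d in domains:
        item = fetch(d)
        (failed if "error" in item else results).append(item)
    # Second pass over the failures only
    still_failed = []
    for item in failed:
        second = fetch(item["domain"])
        (still_failed if "error" in second else results).append(second)
    return results, still_failed

# Stand-in fetch that fails the first time it sees each domain
seen = set()
def flaky_fetch(domain):
    if domain not in seen:
        seen.add(domain)
        return {"domain": domain, "error": "timeout"}
    return {"domain": domain, "data": {}}

results, still_failed = two_pass(["a.com", "b.com"], flaky_fetch)
print(len(results), len(still_failed))  # 2 0
```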
Choosing the Right Plan for Bulk Workloads
Selecting a plan for bulk WHOIS comes down to two numbers: your total monthly volume and how fast each individual batch needs to complete.
- Low-volume audits (under 1,000 domains/month): the Basic free plan (20 RPM, 1,000 requests) covers occasional one-off sweeps with no cost.
- Regular batches up to 30,000 domains/month: the Pro plan (40 RPM) processes 1,000 domains in ~25 minutes and provides enough quota for daily runs of moderate-sized lists.
- High-frequency or large-volume workloads: Giga (200 RPM) or Tera (300 RPM) with unlimited quota are the right fit for continuous IOC enrichment pipelines, daily portfolio audits of 100k+ domains, or brand monitoring sweeps across many TLD permutations.
- Maximum throughput: Atlas (900 RPM, unlimited quota) is designed for infrastructure-scale use — 1,000 domains in under 2 minutes, with headroom for concurrent workloads from multiple services.
See the full plan comparison on the pricing page. Free accounts (1,000 requests/month, no credit card) are activated instantly on registration.
Conclusion
Bulk WHOIS is a client-side parallelisation problem. No registry provides a native batch endpoint — the right approach is to query the WhoisJSON API with controlled concurrency (p-limit in Node.js, asyncio.Semaphore in Python), tune your concurrency limit to 80% of your plan's RPM, and handle per-domain failures independently so the batch continues regardless of individual errors.
The result is a normalised JSON dataset — consistent field names, structured dates, pre-computed enrichment fields — ready for ingestion into your pipeline, SIEM, or spreadsheet. Match your plan to your volume and frequency, and bulk WHOIS becomes a routine, reliable building block rather than an integration challenge.
Start for Free
WHOIS API with 1,000 free requests/month — no credit card. All endpoints included.
Get Your Free API Key