Introduction
Keka’s ATS has quickly become a go-to system for fast-growing companies looking to professionalize recruitment operations without the bulk. But when teams start scaling hiring, the real unlock lies in pulling clean, structured application data directly into their internal dashboards, HRIS ecosystems, or analytics pipelines.
This guide walks through how to retrieve job application data from the Keka ATS API, step by step. It builds on our broader deep-dive series on ATS API integration, where we cover authentication, rate limits, data structures, and best practices. If you want the full technical exploration, you’ll find it in our extended guide here.
Prerequisites
Before you begin, make sure you have the essentials in place:
- Access to Keka ATS API documentation
- Valid OAuth authentication credentials
- A Python environment with the requests library installed
API Endpoint
Keka exposes a straightforward endpoint for fetching candidate data:
https://{company}.{environment}.com/api/v1/hire/preboarding/candidates
Step-by-Step Workflow
Step 1: Authenticate
Keka uses OAuth for secure access. Ensure your OAuth tokens are generated and active.
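If you're using the client-credentials grant, token acquisition can be sketched roughly as below. The token URL, grant type, and field names here are illustrative assumptions, not confirmed Keka values; check Keka's API documentation for the exact flow for your tenant.

```python
import requests

# NOTE: this token endpoint is a hypothetical placeholder, not a
# confirmed Keka URL -- look up the real one in Keka's API docs.
TOKEN_URL = "https://login.keka.com/connect/token"

def build_token_request(client_id: str, client_secret: str, scope: str) -> dict:
    """Assemble a client-credentials token payload (standard OAuth shape)."""
    return {
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "scope": scope,
    }

def fetch_access_token(client_id: str, client_secret: str, scope: str) -> str:
    """Exchange client credentials for a Bearer token (makes a network call)."""
    resp = requests.post(TOKEN_URL, data=build_token_request(client_id, client_secret, scope))
    resp.raise_for_status()
    return resp.json()["access_token"]
```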
Step 2: Fetch All Candidates
```python
import requests

url = "https://company.keka.com/api/v1/hire/preboarding/candidates"

headers = {
    "accept": "application/json",
    "Authorization": "Bearer YOUR_ACCESS_TOKEN"
}

response = requests.get(url, headers=headers)

if response.status_code == 200:
    candidates = response.json()
    print(candidates)
else:
    print("Error:", response.status_code)
```

Step 3: Fetch a Specific Candidate
```python
candidate_id = "specific_candidate_id"
url = f"https://company.keka.com/api/v1/hire/preboarding/candidates?candidateIds={candidate_id}"

response = requests.get(url, headers=headers)

if response.status_code == 200:
    candidate_data = response.json()
    print(candidate_data)
else:
    print("Error:", response.status_code)
```

Common Pitfalls to Watch Out For
Developers typically hit the same roadblocks. Here's what to expect and how to sidestep each one:
1. OAuth token mismanagement
Expired or incorrectly scoped tokens trigger 401s and slow your development cycle. Implement automatic token refresh.
2. Rate-limit surprises
Bulk pulls or aggressive sync loops can hit Keka’s limits faster than you think. Build backoff + retries.
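The retry pattern is generic rather than Keka-specific; a minimal sketch with exponential backoff and jitter:

```python
import random
import time

def with_backoff(call, max_retries=5, base_delay=1.0):
    """Retry `call` with exponential backoff plus jitter.

    For an API client, `call` would be a lambda wrapping the HTTP request
    that raises on a 429/5xx response; here we retry on any exception.
    """
    for attempt in range(max_retries):
        try:
            return call()
        except Exception:
            if attempt == max_retries - 1:
                raise  # out of retries: surface the error
            # delay grows as base * 2^attempt, randomized to avoid thundering herd
            time.sleep(base_delay * (2 ** attempt) * (0.5 + random.random()))

# Usage sketch: with_backoff(lambda: requests.get(url, headers=headers))
```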
3. Pagination gaps
Large hiring cycles mean large datasets. Missing pagination means missing candidates.
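A defensive pattern is to loop until the reported page count is exhausted. The `data` and `totalPages` field names and the page-number convention below are assumptions for illustration; match them to the pagination scheme in Keka's docs.

```python
def fetch_all_pages(fetch_page, page_size=100):
    """Collect records across every page.

    `fetch_page(page_number, page_size)` should perform the HTTP request
    and return the decoded JSON body for that page.
    """
    records, page = [], 1
    while True:
        body = fetch_page(page, page_size)
        records.extend(body.get("data", []))
        if page >= body.get("totalPages", 1):
            break  # last page reached
        page += 1
    return records
```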
4. Inconsistent JSON handling
Keka’s payloads are nested; if you’re flattening data for a BI pipeline, map fields in advance.
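For example, a small helper that flattens nested dicts into dotted column names (a generic pattern, not Keka-specific) makes the BI mapping explicit:

```python
def flatten(record, parent_key="", sep="."):
    """Flatten nested dicts into dotted keys, e.g. {"name": {"first": "A"}}
    becomes {"name.first": "A"}. Lists are left as-is for the downstream
    pipeline to handle."""
    items = {}
    for key, value in record.items():
        new_key = f"{parent_key}{sep}{key}" if parent_key else key
        if isinstance(value, dict):
            items.update(flatten(value, new_key, sep))
        else:
            items[new_key] = value
    return items
```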
5. Environment confusion
Mixing up {company}.{environment} frequently causes 404 errors. Validate environment before every deployment.
6. Security hygiene misses
Access tokens in logs or Git commits = catastrophic. Always vault secrets.
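At minimum, read the token from the environment (or a secret manager) rather than hard-coding it; the variable name here is just a suggestion:

```python
import os

def auth_headers():
    """Build request headers from an environment variable so the token
    never appears in source control."""
    token = os.environ.get("KEKA_ACCESS_TOKEN")
    if not token:
        raise RuntimeError("KEKA_ACCESS_TOKEN is not set")
    return {"accept": "application/json", "Authorization": f"Bearer {token}"}
```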
7. Poor error-handling logic
Keka returns meaningful error codes; use them. Don't wrap everything in a generic catch-all handler.
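One way to act on those codes is a small dispatcher that turns each status into a distinct, actionable error (a sketch; extend it with the codes you actually observe):

```python
class ApiError(Exception):
    """API failure with the HTTP status attached so callers can branch on it."""
    def __init__(self, status, message):
        super().__init__(f"{status}: {message}")
        self.status = status

def handle_response(resp):
    """Translate status codes into distinct errors. `resp` only needs
    `.status_code` and `.json()`, so a requests.Response works directly."""
    if resp.status_code == 200:
        return resp.json()
    if resp.status_code == 401:
        raise ApiError(401, "token invalid or expired -- refresh and retry")
    if resp.status_code == 404:
        raise ApiError(404, "check the {company}.{environment} host and path")
    if resp.status_code == 429:
        raise ApiError(429, "rate limited -- back off before retrying")
    raise ApiError(resp.status_code, "unexpected response")
```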
FAQs
How do I authenticate with the Keka ATS API?
Using OAuth. Generate and pass a Bearer token in your headers.
What is the default page size for candidate data?
Keka typically defaults to 100 records per page, with a max of ~200.
Can I filter candidates by status?
Yes. Use the status query parameter.
How do I sort results?
Use sortBy and sortOrder parameters.
What does a 401 error usually mean?
Your OAuth token is invalid or expired.
Is there a rate limit?
Yes. Respect the limits defined in Keka’s documentation.
How do I handle API errors gracefully?
Use structured error-handling that reads response codes and messages instead of failing silently.
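Tying the FAQ parameters together, a filtered, sorted request could be built like this; the parameter values are illustrative, so confirm accepted statuses and sort fields in Keka's documentation.

```python
from urllib.parse import urlencode

url = "https://company.keka.com/api/v1/hire/preboarding/candidates"
params = {
    "status": "Offered",      # illustrative value -- check accepted statuses
    "sortBy": "createdDate",  # illustrative sort field
    "sortOrder": "desc",
}

# requests.get(url, headers=headers, params=params) would do this
# query-string encoding for you; shown explicitly here:
full_url = f"{url}?{urlencode(params)}"
```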
Knit for Keka ATS API Integration
Building and maintaining a direct Keka ATS integration is expensive and operationally heavy: OAuth management, versioning, error resolution, retries, pagination, and ongoing upkeep all compound over time.
Knit eliminates this overhead with a single unified integration layer. Connect once, and Knit's Keka ATS API handles authentication, maintenance, scaling, and data normalization. Your engineering team stays focused on business logic, not maintaining integrations.




