Use `submit_batch()` when you have many variants to humanize: it chunks them into requests automatically and yields a `Job` per variant.
## Submit a batch

```python
import os

from kallima import KallimaClient

client = KallimaClient(os.environ["KALLIMA_API_KEY"])

items = [
    {"variant_id": "var_aaa..."},
    {"variant_id": "var_bbb..."},
    {"variant_id": "var_ccc..."},
    # up to thousands; the SDK chunks at 50 per request
]

jobs = list(client.humanizations.submit_batch(items))
print(f"Submitted {len(jobs)} jobs")
```
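The SDK handles the 50-per-request chunking for you, but the behaviour is easy to picture. A minimal sketch in plain Python, where the `chunk` helper is illustrative and not part of the SDK:

```python
def chunk(items, size=50):
    """Yield successive slices of at most `size` items."""
    for start in range(0, len(items), size):
        yield items[start:start + size]

# 120 variants split into requests of up to 50 each
items = [{"variant_id": f"var_{i:03d}"} for i in range(120)]
batch_sizes = [len(batch) for batch in chunk(items, size=50)]
print(batch_sizes)  # → [50, 50, 20]
```

Each yielded slice corresponds to one HTTP request, which is why submitting thousands of items produces only a few dozen calls.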
## Wait for all jobs in parallel

`wait_for_all` polls all jobs concurrently and returns results as they finish:

```python
from kallima import wait_for_all

for job in wait_for_all(jobs, timeout=300):
    if job.status == "completed":
        # Pick the strategy with the highest mean OASis score
        best = max(job.results, key=lambda s: s["oasis_score"]["mean"])
        print(f"{job.id} best strategy: {best['strategy']} OASis={best['oasis_score']['mean']:.2f}")
    else:
        print(f"{job.id} failed: {job.error}")
```
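The "results as they finish" behaviour can be approximated with a thread pool. The sketch below uses hypothetical stand-ins (`StubJob`, `wait_for_all_sketch`) and makes no claim about how the SDK actually implements polling:

```python
import time
from concurrent.futures import ThreadPoolExecutor, as_completed

class StubJob:
    """Stand-in for an SDK Job; poll() blocks until the job settles."""
    def __init__(self, job_id, delay, status):
        self.id, self.delay, self.status = job_id, delay, status

    def poll(self):
        time.sleep(self.delay)
        return self

def wait_for_all_sketch(jobs, timeout=300):
    # Poll every job on its own thread; yield each one as it settles,
    # not in submission order.
    with ThreadPoolExecutor(max_workers=len(jobs)) as pool:
        futures = [pool.submit(job.poll) for job in jobs]
        for future in as_completed(futures, timeout=timeout):
            yield future.result()

jobs = [StubJob("job_a", 0.05, "completed"), StubJob("job_b", 0.01, "failed")]
finished = list(wait_for_all_sketch(jobs))
print([j.id for j in finished])  # job_b settles first
```

The key property, shared with the real helper, is that a slow job never blocks you from handling the ones that have already finished.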
## Process results in batches

If you have thousands of variants and want to avoid holding all jobs in memory, use `iter_batches`:

```python
from kallima import iter_batches

for batch_jobs in iter_batches(items, client=client, batch_size=50):
    completed = [j for j in batch_jobs if j.status == "completed"]
    print(f"Batch done: {len(completed)}/{len(batch_jobs)} succeeded")
```
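The memory win comes from submitting and yielding one batch at a time, so earlier batches can be garbage-collected before later ones are created. A hypothetical sketch of that streaming shape (`iter_batches_sketch` and `fake_submit` are illustrative, not SDK code):

```python
def iter_batches_sketch(items, submit, batch_size=50):
    """Submit one chunk at a time; only the current batch's jobs are live."""
    for start in range(0, len(items), batch_size):
        yield submit(items[start:start + batch_size])

# Hypothetical submit function that turns each item into a settled "job" dict
def fake_submit(chunk):
    return [{"variant_id": item["variant_id"], "status": "completed"} for item in chunk]

items = [{"variant_id": f"var_{i}"} for i in range(7)]
batch_counts = [len(jobs) for jobs in iter_batches_sketch(items, fake_submit, batch_size=3)]
print(batch_counts)  # → [3, 3, 1]
```

Because it is a generator, nothing past the current batch is materialized; the same reasoning explains why `iter_batches` scales to input lists that would be impractical to submit in one go.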
## Full pipeline example

```python
import os

from kallima import KallimaClient, wait_for_all

client = KallimaClient(os.environ["KALLIMA_API_KEY"])

# Collect variant IDs from a prior listing
variant_ids = [v["id"] for v in client.variants.list(candidate_id="cand_...")]
items = [{"variant_id": vid} for vid in variant_ids]

jobs = list(client.humanizations.submit_batch(items))

results = []
for job in wait_for_all(jobs, timeout=600):
    if job.status == "completed":
        results.append({
            "variant_id": job.variant_id,
            "strategies": job.results,
        })

print(f"Collected results for {len(results)} variants")
```
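Once collected, `results` is plain data you can post-process without the SDK. For example, ranking variants by their best mean OASis score; the sample entries and scores below are made up for illustration:

```python
# Sample data in the shape collected by the pipeline above (scores invented)
results = [
    {"variant_id": "var_aaa", "strategies": [
        {"strategy": "paraphrase", "oasis_score": {"mean": 0.64}},
        {"strategy": "restructure", "oasis_score": {"mean": 0.71}},
    ]},
    {"variant_id": "var_bbb", "strategies": [
        {"strategy": "paraphrase", "oasis_score": {"mean": 0.82}},
    ]},
]

def best_strategy(entry):
    """Return the strategy dict with the highest mean OASis score."""
    return max(entry["strategies"], key=lambda s: s["oasis_score"]["mean"])

ranked = sorted(results,
                key=lambda e: best_strategy(e)["oasis_score"]["mean"],
                reverse=True)
print([(e["variant_id"], best_strategy(e)["strategy"]) for e in ranked])
# → [('var_bbb', 'paraphrase'), ('var_aaa', 'restructure')]
```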
Each humanization costs 1 credit, so a batch of 200 variants costs 200 credits. Check your balance with `client.me.get()` before submitting large batches.
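Because pricing is flat at 1 credit per humanization, a pre-flight cost check is simple arithmetic. The helpers below are illustrative; the field that exposes your balance depends on the `client.me.get()` response shape, which isn't documented here:

```python
COST_PER_HUMANIZATION = 1  # credits, per the pricing note above

def batch_cost(n_variants):
    """Total credits a batch of n_variants will consume."""
    return n_variants * COST_PER_HUMANIZATION

def can_afford(balance, n_variants):
    """True if the given credit balance covers the whole batch."""
    return balance >= batch_cost(n_variants)

print(batch_cost(200))       # → 200
print(can_afford(150, 200))  # → False
```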