Optimize Lokalise API throughput and response times for translation pipeline integrations. Lokalise enforces a global rate limit of 6 requests per second across most endpoints, making request efficiency critical for projects with thousands of keys.
Examples below use the official Node SDK (`@lokalise/node-api`); the same patterns apply to raw REST API access.

import { LokaliseApi } from '@lokalise/node-api';
const lok = new LokaliseApi({ apiKey: process.env.LOKALISE_API_TOKEN! });
// Use cursor pagination (faster than offset for large datasets)
async function* getAllKeys(projectId: string) {
  let cursor: string | undefined;
  do {
    const result = await lok.keys().list({
      project_id: projectId,
      limit: 500, // maximum allowed page size
      pagination: 'cursor',
      cursor,
    });
    for (const key of result.items) yield key;
    cursor = result.hasNextCursor() ? result.nextCursor : undefined;
  } while (cursor);
}
// 10,000 keys: 20 API calls instead of 100 (at limit=500 vs default limit=100)
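The savings quoted above are just `ceil(totalKeys / pageSize)`; a quick sketch for sanity-checking page-size choices (the helper name is illustrative, not part of the SDK):

```typescript
// Number of paginated list calls needed to fetch totalKeys at a given page size.
function requestCount(totalKeys: number, pageSize: number): number {
  return Math.ceil(totalKeys / pageSize);
}

// 10,000 keys at the 500-key maximum vs the 100-key default:
console.log(requestCount(10_000, 500)); // 20
console.log(requestCount(10_000, 100)); // 100
```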
import { LRUCache } from 'lru-cache';
const downloadCache = new LRUCache<string, any>({ max: 100, ttl: 300_000 }); // 5 min
async function cachedDownload(projectId: string, format: string, langIso: string) {
  const key = `${projectId}:${format}:${langIso}`;
  const cached = downloadCache.get(key);
  if (cached) return cached;
  const bundle = await lok.files().download(projectId, {
    format, filter_langs: [langIso], original_filenames: false,
  });
  downloadCache.set(key, bundle);
  return bundle;
}
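If you'd rather avoid the lru-cache dependency, the same hit/expiry behavior can be sketched with a plain Map (a simplified stand-in with no max-size eviction, under the same 5-minute TTL idea):

```typescript
// Minimal TTL cache: entries expire ttlMs after being set; no size cap.
class TtlCache<V> {
  private store = new Map<string, { value: V; expires: number }>();
  constructor(private ttlMs: number) {}

  get(key: string): V | undefined {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expires) {
      this.store.delete(key); // lazy eviction on read
      return undefined;
    }
    return entry.value;
  }

  set(key: string, value: V): void {
    this.store.set(key, { value, expires: Date.now() + this.ttlMs });
  }
}
```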
// Bulk create keys (up to 500 per request)
async function createKeysBatched(projectId: string, keys: any[]) {
  const BATCH_SIZE = 500;
  const results = [];
  for (let i = 0; i < keys.length; i += BATCH_SIZE) {
    const batch = keys.slice(i, i + BATCH_SIZE);
    const result = await lok.keys().create({ project_id: projectId, keys: batch });
    results.push(...result.items);
    await new Promise(r => setTimeout(r, 200)); // pause between batches to respect the rate limit
  }
  return results;
}
// 2,000 keys: 4 batched requests vs 2,000 individual requests
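The slice loop above can be factored into a small, reusable chunk helper (a hypothetical utility, not part of the SDK):

```typescript
// Split an array into consecutive batches of at most `size` items.
function chunk<T>(items: T[], size: number): T[][] {
  const batches: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}

// 2,000 keys at a 500-key batch size yields 4 batches.
console.log(chunk(new Array(2000).fill(0), 500).length); // 4
```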
import PQueue from 'p-queue';
// Lokalise rate limit: 6 requests/second
const queue = new PQueue({ concurrency: 5, interval: 1000, intervalCap: 5 }); // 5 requests per 1,000 ms interval, safely under the 6 req/s limit
async function throttledRequest<T>(fn: () => Promise<T>): Promise<T> {
  return queue.add(fn) as Promise<T>;
}
// All API calls go through the queue
const project = await throttledRequest(() => lok.projects().get(projectId));
#!/usr/bin/env bash
set -euo pipefail
# Replace polling for translation status with webhooks
curl -X POST "https://api.lokalise.com/api2/projects/PROJECT_ID/webhooks" \
  -H "X-Api-Token: $LOKALISE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "url": "https://hooks.company.com/lokalise",
    "events": ["project.translation_completed", "project.exported"]
  }'
# Eliminates need to poll project status every N seconds
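On the receiving side, a minimal dispatcher for those events might look like this (the payload shape here is an assumption for illustration; check the webhook documentation for the exact fields Lokalise sends):

```typescript
// Assumed payload shape: Lokalise webhook bodies carry an `event` name.
interface WebhookPayload {
  event: string;
  project?: { id: string; name?: string };
}

// Route each event name to a handler; unknown events are ignored.
function dispatchWebhook(
  payload: WebhookPayload,
  handlers: Record<string, (p: WebhookPayload) => void>,
): boolean {
  const handler = handlers[payload.event];
  if (!handler) return false;
  handler(payload);
  return true;
}
```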
| Issue | Cause | Solution |
|---|---|---|
| 429 Too Many Requests | Exceeded 6 req/s rate limit | Use PQueue throttling, retry with backoff |
| Slow file downloads | Large project with many languages | Filter by language, use async download + webhook |
| Pagination timeout | Offset pagination on 50K+ keys | Switch to cursor pagination |
| Bulk create fails partially | Network timeout on large batch | Reduce batch size to 200, add retry logic |
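The retry-with-backoff strategy from the table can be sketched as follows. Detecting the 429 via `err.code` is an assumption about the error shape your client throws; adjust the check to match your SDK or HTTP layer:

```typescript
// Retry fn on 429 responses with exponential backoff: 250ms, 500ms, 1s, ...
async function withBackoff<T>(fn: () => Promise<T>, maxRetries = 5): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err: any) {
      const isRateLimited = err?.code === 429; // assumed error shape
      if (!isRateLimited || attempt >= maxRetries) throw err;
      const delayMs = 2 ** attempt * 250;
      await new Promise(resolve => setTimeout(resolve, delayMs));
    }
  }
}
```

Combined with the PQueue throttle above, this handles the rare 429 that slips through bursts without failing the whole sync.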
Basic usage: for a standard project, cursor pagination and batched key creation alone keep request counts low enough to stay under the rate limit.
Advanced scenario: for production pipelines with many languages and frequent syncs, combine client-side throttling, download caching, and webhooks so thousands of keys sync without 429s or status polling.