# Rate Limits
Every endpoint in the Seller API is rate-limited to protect both your server and ours. Limits are enforced per API key, not per IP address.
## How it works
Each endpoint is assigned a tier (A, B, C, D) based on how expensive the underlying operation is. Each tier has its own token bucket:
| Tier | Per minute | Burst | Typical use |
|---|---|---|---|
| A | 60 | 30 | Cheap single-resource reads (GET /v1/guild, GET /v1/products/{id}) |
| B | 30 | 10 | List reads with filters (GET /v1/products, GET /v1/customers) |
| C | 10 | 3 | Heavy reads (stats, invoices listing) |
| D | 20 | 5 | Mutations (create subscription, extend, cancel, patch discount) |
The buckets are independent — heavy use of GET /v1/invoices (tier C) does not consume tier A budget.
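To make the model concrete, here is a minimal client-side sketch of a token bucket (illustrative only — this is not the server's implementation, and `makeBucket` is a hypothetical helper):

```javascript
// Illustrative token bucket: refills continuously at `ratePerMinute`,
// holds at most `burst` tokens. Each request consumes one token.
const makeBucket = (ratePerMinute, burst) => {
  let tokens = burst;
  let last = Date.now();
  return {
    tryConsume() {
      const now = Date.now();
      // Refill proportionally to the time elapsed, capped at the burst size.
      tokens = Math.min(burst, tokens + ((now - last) / 60000) * ratePerMinute);
      last = now;
      if (tokens >= 1) {
        tokens -= 1;
        return true;
      }
      return false;
    },
  };
};

// A tier-B bucket: 30 requests/minute steady state, bursts up to 10.
const tierB = makeBucket(30, 10);
```

The burst value is the bucket's capacity: a quiet client can fire that many requests back-to-back, after which it is throttled to the steady refill rate.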
In addition, every key has a global cap:
- Soft cap: 1,000 requests per 5-minute window. Exceeding it returns `429 rate_limited` with `retryAfterSeconds: 30`.
- Hard cap: 5,000 requests per 5-minute window. Exceeding it returns `429 rate_limited` with `retryAfterSeconds: 60`.
The global cap protects against runaway clients regardless of which endpoints they're hitting.
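A client can enforce the same window on its own side before sending, so it never trips the cap in the first place. A sketch, assuming an in-process client (`makeWindowGuard` is a hypothetical helper, not part of the API):

```javascript
// Illustrative sliding-window guard for the soft cap: keep the
// timestamps of recent requests and refuse to send once the window
// is full.
const makeWindowGuard = (limit, windowMs) => {
  const timestamps = [];
  return {
    allow() {
      const cutoff = Date.now() - windowMs;
      // Drop timestamps that have aged out of the window.
      while (timestamps.length && timestamps[0] < cutoff) timestamps.shift();
      if (timestamps.length >= limit) return false;
      timestamps.push(Date.now());
      return true;
    },
  };
};

// Mirror the soft cap: 1,000 requests per 5 minutes.
const softCapGuard = makeWindowGuard(1000, 5 * 60 * 1000);
```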
## Response headers
Every successful response includes:
| Header | Meaning |
|---|---|
| `X-RateLimit-Limit` | Steady-state requests per minute for this endpoint's tier. |
| `X-RateLimit-Remaining` | Tokens remaining in your bucket right now. |
| `X-RateLimit-Reset` | Seconds until your bucket is fully refilled. |
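These headers let a client pace itself proactively instead of waiting for a 429. A minimal sketch — only the header names come from the table above; the back-off heuristic itself is illustrative:

```javascript
// Illustrative: compute a delay (in ms) to insert before the next
// request when the bucket is running low.
const paceFromHeaders = (headers) => {
  const remaining = Number(headers.get("X-RateLimit-Remaining"));
  const resetSeconds = Number(headers.get("X-RateLimit-Reset"));
  if (!Number.isFinite(remaining) || !Number.isFinite(resetSeconds)) return 0;
  if (remaining > 5) return 0; // plenty of budget, no delay needed
  // Spread the few remaining tokens evenly over the refill window.
  return (resetSeconds / Math.max(remaining, 1)) * 1000;
};
```

For example, with 2 tokens remaining and a 10-second refill window, the sketch spaces the next requests 5 seconds apart.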
When you receive `429 rate_limited`, the response also includes a `Retry-After` header, e.g. `Retry-After: 7`. This is the number of seconds to wait before retrying. The same value is repeated in `error.details.retryAfterSeconds`.
## Tier per endpoint
| Endpoint | Method | Tier |
|---|---|---|
| `/v1/guild` | GET | A |
| `/v1/products` | GET | B |
| `/v1/products/{productId}` | GET | A |
| `/v1/customers` | GET | B |
| `/v1/customers/{discordUserId}` | GET | A |
| `/v1/subscriptions` | GET | B |
| `/v1/subscriptions` | POST | D |
| `/v1/subscriptions/{id}` | GET | A |
| `/v1/subscriptions/{id}/extend` | POST | D |
| `/v1/subscriptions/{id}/cancel` | POST | D |
| `/v1/invoices` | GET | C |
| `/v1/invoices/{id}` | GET | A |
| `/v1/discounts` | GET | B |
| `/v1/discounts/{id}` | GET | A |
| `/v1/discounts/{id}` | PATCH | D |
| `/v1/stats/overview` | GET | C |
| `/v1/stats/timeseries` | GET | C |
## Best practices
- Cache aggressively. Most data changes infrequently. Cache product, customer, and discount details for at least a few minutes.
- Honor `Retry-After`. Don't poll faster than the header says.
- Use exponential backoff for `5xx`. Combine with `Retry-After` for `429`.
- Batch your work. Prefer pagination over parallel requests when reading large datasets.
- Stagger background jobs. If you sync hourly, randomize the start minute so retries don't pile up at the top of the hour.
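The last point can be as simple as adding a random offset when scheduling the sync. A sketch, assuming a `setTimeout`-style scheduler (`jitteredDelayMs` and `runSync` are hypothetical names):

```javascript
// Illustrative: instead of firing exactly at the top of every hour,
// add a random jitter so many clients don't hit the API at the same
// instant after an outage or deploy.
const jitteredDelayMs = (intervalMs, maxJitterMs = 5 * 60 * 1000) =>
  intervalMs + Math.floor(Math.random() * maxJitterMs);

// e.g. schedule the next hourly sync somewhere in the following
// 60-65 minute window:
// setTimeout(runSync, jitteredDelayMs(60 * 60 * 1000));
```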
## Example: respecting `Retry-After`
```javascript
const apiFetch = async (url, init = {}, attempt = 0) => {
  const res = await fetch(url, {
    ...init,
    headers: {
      Authorization: `Bearer ${process.env.SUBSCORD_API_KEY}`,
      ...init.headers,
    },
  });
  if (res.status === 429 && attempt < 3) {
    // Wait the number of seconds the server asked for, then retry.
    const retryAfter = Number(res.headers.get("Retry-After") ?? 1);
    await new Promise((r) => setTimeout(r, retryAfter * 1000));
    return apiFetch(url, init, attempt + 1);
  }
  return res;
};
```