Most API endpoints have limits to help ensure that our servers don't get overwhelmed by automated tools. The limits are set high enough that normal human usage shouldn't ever cross them.

If you get a status 429 Too Many Requests response from our API, check to see if your code is making an unusually large number of requests. The error response will tell you the limit that you crossed.

If your code legitimately needs a higher rate limit, please contact us for an exception. We can raise limits for your client ID as a whole or for individual tokens on a per-endpoint basis.

While you're building and testing an integration, it's common to encounter a rate limit since you're typically making many more API requests than normal usage. We'll be happy to increase various limits while your integration is under development.

Rate limiting happens on a rolling basis. For instance, if an endpoint has a per-hour rate limit, we throttle based on the number of requests made over the hour preceding the moment you send the request, rather than resetting the count at the top of each hour.
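The rolling window can be pictured with a small client-side counter. This is only an illustrative sketch of the behavior described above, not the server's actual implementation; the class name and parameters are our own.

```python
import time
from collections import deque


class RollingWindowCounter:
    """Track request timestamps and count how many fall within the
    last `window_seconds` seconds (a rolling window, not a fixed hour)."""

    def __init__(self, limit, window_seconds):
        self.limit = limit
        self.window = window_seconds
        self.timestamps = deque()

    def would_exceed(self, now=None):
        now = time.time() if now is None else now
        # Drop requests older than the window; only the trailing window counts.
        while self.timestamps and now - self.timestamps[0] >= self.window:
            self.timestamps.popleft()
        return len(self.timestamps) >= self.limit

    def record(self, now=None):
        self.timestamps.append(time.time() if now is None else now)


# Example: a per-hour limit of 50 requests.
counter = RollingWindowCounter(limit=50, window_seconds=3600)
```

Note that a request made 59 minutes ago still counts against a per-hour limit; it only stops counting once a full hour has elapsed since it was sent.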

Example Error Response

HTTP/1.1 429 Too Many Requests

{
    "status": 429,
    "error": "too_many_requests",
    "limit_exceeded": "hour",
    "limits": {
        "hour": 50,
        "day": 600
    },
    "message": "...",
    "more_info": "https://www.prayerletters.com/dev/errors/429"
}
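A client can read the fields of this body to report which limit was crossed. A minimal sketch using Python's standard library, with the JSON body inlined as a string for illustration:

```python
import json

# The example error body shown above, as it might arrive from the API.
body = """{
    "status": 429,
    "error": "too_many_requests",
    "limit_exceeded": "hour",
    "limits": {"hour": 50, "day": 600},
    "message": "...",
    "more_info": "https://www.prayerletters.com/dev/errors/429"
}"""

error = json.loads(body)
if error.get("error") == "too_many_requests":
    interval = error["limit_exceeded"]   # which window was exceeded ("hour")
    limit = error["limits"][interval]    # the limit for that window (50)
    print(f"Rate limited: exceeded the per-{interval} limit of {limit} requests")
# → Rate limited: exceeded the per-hour limit of 50 requests
```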

How to Respond to Rate Limiting

When you encounter a rate limit error, log it (including the JSON body of the response) in a place that you'll see it. We recommend that you then proceed in one of two ways:

  1. Present the error to the user and abort, with instructions to try again later. This is the "Fail Fast" approach: it's the easiest to implement, but makes for a less-than-ideal user experience. For code that runs in the background, however, this approach can work well.

  2. Queue the request to be retried later, using exponential backoff to space out future requests. When implemented properly, this creates a better user experience, because the user can just wait for the situation to resolve itself rather than requiring action. However, you need to be careful to ensure that later user actions don't result in conflicts, and it's helpful to let the user know that there's been a delay, with a designated place to get updated status.
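The second approach might be sketched as follows. The `make_request` callable is a hypothetical stand-in for your own HTTP call returning a status code and body; the backoff base, cap, and attempt count are illustrative choices, not values prescribed by the API.

```python
import random
import time


def backoff_delay(attempt, base=1.0, cap=3600.0):
    """Exponential backoff with full jitter: base * 2**attempt seconds,
    capped, then jittered to avoid retrying in lockstep."""
    return random.uniform(0, min(cap, base * (2 ** attempt)))


def retry_with_backoff(make_request, max_attempts=6):
    """Call `make_request` (hypothetical; returns (status, body)) until it
    returns something other than 429 or the attempts run out."""
    for attempt in range(max_attempts):
        status, body = make_request()
        if status != 429:
            return status, body
        delay = backoff_delay(attempt)
        # Log the rate-limit response body where you'll see it, then wait.
        print(f"429 received; retrying in {delay:.1f}s:", body)
        time.sleep(delay)
    raise RuntimeError("Still rate limited after retries; giving up")
```

The jitter spreads retries out so that many queued requests don't all fire again at the same instant and immediately re-trip the limit.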

As mentioned above, rate limiting shouldn't happen under normal usage. If you're encountering these errors more than occasionally, you may be using an API endpoint in a non-recommended or unintended way. Please get in touch with us so we can figure out whether the default limit should be changed or whether there's a best practice you can follow that will result in lower API usage.

Testing

To ensure that your code properly handles rate limiting, you can include a special header in your API request to force rate limiting without needing to trigger the limit through an excessive number of requests.

Include an X-RateLimit header with a non-blank, non-zero value to trigger a rate limit error, regardless of per-endpoint limits.

Include an X-RateLimit-INTERVAL header, where INTERVAL names one of the endpoint's rate-limit intervals, with an integer value to set a lower rate limit for that request. For example, if an endpoint has an hourly rate limit of 50 requests, you can send X-RateLimit-Hour: 5 to drop that limit to five requests per hour. This is the preferred approach for integration testing, since it lets you test both the working and the rate-limited scenarios without changing your code. This header will be ignored if the endpoint doesn't have a rate limit for the specified interval, or if it requests a higher limit than is allowed for your client or token.
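In an integration test, these headers can be attached to an otherwise normal request. A minimal sketch of building them; the endpoint path and bearer token below are placeholders, not real values from this documentation.

```python
# Hypothetical endpoint; substitute the real path you're testing against.
API_URL = "https://www.prayerletters.com/api/..."


def rate_limit_test_headers(interval, limit, token="YOUR_TOKEN"):
    """Headers that ask the API to apply a lower rate limit for this request,
    e.g. interval="Hour", limit=5 produces X-RateLimit-Hour: 5."""
    return {
        "Authorization": f"Bearer {token}",       # placeholder credential
        f"X-RateLimit-{interval}": str(limit),
    }


headers = rate_limit_test_headers("Hour", 5)
# Send with any HTTP client; the sixth request within the hour
# should then come back as a 429 your code must handle.
```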