Elevated latency on /v1/extract endpoint
Resolved
Sep 19 at 08:53am HST
Post-mortem: Root cause identified and resolved. Additional safeguards implemented.
On September 19, 2025, between 14:40 and 16:00 UTC, the /v1/extract endpoint experienced elevated latency due to a misconfigured autoscaling rule in one of the primary processing clusters. This caused temporary queue buildup and response delays. Engineers corrected the configuration, redeployed the affected cluster, and restored normal operations within 50 minutes.
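For illustration, the sketch below models how a fixed-capacity cluster builds a request queue when autoscaling is blocked; the arrival and service rates are hypothetical numbers, not measured /v1/extract traffic.

    # Illustrative only: rough model of queue buildup when autoscaling is blocked.
    # The rates below are hypothetical, not measured /v1/extract traffic.
    ARRIVAL_RATE = 120   # requests per second reaching the endpoint
    SERVICE_RATE = 100   # requests per second a cluster stuck at a fixed size can handle

    queue_depth = 0
    for minute in range(1, 6):                      # first five minutes of saturation
        queue_depth += (ARRIVAL_RATE - SERVICE_RATE) * 60
        added_latency = queue_depth / SERVICE_RATE  # extra wait per request, in seconds
        print(f"minute {minute}: queued={queue_depth}, added latency ~{added_latency:.0f}s")

Under these assumed rates the backlog grows by 1,200 requests a minute and per-request latency climbs with it, which is the queue-buildup behavior described above.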
To prevent recurrence, the team introduced automated validation of scaling parameters during deployments and enhanced monitoring that alerts earlier when capacity falls short of demand. No data loss occurred, and all queued requests were processed successfully.
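As a sketch of the kind of pre-deployment safeguard described above, the check below rejects scaling parameters that would leave a cluster unable to scale up. The parameter names, bounds, and the validate_scaling_config helper are hypothetical, not the actual deployment pipeline.

    # Hypothetical pre-deployment validation of autoscaling parameters.
    # Parameter names and bounds are illustrative; the real pipeline may differ.
    def validate_scaling_config(cfg: dict) -> list:
        errors = []
        min_replicas = cfg.get("min_replicas", 0)
        max_replicas = cfg.get("max_replicas", 0)
        target_util = cfg.get("target_cpu_utilization", 0)

        if min_replicas < 1:
            errors.append("min_replicas must be at least 1")
        if max_replicas < min_replicas:
            errors.append("max_replicas must be >= min_replicas")
        elif max_replicas == min_replicas:
            errors.append("max_replicas equals min_replicas: cluster cannot scale up")
        if not 10 <= target_util <= 90:
            errors.append("target_cpu_utilization should stay between 10 and 90 percent")
        return errors

    # Example: a config that pins the cluster at a fixed size is caught before deploy.
    for problem in validate_scaling_config(
        {"min_replicas": 4, "max_replicas": 4, "target_cpu_utilization": 95}
    ):
        print("blocking deploy:", problem)

Running a check like this in the deploy pipeline fails the rollout before a non-scalable configuration can reach production, which is the intent of the safeguard described above.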
Updated
Sep 19 at 06:01am HST
The configuration fix has been deployed successfully, and the affected extraction nodes are stabilizing. Request latency is trending downward and most regions are returning to normal. We continue to monitor system performance closely to ensure full recovery across all zones.
Updated
Sep 19 at 05:21am HST
The engineering team confirmed that the issue originated from a misconfigured autoscaling parameter in the primary extraction cluster. This misconfiguration prevented the cluster from scaling up to meet normal traffic demand, resulting in delayed responses. We are deploying a configuration correction and expect recovery to begin within 10-15 minutes.
Created
Sep 19 at 04:48am HST
We’ve detected increased latency affecting requests to the /v1/extract endpoint. The issue appears linked to resource saturation in one of our processing clusters. Other endpoints and services remain fully operational. The team is actively investigating and will provide updates as soon as more details are available.