Use case

Durable LLM Tasks for Automation

Automation code often triggers LLM work; ReqRun makes that model request durable without owning the rest of your automation.

When reliability matters

Automations that draft support replies, summarize incidents, classify intake, or enrich records need a clear outcome for every request; a silently dropped LLM call turns into manual cleanup work.

ReqRun gives each LLM request a durable lifecycle so your automation can continue, retry, or ask a human to inspect a failed request.

The product boundary

ReqRun does not provide a no-code workflow canvas, callback routing, or multi-provider execution in v1.

It is the reliable request layer you call from your own automation code.

TypeScript
// Assumes `reqrun` is an initialized ReqRun client.
// wait: false returns immediately with a durable async request handle
// instead of blocking on the model response.
const response = await reqrun.chat.completions.create({
  model: "gpt-5-nano",
  messages: [{ role: "user", content: "Run the agent task." }],
  wait: false,
  idempotency_key: "agent-task-991", // dedupes retries of the same task
});

if (response.object === "chat.completion.async") {
  // Fetch the durable record to inspect lifecycle state and retry count.
  const request = await reqrun.requests.get(response.id);
  console.log(request.status, request.attempts);
}
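
Because the async request has a durable lifecycle, automation code can poll it until it settles and then branch on the outcome. The sketch below shows one way to do that; the client interface, the polling defaults, and the "succeeded"/"failed" terminal status values are assumptions for illustration, not the documented ReqRun API, so check the SDK for the actual names.

```typescript
// Minimal shape of the durable request record, as assumed from the
// `status` and `attempts` fields shown above.
interface RequestState {
  status: string;
  attempts: number;
}

// Minimal shape of the `reqrun.requests` client needed for polling.
interface RequestsClient {
  get(id: string): Promise<RequestState>;
}

// Poll a request by id until it reaches an assumed terminal state,
// then hand the settled record back to the caller to continue,
// retry, or escalate to a human.
async function waitForTerminal(
  requests: RequestsClient,
  id: string,
  { intervalMs = 1000, maxPolls = 60 } = {},
): Promise<RequestState> {
  for (let i = 0; i < maxPolls; i++) {
    const req = await requests.get(id);
    // "succeeded" and "failed" are hypothetical terminal statuses.
    if (req.status === "succeeded" || req.status === "failed") {
      return req;
    }
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error(`request ${id} did not settle after ${maxPolls} polls`);
}
```

A caller would pass `reqrun.requests` and the async response id, then inspect `status` and `attempts` on the returned record to decide whether to proceed or route the request to a reviewer.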