Use case

Reliable LLM Steps in API Flows

API orchestration coordinates chains of dependent API calls; ReqRun focuses on one link in that chain, making the OpenAI-compatible request step durable and visible.

The product boundary

ReqRun is not a general workflow engine. It does not route between providers or model arbitrary step graphs.

It solves the reliability problem around one OpenAI-compatible request: queueing, retries, idempotency, and status lookup.

Why raw functions are not enough

Serverless functions, cron jobs, and background workers can submit a model call, but they still need a stable request id, a dedupe key, a retry policy, and a way to inspect outcomes.
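To make the gap concrete, here is a minimal sketch of the retry logic each caller would otherwise hand-roll around a model call. The names (`withRetry`, its options) are illustrative, not part of any SDK:

```typescript
// Sketch: hand-rolled retry with exponential backoff, the kind of
// boilerplate every raw function ends up reimplementing.
async function withRetry<T>(
  fn: () => Promise<T>,
  { attempts = 3, baseDelayMs = 200 }: { attempts?: number; baseDelayMs?: number } = {},
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < attempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // Back off between attempts: 200ms, 400ms, 800ms, ...
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** attempt));
    }
  }
  throw lastError;
}
```

This covers transient failures only; a stable request id, deduplication, and post-hoc status lookup still need a persistent record somewhere.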

ReqRun centralizes that request record so each API flow does not reinvent it.
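A rough picture of what that centralized request record looks like, as an in-memory sketch: a stable id, dedupe by idempotency key, and an inspectable status. This is illustrative only and says nothing about ReqRun's actual storage or schema:

```typescript
// Illustrative in-memory version of a durable request record:
// stable id, idempotency-key dedupe, and status lookup.
type RequestStatus = "queued" | "running" | "succeeded" | "failed";

interface RequestRecord {
  id: string;
  idempotencyKey: string;
  status: RequestStatus;
}

class RequestStore {
  private byKey = new Map<string, RequestRecord>();
  private nextId = 1;

  // Submitting the same idempotency key twice returns the same record,
  // so downstream flows never enqueue a duplicate model call.
  submit(idempotencyKey: string): RequestRecord {
    const existing = this.byKey.get(idempotencyKey);
    if (existing) return existing;
    const record: RequestRecord = {
      id: `req_${this.nextId++}`,
      idempotencyKey,
      status: "queued",
    };
    this.byKey.set(idempotencyKey, record);
    return record;
  }

  // Status lookup: any part of the flow can inspect the outcome later.
  get(idempotencyKey: string): RequestRecord | undefined {
    return this.byKey.get(idempotencyKey);
  }
}
```

The point of centralizing this is that dedupe and status live in one place, rather than being re-derived in every serverless function that touches the model.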

TypeScript
import { ReqRun } from "@reqrun/sdk";

const reqrun = new ReqRun({
  apiKey: process.env.REQRUN_API_KEY!,
  baseURL: "https://api.reqrun.com",
});

const result = await reqrun.chat.completions.create({
  model: "gpt-5-nano",
  messages: [{ role: "user", content: "Summarize this incident." }],
  wait: true, // block until the request settles instead of polling its status
  idempotency_key: "incident-421", // resubmitting with this key dedupes to the same request
});