Welcome to Let AI Run. We provide a stable, high-speed OpenAI-compatible API gateway that aggregates mainstream models across providers. Follow the four steps below to start calling models like GPT, Claude, Gemini, and more.
1. Get Your API Key
First, create an access token in the dashboard.
Sign in to the dashboard: /dashboard
Open the token management page: /dashboard/tokens
Click Add token
Set the token name, quota, expiration time, and optional model or IP restrictions
Save the token and copy the generated key, which usually starts with sk-
Keep your API key secure. Anyone who obtains a leaked key can immediately consume your quota.
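One common pattern (not required by the gateway, just shell hygiene) is to export the key as an environment variable so it never appears directly in scripts. The variable name LETAI_API_KEY below is arbitrary, and the value is a placeholder:

```shell
# Placeholder value; paste the real key copied from /dashboard/tokens.
export LETAI_API_KEY="sk-example-token"

# Later commands can reference the variable instead of the raw key:
echo "Key prefix: ${LETAI_API_KEY:0:3}"
```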
2. Configure The Base URL
Our gateway is compatible with the OpenAI protocol. In most SDKs, scripts, and third-party apps, you only need to replace the Base URL and provide your API key.
Website: https://api.letai.run/
API Base URL: https://api.letai.run/v1
CN2 Accelerated API Base URL (Mainland China): https://cn-api.letai.run/v1
Auth header: Authorization: Bearer YOUR_API_KEY
If the client asks for Proxy / Host, try: https://api.letai.run
If the client explicitly asks for an OpenAI Base URL, use the full address with /v1
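Those two values are enough for most clients. As a quick configuration check (assuming your token is allowed to list models; the /models path is part of the standard OpenAI-compatible surface), you can request the model list, replacing YOUR_API_KEY with your own key:

```shell
curl https://api.letai.run/v1/models \
  -H "Authorization: Bearer YOUR_API_KEY"
```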
3. Verify Connectivity
Use curl to send a minimal test request from your terminal:
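A minimal example is shown below. The model name gpt-4o-mini is a placeholder; substitute any model your token is allowed to call, and replace YOUR_API_KEY with the key from step 1:

```shell
curl https://api.letai.run/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Say hello"}]
  }'
```

A successful response is a JSON object containing a choices array; an invalid key typically returns an HTTP 401 with an error message.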
4. Understand Integration Modes And Endpoint Replacement
Before connecting a third-party client, it helps to understand the three most common configuration concepts:
API Key: identifies the caller and the quota a request draws from, usually sent as Authorization: Bearer YOUR_API_KEY
Base URL: the root API address; the client appends paths such as /chat/completions or /models to it
Official endpoint: the original provider endpoint, for example OpenAI commonly uses https://api.openai.com/v1
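The relationship between the Base URL and the full endpoint is plain string composition; a minimal sketch:

```shell
# The client appends standard endpoint paths to whatever Base URL is configured.
BASE_URL="https://api.letai.run/v1"

echo "${BASE_URL}/chat/completions"   # https://api.letai.run/v1/chat/completions
echo "${BASE_URL}/models"             # https://api.letai.run/v1/models
```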
If a client is currently configured for the OpenAI official API, moving it to Let AI Run usually requires only one core change: replace the official Base URL with our gateway address, and replace the API Key with the token generated in your dashboard.
For example:
Official URL: https://api.openai.com/v1
Let AI Run URL: https://api.letai.run/v1
Official request: https://api.openai.com/v1/chat/completions
Let AI Run request: https://api.letai.run/v1/chat/completions
Official API Routing
This means the service provider routes requests to the upstream provider's public official API and exposes a unified gateway externally. The underlying path still uses the formal developer APIs published by providers such as OpenAI, Anthropic, and Google, but the client does not need to integrate with each provider separately. Instead, the gateway layer unifies auth, routing, quota control, and protocol compatibility. The main characteristic is that the upstream source remains clear and relatively stable while integration cost stays lower than managing multiple direct official connections.
Pooled Official Routing
This means the service provider maintains a pool of official accounts or official upstream channels and exposes a unified OpenAI-compatible gateway to users. The underlying capacity still comes from official channels, but the gateway layer centralizes auth, routing, quota, and multi-model aggregation. For users, the benefit is simpler integration and lower switching cost between models.
Reverse-Engineered Routing
This usually means the service does not rely on the official public API, but instead simulates browser behavior, web sessions, cookies, or undocumented interfaces. This approach is generally weaker in stability, compliance, and long-term reliability, and it is more likely to break when the upstream behavior changes.
What Let AI Run Provides
Let AI Run currently provides only these two access modes:
Official API Routing
Pooled Official Routing
We do not provide Reverse-Engineered Routing.
Because of that, most desktop clients, chat tools, workflow platforms, and custom applications that support custom Base URL and API Key values can usually migrate directly with minimal changes.
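For tools built on the official OpenAI SDKs, the switch can often be made without code changes at all: those SDKs read the OPENAI_API_KEY and OPENAI_BASE_URL environment variables (check your client's documentation for its own setting names). The token value below is a placeholder:

```shell
# Point OpenAI-SDK-based tools at the Let AI Run gateway.
export OPENAI_BASE_URL="https://api.letai.run/v1"
# Placeholder; use the key generated in your dashboard.
export OPENAI_API_KEY="sk-your-letai-token"
```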
5. Website Usage: Balance And Subscription
Balance uses pay-as-you-go billing.
Recharge conversion is currently 1R = 1 USD.
Pricing may change based on upstream cost fluctuations.
You can purchase quota sized to your actual usage by contacting us directly or via the support group.
Redeem codes can be applied in Wallet Management.
Invoices are not supported yet and are expected after June 2026.