Video Generation
Generate videos with any model available on ImageRouter.
Unified endpoint
Supports both Text-to-Video and Image-to-Video generation. Requests are encoded as multipart/form-data.
Request
```sh
curl 'https://api.imagerouter.io/v1/openai/videos/generations' \
  -H 'Authorization: Bearer YOUR_API_KEY' \
  -F 'prompt=YOUR_PROMPT' \
  -F 'model=ir/test-video' \
  -F 'size=auto' \
  -F 'seconds=auto' \
  -F 'response_format=url' \
  -F 'image[]=@your_image1.webp'
```

```js
import fs from 'node:fs'
import path from 'node:path'

const formData = new FormData()
formData.append('prompt', 'YOUR_PROMPT')
formData.append('model', 'ir/test-video')
formData.append('size', 'auto')
formData.append('seconds', 'auto')
formData.append('response_format', 'url')

// Add your image files (up to 16)
const imageFile1 = await fetch('your_image1.webp').then(r => r.blob())
formData.append('image[]', imageFile1)

// or from a local file
const imagePath = './materials/logo.png'
const imageBuffer = fs.readFileSync(imagePath)
const imageBlob = new Blob([imageBuffer], { type: 'image/png' })
formData.append('image[]', imageBlob, path.basename(imagePath))

const response = await fetch('https://api.imagerouter.io/v1/openai/videos/generations', {
  method: 'POST',
  headers: { 'Authorization': 'Bearer YOUR_API_KEY' },
  body: formData
})

const data = await response.json()
console.log(data)
```

```python
import requests

url = "https://api.imagerouter.io/v1/openai/videos/generations"
headers = {"Authorization": "Bearer YOUR_API_KEY"}

payload = {
    "prompt": "YOUR_PROMPT",
    "model": "ir/test-video",
    "size": "auto",
    "seconds": "auto",
    "response_format": "url",
}

with open("your_image1.webp", "rb") as img1:
    files = {"image[]": img1}
    response = requests.post(url, files=files, data=payload, headers=headers)
    print(response.json())
```

Parameters
- `model` (required) - Video model to use for generation.
- `prompt` (optional) - Text input for generating videos. Many models require a prompt, but not all.
- `size` (optional) - Accepted values differ for each model. Some models and providers ignore `size` entirely.
  - `auto` [default] - Uses the default recommended size for each model.
  - `WIDTHxHEIGHT` (e.g. `1024x576`)
- `seconds` (optional) - Duration of the video in seconds. Accepted values vary by model.
  - `auto` [default] - Uses a default duration for each model.
  - Numeric value (e.g. `5`, `10`) - Specific duration in seconds (check the model details page for supported values).
- `response_format` (optional)
  - `url` [default] - Returns the video URL hosted on ImageRouter’s servers. The video is saved in your logs and is publicly accessible if you share the URL.
  - `b64_json` - Returns the video as base64-encoded JSON data. The video is saved in your logs and is publicly accessible if you share the URL.
  - `b64_ephemeral` - Same as `b64_json`, but the video is not saved in our system. The provider may still have it.
- `image[]` (optional) - Input file for Image-to-Video generation. (Supported models have the “image-to-video” label.)
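To attach several input files, the multipart body simply repeats the `image[]` field, one part per file: repeat `-F 'image[]=@…'` in curl, or call `formData.append('image[]', …)` once per file. As an illustration of the wire format only, here is a minimal stdlib sketch of that encoding (the field names follow this page; the placeholder bytes and helper function are not part of any SDK):

```python
# Sketch of the multipart/form-data encoding used by the unified endpoint:
# every input file is a separate part named "image[]" (up to 16 of them).
# Placeholder bytes stand in for real image files.
import uuid

def build_multipart(fields, files):
    """Encode text fields and (name, filename, bytes) file parts."""
    boundary = uuid.uuid4().hex
    body = bytearray()
    for name, value in fields.items():
        body += (
            f'--{boundary}\r\n'
            f'Content-Disposition: form-data; name="{name}"\r\n\r\n'
            f'{value}\r\n'
        ).encode()
    for name, filename, data in files:
        body += (
            f'--{boundary}\r\n'
            f'Content-Disposition: form-data; name="{name}"; filename="{filename}"\r\n'
            f'Content-Type: application/octet-stream\r\n\r\n'
        ).encode()
        body += data + b'\r\n'
    body += f'--{boundary}--\r\n'.encode()
    return bytes(body), f'multipart/form-data; boundary={boundary}'

fields = {'prompt': 'YOUR_PROMPT', 'model': 'ir/test-video'}
files = [
    ('image[]', 'frame1.webp', b'<image 1 bytes>'),
    ('image[]', 'frame2.webp', b'<image 2 bytes>'),
]
body, content_type = build_multipart(fields, files)
print(body.count(b'name="image[]"'))  # → 2: one part per input file
```

In practice your HTTP client builds this body for you (curl's `-F`, the browser's `FormData`, or `requests`' `files=`); the sketch only shows why repeating the field name is what sends multiple images.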
Response
The OpenAI-compatible response means you can use ImageRouter anywhere GPT-Image is already implemented.

```json
{
  "created": 1769286389027, // timestamp
  "data": [
    {
      "url": "https://storage.imagerouter.io/fffb4426-efbd-4bcc-87d5-47e6936bf0bb.mp4"
      // or "b64_json": "..." if you select a different response_format
    }
  ],
  "latency": 6942,
  "cost": 0.004
}
```

JSON endpoint
This endpoint exists for compatibility reasons (so ImageRouter can be used with the OpenAI SDK). File uploads are not possible with application/json encoding, so this endpoint does not support Image-to-Video generation.
Endpoint comparison
| Detail | Unified endpoint | JSON endpoint |
|---|---|---|
| Text-to-Video | ✅ | ✅ |
| Image-to-Video | ✅ | ❌ |
| Request encoding | multipart/form-data | application/json |
| Compatible with | GPT Image (edit) | GPT Image |
Request
```sh
curl 'https://api.imagerouter.io/v1/openai/videos/generations' \
  -H 'Authorization: Bearer YOUR_API_KEY' \
  --json '{
    "prompt": "YOUR_PROMPT",
    "model": "ir/test-video",
    "size": "auto",
    "seconds": "auto",
    "response_format": "url"
  }'
```

```js
const url = 'https://api.imagerouter.io/v1/openai/videos/generations'
const payload = {
  prompt: 'YOUR_PROMPT',
  model: 'ir/test-video',
  size: 'auto',
  seconds: 'auto',
  response_format: 'url'
}

const response = await fetch(url, {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer YOUR_API_KEY',
    'Content-Type': 'application/json'
  },
  body: JSON.stringify(payload)
})

const data = await response.json()
console.log(data)
```

```python
import requests

url = "https://api.imagerouter.io/v1/openai/videos/generations"
payload = {
    "prompt": "YOUR_PROMPT",
    "model": "ir/test-video",
    "size": "auto",
    "seconds": "auto",
    "response_format": "url",
}
headers = {"Authorization": "Bearer YOUR_API_KEY"}

response = requests.post(url, json=payload, headers=headers)
print(response.json())
```

Parameters
Same as the parameters for the unified form-data endpoint, except that the `image[]` parameter is not supported.
Response
Same as the response for the unified form-data endpoint.
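Whichever endpoint you call, client code needs to handle both documented result shapes: `response_format=url` returns a link to the hosted file, while the `b64_*` formats need a base64 decode before the bytes can be written to disk. A minimal sketch over the documented fields (`save_video` is a hypothetical helper, not part of any SDK; the payload below uses fake bytes):

```python
# Sketch: turn a generation response (the documented shape) into an .mp4
# on disk. save_video() is a hypothetical helper, not part of any SDK.
import base64

def save_video(resp, out_path="output.mp4"):
    item = resp["data"][0]
    if "url" in item:  # response_format=url: just hand back the link
        return item["url"]
    video = base64.b64decode(item["b64_json"])  # b64_json / b64_ephemeral
    with open(out_path, "wb") as f:
        f.write(video)
    return out_path

# Example with a b64_json-style response (fake payload bytes):
resp = {
    "created": 1769286389027,
    "data": [{"b64_json": base64.b64encode(b"fake mp4 bytes").decode()}],
    "latency": 6942,
    "cost": 0.004,
}
print(save_video(resp))  # → output.mp4
```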