Image Generation
Generate images with any model available on ImageRouter.
Choose your API endpoint
| Endpoint | Form-Data encoded endpoint | JSON encoded endpoint |
|---|---|---|
| Text-to-Image | ✅ | ✅ |
| Image-to-Image | ✅ | ❌ |
| Compatibility | GPT Image (edit) | GPT Image |
If you plan to generate images from both text and images, this kind of mixed usage is supported by the Form-Data encoded endpoint. This simplifies the integration to just one endpoint for both Text-to-Image and Image-to-Image generation. I'd only use the JSON endpoint if Form-Data is not an option.
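For example, a client can build every request in the Form-Data shape, with or without input images. A minimal sketch (the `build_form_fields` helper is illustrative, not part of ImageRouter's API):

```python
# Sketch: one request shape for both Text-to-Image and Image-to-Image.
# build_form_fields is a hypothetical helper, not part of the API.
def build_form_fields(prompt, model, image_paths=()):
    """Return (data, files) suitable for a multipart POST to the
    Form-Data endpoint; pass no image_paths for plain Text-to-Image."""
    data = {"prompt": prompt, "model": model}
    # Repeat the "image[]" key once per input image (up to 16)
    files = [("image[]", open(p, "rb")) for p in image_paths]
    return data, files

# Text-to-Image: no files attached
data, files = build_form_fields("a red fox", "openai/gpt-image-1")
# Image-to-Image: same endpoint, just with files attached
# data, files = build_form_fields("make it blue", "openai/gpt-image-1", ["in.webp"])
```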
Both APIs are compatible with the OpenAI GPT-Image API, but some OpenAI parameters may be ignored by some models.
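Because the response is OpenAI-shaped, code that already handles GPT-Image responses typically works unchanged. For illustration, a small parsing helper (field names follow the Response section below; the `image_urls` helper itself is hypothetical):

```python
# Hypothetical helper: pull image URLs out of an OpenAI-style response dict.
def image_urls(response):
    return [item["url"] for item in response.get("data", []) if "url" in item]

# Sample response shaped like the one documented below
sample = {
    "created": 1769286389027,
    "data": [{"url": "https://storage.imagerouter.io/example.webp"}],
    "latency": 6942,
    "cost": 0.004,
}
print(image_urls(sample))  # ['https://storage.imagerouter.io/example.webp']
```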
Form-Data endpoint
Supports both Text-to-Image and Image-to-Image generation. Requests are encoded as `multipart/form-data`.
```bash
curl 'https://api.imagerouter.io/v1/openai/images/edits' \
  -H 'Authorization: Bearer YOUR_API_KEY' \
  -F 'prompt=YOUR_PROMPT' \
  -F 'model=openai/gpt-image-1' \
  -F 'quality=auto' \
  -F 'size=auto' \
  -F 'response_format=url' \
  -F 'output_format=webp' \
  -F 'image[]=@your_image1.webp' \
  -F 'image[]=@your_image2.webp' \
  -F 'mask[]=@your_mask.webp'
```

```javascript
import fs from 'node:fs'
import path from 'node:path'

const formData = new FormData()
formData.append('prompt', 'YOUR_PROMPT')
formData.append('model', 'openai/gpt-image-1')
formData.append('quality', 'auto')
formData.append('size', 'auto')
formData.append('response_format', 'url')
formData.append('output_format', 'webp')

// Add your image files (up to 16)
const imageFile1 = await fetch('your_image1.webp').then(r => r.blob())
formData.append('image[]', imageFile1)

// or from a local file
const imagePath = '/home/phoenics/projects/Image router/materials/logo.png'
const imageBuffer = fs.readFileSync(imagePath)
const imageBlob = new Blob([imageBuffer], { type: 'image/png' })
formData.append('image[]', imageBlob, path.basename(imagePath))

// Add mask file - some models support/require it
//const maskFile = await fetch('your_mask.webp').then(r => r.blob())
//formData.append('mask[]', maskFile)

const response = await fetch('https://api.imagerouter.io/v1/openai/images/edits', {
  method: 'POST',
  headers: { 'Authorization': 'Bearer YOUR_API_KEY' },
  body: formData
})

const data = await response.json()
console.log(data)
```

```python
import requests

url = "https://api.imagerouter.io/v1/openai/images/edits"
headers = {
    "Authorization": "Bearer YOUR_API_KEY"
}

payload = {
    "prompt": "YOUR_PROMPT",
    "model": "openai/gpt-image-1",
    "quality": "auto",
    "size": "auto",
    "response_format": "url",
    "output_format": "webp"
}

with open("your_image1.webp", "rb") as img1, \
     open("your_image2.webp", "rb") as img2, \
     open("your_mask.webp", "rb") as mask:
    # Repeat the "image[]" key to upload multiple files
    files = [
        ("image[]", img1),
        ("image[]", img2),
        ("mask[]", mask),
    ]
    response = requests.post(url, files=files, data=payload, headers=headers)
    print(response.json())
```

Note:
`/v1/openai/images/generations` and `/v1/openai/images/edits` are exactly the same endpoint, available at two URLs for compatibility reasons.
Parameters
- `model` *required* - Image model to use for generation.
- `prompt` *optional* - Text input for generating images. Many models require a prompt, but not all.
- `quality` *optional* - Models supporting this parameter have the "quality" feature label here.
  - `auto` [default] - Usually points to "medium", but some models (e.g. gpt-image-1) automatically adjust quality based on your prompt.
  - `low`
  - `medium`
  - `high`
- `size` *optional* - Accepted values are different for each model. Some models and providers completely ignore `size`.
  - `auto` [default] - Uses the default recommended size for each model. Some models (e.g. gpt-image-1) automatically adjust size based on your prompt.
  - `WIDTHxHEIGHT` (e.g. `1024x1024`)
- `response_format` *optional*
  - `url` [default] - Returns the image URL hosted on ImageRouter's servers. The image is saved in your logs and is publicly accessible if you share the URL.
  - `b64_json` - Returns the image as base64-encoded JSON data. The image is saved in your logs and is publicly accessible if you share the URL.
  - `b64_ephemeral` - Same as `b64_json`, but the image is not saved in our system. The provider may still have it.
- `output_format` *optional* - Image format for the generated output.
  - `webp` [default]
  - `jpeg`
  - `png`
- `image[]` *optional* - Input file for Image-to-Image generation. (Supported models have the "edit" label here.)
- `mask[]` *optional* - Image editing mask (most models don't need this).
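When `b64_json` or `b64_ephemeral` is selected, the returned data has to be decoded before it can be written to disk. A sketch, assuming the `b64_json` field name described above (the `save_b64_image` helper is illustrative):

```python
import base64

# Sketch: decode a base64-encoded image entry from a response.
# save_b64_image is a hypothetical helper, not part of the API.
def save_b64_image(entry, path):
    with open(path, "wb") as f:
        f.write(base64.b64decode(entry["b64_json"]))

# Example with a tiny fake payload (real responses contain webp/jpeg/png bytes)
entry = {"b64_json": base64.b64encode(b"fake-image-bytes").decode()}
save_b64_image(entry, "out.webp")
```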
Response
The OpenAI-compatible response makes sure you can use ImageRouter anywhere GPT-Image is already implemented.
```json
{
  "created": 1769286389027, // timestamp
  "data": [
    {
      "url": "https://storage.imagerouter.io/fffb4426-efbd-4bcc-87d5-47e6936bf0bb.webp"
      // or "b64_json": "...", if you select a different response_format
    }
  ],
  "latency": 6942,
  "cost": 0.004
}
```

JSON endpoint
This endpoint exists for compatibility reasons (so ImageRouter can be used with the OpenAI SDK). File uploads are not possible with `application/json` encoding, so Image-to-Image generation is not supported on this endpoint.
```bash
curl 'https://api.imagerouter.io/v1/openai/images/generations' \
  -H 'Authorization: Bearer YOUR_API_KEY' \
  --json '{
    "prompt": "YOUR_PROMPT",
    "model": "test/test",
    "quality": "auto",
    "size": "auto",
    "response_format": "url",
    "output_format": "webp"
  }'
```

```javascript
const url = 'https://api.imagerouter.io/v1/openai/images/generations'
const payload = {
  prompt: 'YOUR_PROMPT',
  model: 'test/test',
  quality: 'auto',
  size: 'auto',
  response_format: 'url',
  output_format: 'webp'
}

const response = await fetch(url, {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer YOUR_API_KEY',
    'Content-Type': 'application/json'
  },
  body: JSON.stringify(payload)
})

const data = await response.json()
console.log(data)
```

```python
import requests

url = "https://api.imagerouter.io/v1/openai/images/generations"
payload = {
    "prompt": "YOUR_PROMPT",
    "model": "test/test",
    "quality": "auto",
    "size": "auto",
    "response_format": "url",
    "output_format": "webp"
}
headers = {
    "Authorization": "Bearer YOUR_API_KEY"
}

response = requests.post(url, json=payload, headers=headers)
print(response.json())
```

Please contact me if anything is missing.