Image to Image
Image-to-Image generation works much like Text-to-Image generation. Key differences:
- Instead of JSON, encode your request as `multipart/form-data`
- Specify input image(s)
- If needed, specify an edit mask
For Image-to-Image models, see the list of models with the Edit label, or filter for Image-to-Image models.
```shell
curl -X POST "https://api.imagerouter.io/v1/openai/images/edits" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -F "prompt=YOUR_PROMPT" \
  -F "model=openai/gpt-image-1" \
  -F "image[]=@your_image1.webp" \
  -F "image[]=@your_image2.webp" \
  -F "mask[]=@your_mask.webp"
```
```javascript
import fs from 'node:fs'
import path from 'node:path'

const formData = new FormData()
formData.append('prompt', 'YOUR_PROMPT')
formData.append('model', 'openai/gpt-image-1')

// Add your image files (up to 16)
const imageFile1 = await fetch('your_image1.webp').then(r => r.blob())
formData.append('image[]', imageFile1)

// or from a local file
const imagePath = 'your_image1.png'
const imageBuffer = fs.readFileSync(imagePath)
const imageBlob = new Blob([imageBuffer], { type: 'image/png' })
formData.append('image[]', imageBlob, path.basename(imagePath))

// Add a mask file - some models support/require it
// const maskFile = await fetch('your_mask.webp').then(r => r.blob())
// formData.append('mask[]', maskFile)

const response = await fetch('https://api.imagerouter.io/v1/openai/images/edits', {
  method: 'POST',
  headers: { Authorization: 'Bearer YOUR_API_KEY' },
  body: formData
})

const data = await response.json()
console.log(data)
```
```python
import requests

url = "https://api.imagerouter.io/v1/openai/images/edits"
headers = {"Authorization": "Bearer YOUR_API_KEY"}

# Use a list of tuples so multiple files can share the "image[]" field name
files = [
    ("image[]", open("your_image1.webp", "rb")),
    ("image[]", open("your_image2.webp", "rb")),
    ("mask[]", open("your_mask.webp", "rb")),
]

payload = {
    "prompt": "YOUR_PROMPT",
    "model": "openai/gpt-image-1",
}

response = requests.post(url, files=files, data=payload, headers=headers)
print(response.json())
```
Note: `/v1/openai/images/generations` and `/v1/openai/images/edits` are the same endpoint; we have both for compatibility reasons.
Parameters:
- Same as Text to Image
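Because the parameters match Text to Image, any extra options travel as ordinary form fields alongside the images. A minimal Python sketch of how the multipart body is assembled (the `size` and `quality` field names here are assumptions for illustration; consult the Text to Image parameter list for the names your model actually supports). The request is prepared but not sent, so you can inspect the encoding:

```python
import requests

payload = {
    "prompt": "YOUR_PROMPT",
    "model": "openai/gpt-image-1",
    # Hypothetical extra parameters - see the Text to Image docs for real names
    "size": "1024x1024",
    "quality": "high",
}
# (field name, (filename, bytes, content type)) - placeholder bytes for the demo
files = [("image[]", ("your_image1.webp", b"fake-image-bytes", "image/webp"))]

# Prepare the request without sending it, to inspect the multipart encoding
req = requests.Request(
    "POST",
    "https://api.imagerouter.io/v1/openai/images/edits",
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    data=payload,
    files=files,
).prepare()

print(req.headers["Content-Type"])  # starts with: multipart/form-data; boundary=
```

Every key in `payload` becomes its own form part, so adding a parameter is just another dictionary entry; no change to how files are attached.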