Bicubic Upscaling Is a Lie
Take a 500x500 image and resize it to 2000x2000 with bicubic interpolation. What you get is a 2000x2000 image that looks like a blurry 500x500 image. The pixels are there, but the detail isn’t. Bicubic and Lanczos interpolation calculate new pixel values from their neighbors — they smooth, they average, they guess. The result is technically higher resolution but visually identical to stretching the original.
This matters when the use case demands actual sharpness. A product photo upscaled with bicubic looks soft on a marketplace listing. A logo upscaled with Lanczos has fuzzy edges in print. An architectural photo upscaled with nearest-neighbor has visible stair-stepping.
AI super-resolution works differently. Instead of interpolating between existing pixels, it generates new detail based on learned patterns. The model has seen millions of high-resolution images and knows what texture, edges, and fine detail look like at different scales. When it upscales a fabric texture, it generates thread-level detail. When it upscales text, it sharpens the letterforms. When it upscales a face, it produces skin texture that wasn’t in the original.
The Image Transformation API exposes this as an upscale operation in the transformation pipeline. Send an image, add an upscale operation with a factor, and get a higher-resolution result with real detail — optionally combined with other transforms in a single call.
The API Call
import { IterationLayer } from "iterationlayer";
const client = new IterationLayer({ apiKey: "YOUR_API_KEY" });
const result = await client.transform({
  file: { type: "url", name: "product.jpg", url: "https://example.com/images/product.jpg" },
  operations: [
    { type: "upscale", factor: 4 },
  ],
});
const { data: { buffer: base64Image } } = result;
That’s it. One endpoint, one request body, one response. The file object accepts a URL or base64-encoded data. The upscale operation takes a factor of 2, 3, or 4. The response is JSON containing a base64-encoded PNG image.
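To use the result, decode the base64 string once and write the raw bytes to disk. A minimal sketch — here `base64Image` is a placeholder standing in for the string pulled from `result.data.buffer` above:

```typescript
import fs from "node:fs";

// Placeholder for the base64 string from result.data.buffer.
const base64Image = Buffer.from("png-bytes-here").toString("base64");

// Decode once and write the raw bytes; no re-encoding step is needed.
const pngBuffer = Buffer.from(base64Image, "base64");
fs.writeFileSync("upscaled.png", pngBuffer);
```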
Sending Base64 Data
If the image isn’t publicly accessible, encode it as base64:
import fs from "node:fs";
import { IterationLayer } from "iterationlayer";
const client = new IterationLayer({ apiKey: "YOUR_API_KEY" });
const imageBuffer = fs.readFileSync("./product.jpg");
const base64Image = imageBuffer.toString("base64");
const result = await client.transform({
  file: { type: "base64", name: "product.jpg", base64: base64Image },
  operations: [
    { type: "upscale", factor: 2 },
  ],
});
Maximum file size is 50 MB for both URL and base64 inputs.
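Whether the cap is checked before or after base64 encoding isn't spelled out here, so a conservative pre-flight check on the decoded size is a reasonable guard. A sketch — the 4-characters-to-3-bytes ratio is standard base64 arithmetic, and the function names are this example's, not the SDK's:

```typescript
const MAX_FILE_BYTES = 50 * 1024 * 1024;

// Every 4 base64 characters encode 3 bytes; trailing "=" or "=="
// padding reduces the decoded size by 1 or 2 bytes.
const decodedSizeInBytes = (b64: string): number => {
  const padding = b64.endsWith("==") ? 2 : b64.endsWith("=") ? 1 : 0;
  return (b64.length / 4) * 3 - padding;
};

const fitsUploadLimit = (b64: string): boolean =>
  decodedSizeInBytes(b64) <= MAX_FILE_BYTES;
```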
2x vs 3x vs 4x — Choosing the Right Factor
Each factor multiplies both width and height, so the pixel count scales quadratically:
- 2x — 500x500 becomes 1000x1000. 4x the pixels. The lightest touch — good when the image is already decent and just needs a bump to hit a size requirement.
- 3x — 500x500 becomes 1500x1500. 9x the pixels. A strong middle ground. Enough enhancement for most professional use cases without pushing the model too hard.
- 4x — 500x500 becomes 2000x2000. 16x the pixels. Maximum enhancement. The model generates the most detail here, but the source image needs to have enough information for the model to work with.
The rule of thumb: use the smallest factor that gets you to the resolution you need. If your target is 1000x1000 and your source is 500x500, use 2x — not 4x followed by a downscale.
What AI Upscaling Handles Well
Photography. Natural images are the sweet spot. The model excels at generating skin texture, fabric detail, foliage, architectural surfaces, and food textures. Product photography, real estate images, and portrait photos all upscale cleanly.
Illustrations with texture. Painted illustrations, watercolors, and textured digital art upscale well because the model can generate consistent texture detail.
Scanned documents. Text in scanned documents gets significantly sharper at 2x. Handwriting and printed text both benefit from the edge-sharpening the model produces.
What to Watch For
Flat vector graphics. Logos, icons, and UI elements with hard edges and solid colors don’t benefit much from AI upscaling. The model may add texture where none should exist. For vector-style graphics, SVG or manual recreation at the target size is better.
Very low-resolution sources. A 32x32 icon upscaled 4x to 128x128 won't produce a high-quality result. The model needs enough source information to extrapolate from; below roughly 100x100, results become unpredictable.
Artifacts in the source. Heavy JPEG compression artifacts in the source image can be amplified by upscaling. The model sometimes interprets compression blocks as real detail and generates more of them.
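The low-resolution caveat is easy to enforce up front. A minimal guard using the rough 100x100 floor mentioned above — the threshold constant and function name are this sketch's, not the API's:

```typescript
// Rough floor below which AI upscaling results become unpredictable.
const MIN_SOURCE_DIMENSION_PX = 100;

const isWorthUpscaling = (widthInPx: number, heightInPx: number): boolean =>
  Math.min(widthInPx, heightInPx) >= MIN_SOURCE_DIMENSION_PX;
```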
Output Format and Size Implications
The upscale operation returns PNG by default. This is intentional — PNG is lossless, so the upscaled detail isn’t degraded by compression. But PNG files are large. A 2000x2000 photo as PNG can be 10-15 MB.
For web delivery, you’ll almost always want to convert the output to WebP or JPEG. Since upscaling is an operation in the Image Transformation API, you can chain it with a format conversion in a single call:
import { IterationLayer } from "iterationlayer";
const client = new IterationLayer({ apiKey: "YOUR_API_KEY" });
const result = await client.transform({
  file: { type: "url", name: "photo.jpg", url: sourceUrl },
  operations: [
    { type: "upscale", factor: 2 },
    { type: "convert", format: "webp", quality: 85 },
  ],
});
One API call: upscale for the resolution, convert for the delivery format. The operations execute in order — the upscale generates full detail, and the conversion compresses it for the web. No intermediate files, no second request.
Integration Pattern: Check, Decide, Upscale
In a real pipeline, you don’t upscale everything blindly. Check the source dimensions first, decide the factor, then upscale:
const TARGET_WIDTH_IN_PX = 1200;
const TARGET_HEIGHT_IN_PX = 1200;
const getUpscaleFactor = (widthInPx: number, heightInPx: number): 2 | 3 | 4 | null => {
  const widthRatio = TARGET_WIDTH_IN_PX / widthInPx;
  const heightRatio = TARGET_HEIGHT_IN_PX / heightInPx;
  const requiredFactor = Math.max(widthRatio, heightRatio);
  if (requiredFactor <= 1) {
    return null; // Already large enough
  }
  if (requiredFactor <= 2) {
    return 2;
  }
  if (requiredFactor <= 3) {
    return 3;
  }
  return 4;
};
If getUpscaleFactor returns null, skip the upscale entirely — the image already meets the target. Otherwise, upscale with the minimum factor that covers the gap.
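To make the decision step concrete, here is a self-contained worked example — `pickFactor` is a compact restatement of `getUpscaleFactor`, with the same 1200-pixel target as the constants above:

```typescript
type UpscaleFactor = 2 | 3 | 4 | null;

// Compact restatement of getUpscaleFactor for a runnable example.
const pickFactor = (widthInPx: number, heightInPx: number, targetPx = 1200): UpscaleFactor => {
  const required = Math.max(targetPx / widthInPx, targetPx / heightInPx);
  if (required <= 1) return null;
  if (required <= 2) return 2;
  if (required <= 3) return 3;
  return 4;
};

// A 500x800 source needs 2.4x on width and 1.5x on height.
// The binding ratio is 2.4, so the smallest covering factor is 3.
```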
Batch Processing
Processing multiple images follows the same pattern. Loop over your sources, check dimensions, upscale what needs it:
import { IterationLayer } from "iterationlayer";
const client = new IterationLayer({ apiKey: "YOUR_API_KEY" });
const upscaleImage = async (imageUrl: string, factor: 2 | 3 | 4): Promise<Buffer> => {
  const result = await client.transform({
    file: { type: "url", name: "photo.jpg", url: imageUrl },
    operations: [
      { type: "upscale", factor },
    ],
  });
  const { data: { buffer } } = result;
  return Buffer.from(buffer, "base64");
};
const upscaledImages = await Promise.all(
  imagesToUpscale.map(({ url, factor }) => upscaleImage(url, factor))
);
The API handles one image per request. For batch workloads, parallelize with Promise.all or a concurrency limiter depending on your volume.
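Promise.all alone fires every request at once; at higher volumes a bounded pool is gentler on the API. A minimal sketch of such a limiter — a stand-in for a library like p-limit, and `mapWithLimit` is this example's name, not an SDK export:

```typescript
// Run fn over items with at most `limit` requests in flight at a time.
const mapWithLimit = async <T, R>(
  items: T[],
  limit: number,
  fn: (item: T) => Promise<R>,
): Promise<R[]> => {
  const results: R[] = new Array(items.length);
  let nextIndex = 0;
  const worker = async (): Promise<void> => {
    while (nextIndex < items.length) {
      const i = nextIndex++; // claim an index synchronously, then await
      results[i] = await fn(items[i]);
    }
  };
  // Start min(limit, items.length) workers that drain the queue.
  await Promise.all(
    Array.from({ length: Math.min(limit, items.length) }, () => worker()),
  );
  return results;
};
```

Dropping it in place of the Promise.all call above would look like `mapWithLimit(imagesToUpscale, 4, ({ url, factor }) => upscaleImage(url, factor))`.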
What’s Next
Upscaled images work with the same auth and credit pool as Image Generation and Document Extraction — chain them in a single pipeline.
Get Started
Check the docs for the full API reference — request schemas, response format, and error codes. The TypeScript and Python SDKs handle authentication and response parsing.
Sign up for a free account — no credit card required. Send your first image and see the difference between interpolation and AI super-resolution.