"Make It Under 500KB" — Compress Images to a Target File Size with One API Call


The Quality Guessing Game

Every developer who has processed images has heard some version of this requirement: “make the image under 500KB.” It sounds simple. It isn’t.

The standard approach is trial-and-error. You pick a JPEG quality value — maybe 80 — save the image, check the file size, and adjust. Too big? Try 70. Still too big? Try 60. Now it looks terrible. Back to 65. That one’s 503KB. Close enough? No, the requirement says under 500KB.

This loop gets worse at scale. An image with lots of fine detail (a cityscape, a product on a textured background) compresses differently than an image with large flat areas (a product on white). Quality 75 might produce 300KB for one image and 800KB for another. The same quality parameter gives wildly different file sizes depending on image content.

If you’re processing one image in Photoshop, this is annoying. If you’re processing thousands of images in an automated pipeline, this is a real engineering problem. You end up writing binary search logic over quality values, re-encoding the image multiple times, handling edge cases where even quality 1 doesn’t hit the target.
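That hand-rolled logic usually ends up as a binary search over the quality range. A minimal sketch of what developers write themselves, assuming a `sizeAtQuality` callback that re-encodes the image at a given quality and returns the byte count (the encoder is whatever JPEG library you already use):

```typescript
// Binary-search the highest JPEG quality whose output fits under maxBytes.
// Every probe is a full re-encode of the image -- that repeated encoding
// cost is exactly what this approach pays.
function findQuality(
  sizeAtQuality: (quality: number) => number,
  maxBytes: number,
): number | null {
  let lo = 1;
  let hi = 100;
  let best: number | null = null;
  while (lo <= hi) {
    const mid = Math.floor((lo + hi) / 2);
    if (sizeAtQuality(mid) <= maxBytes) {
      best = mid; // fits -- try a higher quality
      lo = mid + 1;
    } else {
      hi = mid - 1; // too big -- try a lower quality
    }
  }
  return best; // null means even quality 1 missed the target
}
```

Note the edge case from the paragraph above: when even quality 1 overshoots, `findQuality` returns `null` and you are forced into resizing logic on top of this.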

The Image Transformation API has an operation called compress_to_size that takes a target in bytes and handles the entire loop for you.

One Operation, One Number

import { IterationLayer } from "iterationlayer";

const client = new IterationLayer({ apiKey: "YOUR_API_KEY" });

const { data: { buffer: compressedImageBase64 } } = await client.transform({
  file: { type: "url", name: "photo.jpg", url: "https://cdn.example.com/uploads/photo.jpg" },
  operations: [
    { type: "compress_to_size", max_file_size_in_bytes: 500_000 },
  ],
});

const compressedImage = Buffer.from(compressedImageBase64, "base64");

That’s it. max_file_size_in_bytes: 500_000 means “make this image 500KB or smaller.” The API figures out how to get there.

How It Works Under the Hood

compress_to_size uses a quality-first strategy. This matters because not all compression approaches are equal.

Quality-first means the API reduces JPEG quality before it touches dimensions. It starts at quality 85 and works down to 50, checking if the file size target is met at each step. Only if quality reduction alone can’t hit the target does it start reducing dimensions.

This order is deliberate. Dropping from quality 85 to quality 70 is barely perceptible on most images — the human eye is surprisingly tolerant of JPEG compression artifacts at moderate quality levels. But dropping from 2000px wide to 1000px wide is immediately visible. Quality-first preserves visual fidelity better than dimension-first.

The alternative — reducing dimensions first — is what most developers implement when they build this logic themselves. Resize the image to some smaller dimension, then compress. It works, but you’re throwing away resolution you might not need to lose. A 4000x3000 photo at quality 65 might be well under your target size, but if you resized it to 2000x1500 first and then compressed at quality 85, you’ve thrown away three-quarters of the pixels for no reason.
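The quality-first strategy can be sketched as plain control flow. This is an illustration of the idea, not the API’s actual implementation; `sizeAt(quality, scale)` stands in for re-encoding the image at a given quality with both dimensions multiplied by `scale`, and the 0.8 scale step is an assumption:

```typescript
// Quality-first: sweep quality from 85 down to 50; only if the floor
// still misses the target, shrink dimensions and repeat the sweep.
function compressToSize(
  sizeAt: (quality: number, scale: number) => number,
  maxBytes: number,
): { quality: number; scale: number } | null {
  let scale = 1.0;
  while (scale > 0.05) {
    for (let quality = 85; quality >= 50; quality -= 5) {
      if (sizeAt(quality, scale) <= maxBytes) {
        return { quality, scale }; // highest quality that fits at this scale
      }
    }
    scale *= 0.8; // quality floor reached -- reduce dimensions, try again
  }
  return null; // target unreachable even at tiny dimensions
}
```

A simple image exits on the first iteration at quality 85 and full scale; only genuinely incompressible content pays the dimension penalty.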

Chaining with Other Operations

compress_to_size works as part of a larger pipeline. Operations execute sequentially — the output of each feeds into the next. This means you can resize, adjust, and then compress to a target.

const result = await client.transform({
  file: { type: "url", name: "photo.jpg", url: sourceUrl },
  operations: [
    // Cap dimensions first — no point compressing a 6000x4000 image
    // if your display is 1200px wide
    { type: "resize", width_in_px: 1200, height_in_px: 900, fit: "inside" },
    // Sharpen after downscale to compensate for softening
    { type: "sharpen", sigma: 0.5 },
    // Convert to JPEG for compression
    { type: "convert", format: "jpeg" },
    // Now hit the file size target
    { type: "compress_to_size", max_file_size_in_bytes: 500_000 },
  ],
});

The resize step reduces the pixel data that compress_to_size has to work with, so the quality reduction needed to hit the target is smaller. This gives you a better-looking result at the same file size.

Where This Matters

The “make it under X bytes” requirement shows up in more places than you’d expect.

Email attachments. Gmail caps attachments at 25MB total, Outlook at 20MB. Marketing platforms often limit individual images to 1-2MB. When you’re sending product photos or reports with embedded images, you need to hit those limits reliably.

Mobile apps. App store screenshots have size limits. In-app images need to be small enough to load fast on cellular connections. A product catalog with 200 images at 2MB each is 400MB of bandwidth — compressing each to 200KB drops that to 40MB.

CMS and upload limits. WordPress, Shopify, and most CMS platforms have upload size limits per image. Some are generous (10MB), some are restrictive (1MB). When your content team uploads product photos straight from a DSLR at 15MB each, something has to give.

API payload limits. If you’re sending images as base64 in JSON payloads, the encoded string is about 33% larger than the binary. A 3MB image becomes a 4MB string. API gateways and load balancers have request size limits — AWS API Gateway caps payloads at 10MB — and you’ll hit them faster than you think.
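The 33% figure falls straight out of base64’s encoding ratio: every 3 bytes of binary become 4 output characters. A quick check:

```typescript
// Base64 encodes each 3-byte group as 4 characters, so the encoded
// length is ceil(n / 3) * 4 -- roughly a 4/3 (33%) inflation.
function base64Length(binaryBytes: number): number {
  return Math.ceil(binaryBytes / 3) * 4;
}

const threeMiB = 3 * 1024 * 1024;
console.log(base64Length(threeMiB)); // 4194304 -- exactly 4 MiB of characters
```

So a 3MB binary really does cross the 4MB mark once encoded, before any JSON framing is even counted.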

Compliance and archival. Some industries have document size requirements for electronic submissions. Insurance claims, government forms, legal filings — they often specify maximum file sizes per attachment.

Batch Processing

When you have dozens or hundreds of images that all need to hit the same size target, the approach is the same — one API call per image, each with the compress_to_size operation.

const FILE_SIZE_LIMIT_IN_BYTES = 500_000;

const compressImage = async (imageUrl: string, fileName: string) => {
  const { data: { buffer } } = await client.transform({
    file: { type: "url", name: fileName, url: imageUrl },
    operations: [
      { type: "compress_to_size", max_file_size_in_bytes: FILE_SIZE_LIMIT_IN_BYTES },
    ],
  });

  return Buffer.from(buffer, "base64");
};

// Process a batch of images concurrently
const imageUrls = [
  { url: "https://cdn.example.com/img1.jpg", name: "img1.jpg" },
  { url: "https://cdn.example.com/img2.jpg", name: "img2.jpg" },
  { url: "https://cdn.example.com/img3.jpg", name: "img3.jpg" },
];

const compressedImages = await Promise.all(
  imageUrls.map(({ url, name }) => compressImage(url, name))
);

Every image ends up under the target regardless of its original size, content complexity, or format. No per-image tuning needed.
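For batches of hundreds of images you may want to cap how many requests are in flight rather than firing everything at once with Promise.all. A simple chunking sketch — the helper name and the chunk size of 5 are both just illustrative choices:

```typescript
// Process items in fixed-size chunks so at most `chunkSize`
// requests run concurrently at any moment.
async function mapInChunks<T, R>(
  items: T[],
  chunkSize: number,
  fn: (item: T) => Promise<R>,
): Promise<R[]> {
  const results: R[] = [];
  for (let i = 0; i < items.length; i += chunkSize) {
    const chunk = items.slice(i, i + chunkSize);
    results.push(...(await Promise.all(chunk.map(fn))));
  }
  return results;
}
```

Used with the earlier helper, `await mapInChunks(imageUrls, 5, ({ url, name }) => compressImage(url, name))` produces the same result array while keeping concurrency bounded.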

Why Not Just Set Quality to 60 and Call It a Day

Because quality 60 is wasteful for some images and insufficient for others.

An image with large areas of solid color — a logo on white, a UI screenshot, a product on a clean background — compresses extremely well. Quality 85 might already produce a 100KB file. Dropping to quality 60 just degrades the image for no reason.

An image with fine detail everywhere — a forest canopy, a crowd scene, fabric texture — compresses poorly. Quality 60 might still be 800KB. You’d need to go lower, which introduces visible banding and blocking artifacts.

compress_to_size adapts to the content. It uses as much quality as the target allows. A simple image stays at quality 85. A complex image drops to whatever quality gets it under the target. Each image gets the best possible quality for its size budget.

What compress_to_size Does Not Do

It doesn’t upscale images to fill a target. If your image is already under the target, it stays untouched. This is compression, not expansion.

It also doesn’t guarantee pixel-perfect dimension preservation when dimensions need to be reduced. If the quality floor (50) still can’t hit the target, the API reduces dimensions — which means the output might be smaller in pixels than the input. If you need exact dimensions, resize first and then compress.

What’s Next

The Image Transformation API shares the same authentication and credit pool as Image Generation and Document Extraction, so you can chain all three in a single pipeline.

Try It

The Image Transformation API is part of Iteration Layer. Sign up for a free account — no credit card required — and get an API key in the dashboard.

Check the Image Transformation docs for the full list of 24 operations and their parameters. compress_to_size is one operation, but it solves a problem that’s been annoying developers for years.

Start building in minutes

Free trial included. No credit card required.