Run a Complete Image Processing Pipeline in One n8n Node


n8n’s Edit Image Node Has Limits

If you’ve tried building a production image pipeline in n8n, you’ve hit the walls. The built-in Edit Image node handles basic resize and crop, but it has known issues with metadata after resize operations, limited crop modes, no smart crop, and no background removal. Need to chain resize, sharpen, and convert? That’s three separate Edit Image nodes, each with its own configuration and failure points.

The community workaround is a Function node running Sharp or ImageMagick via shell exec. This works until it doesn’t — you’re managing native dependencies inside n8n’s runtime, debugging segfaults when libvips encounters a CMYK TIFF, and hoping your Docker image has the right binaries installed.

Each approach handles one operation at a time. For a real pipeline — resize to fit, smart crop around the subject, sharpen for web, convert to WebP, compress to a target file size — you’re chaining five or six nodes and praying the binary data flows correctly between each one.

30 Operations, One API Call, One Credit

Iteration Layer Image Transformation runs up to 30 operations in a single request. You define the operations in sequence — resize, then crop, then sharpen, then convert — and the engine executes them in order. The output of each operation feeds into the next. One API call. One credit. No matter how many operations you chain.

There are 24 operation types available: resize, smart crop, crop, rotate, flip, flop, sharpen, blur, modulate, tint, grayscale, negate, normalize, auto contrast, threshold, extend, flatten, trim, remove background, upscale, convert format, compress to size, extract metadata, and composite. That covers everything from basic format conversion to AI-powered subject detection and background removal.

The n8n community node exposes all of this through dropdowns and input fields — no JSON editing, no Function nodes, no native dependencies to manage.

The Workflow: Trigger to Pipeline to Upload

Here’s what we’re building in n8n: an automated pipeline that watches for new images, runs a multi-step transformation, adds a branded watermark via image generation, and uploads the result to cloud storage. Four nodes.

Step 1: Google Drive Trigger

Add a Google Drive Trigger node to the canvas. In the node settings, set Trigger On to File Created. Under Folder, select the folder where source images are uploaded.

The trigger fires whenever a new file appears in the watched folder and passes the file as n8n binary data.

You can swap this trigger for any input source — a Webhook node that receives uploads from your app, an S3 Event Trigger, an Email Trigger (IMAP) with image attachments, or a Schedule Trigger that polls an FTP server. The rest of the pipeline stays the same.

Step 2: Iteration Layer (Image Transformation)

Add an Iteration Layer node. In the Resource dropdown, select Image Transformation. Under File Input Mode, select Binary Data so the node reads the image from the trigger.

Now add operations. Click the Add Operation button for each step. The node shows a Type dropdown for each operation — select the operation type and the relevant fields appear below it.

Operation 1 — Resize: Select Resize from the Type dropdown. Set Width to 1200 and Height to 1200. In the Fit dropdown, select Inside. This scales the image down to fit within 1200x1200 while maintaining the aspect ratio. The image will not be upscaled if it’s already smaller.
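
The Inside fit is standard aspect-fit scaling with no upscaling. A quick sketch of the math in plain Python (our illustration of the behavior described above, not the engine's code):

```python
def fit_inside(src_w, src_h, max_w, max_h):
    """Scale (src_w, src_h) to fit within (max_w, max_h),
    preserving aspect ratio and never upscaling."""
    # Capping the scale factor at 1.0 is what prevents upscaling.
    scale = min(max_w / src_w, max_h / src_h, 1.0)
    return round(src_w * scale), round(src_h * scale)

# A 4000x3000 photo fits 1200x1200 as 1200x900;
# an 800x600 image passes through unchanged.
```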

Operation 2 — Smart Crop: Click Add Operation again. Select Smart Crop from the Type dropdown. Set Width to 1200 and Height to 800. The engine detects the primary subject in the image and crops around it, keeping the subject centered. This is the operation that n8n’s built-in Edit Image node cannot do — it requires subject detection, not just coordinate math.

Operation 3 — Sharpen: Click Add Operation. Select Sharpen from the Type dropdown. Set Sigma to 0.5. Always sharpen after resize — downscaling softens the image, and sharpening before the resize wastes the effort because the downscale undoes it.

Operation 4 — Convert Format: Click Add Operation. Select Convert from the Type dropdown. In the Format dropdown, select WebP. Set Quality to 85. This converts the processed image to WebP at 85% quality — a good balance between file size and visual fidelity for web delivery.

The operations array under the hood looks like this:

[
  {
    "type": "resize",
    "width_in_px": 1200,
    "height_in_px": 1200,
    "fit": "inside"
  },
  {
    "type": "smart_crop",
    "width_in_px": 1200,
    "height_in_px": 800
  },
  {
    "type": "sharpen",
    "sigma": 0.5
  },
  {
    "type": "convert",
    "format": "webp",
    "quality": 85
  }
]

But you don’t write this JSON. The n8n UI builds it from your dropdown selections and field inputs. Four operations, one node, one credit.
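
If you call the API directly instead of going through n8n, the same pipeline is just a list of objects in the shape shown above. A minimal sketch in Python (the field names match the JSON above; the request itself — endpoint, auth — is omitted, see the API docs):

```python
def op(type_, **fields):
    """Build one operation dict in the shape shown above."""
    return {"type": type_, **fields}

# The same four-step pipeline as the n8n walkthrough.
pipeline = [
    op("resize", width_in_px=1200, height_in_px=1200, fit="inside"),
    op("smart_crop", width_in_px=1200, height_in_px=800),
    op("sharpen", sigma=0.5),
    op("convert", format="webp", quality=85),
]
```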

The node returns the processed image as n8n binary data. You can click the output to preview the result directly in the n8n editor before connecting it to the next node.

Step 3: Iteration Layer (Image Generation)

This step adds a branded watermark to the processed image. Add another Iteration Layer node. In the Resource dropdown, select Image Generation.

Set the Canvas Width to 1200 and Canvas Height to 800 — matching the smart crop dimensions from the previous step. Set the Output Format to webp.

Now build the layers. The generation API composites layers from bottom to top.

Layer 1 — Solid background: Click Add Layer. Select Solid from the Type dropdown. Set the Color to #000000. Set Width to 1200 and Height to 800. This creates a black fallback background in case the image has transparency.

Layer 2 — Processed image: Click Add Layer. Select Image from the Type dropdown. Under Source, select Binary Data From Previous Node. Set Position X to 0 and Position Y to 0. Set Width to 1200 and Height to 800. This places the transformed image from the previous node on top of the background.

Layer 3 — Brand text: Click Add Layer. Select Text from the Type dropdown. Enter your brand name in the Text field — for example, ACME Corp. Set Position X to 40 and Position Y to 760. Set Font Size to 24. Set Color to #ffffff. Set Opacity to 40. This places a semi-transparent white text watermark near the bottom-left corner.

The layers JSON for reference:

[
  {
    "type": "solid",
    "color": "#000000",
    "dimensions": { "width_in_px": 1200, "height_in_px": 800 }
  },
  {
    "type": "image",
    "source": "binary",
    "position": { "x_in_px": 0, "y_in_px": 0 },
    "dimensions": { "width_in_px": 1200, "height_in_px": 800 }
  },
  {
    "type": "text",
    "text": "ACME Corp",
    "position": { "x_in_px": 40, "y_in_px": 760 },
    "font_size_in_px": 24,
    "color": "#ffffff",
    "opacity_in_percent": 40
  }
]

If you don’t need a watermark, skip this node entirely. The Image Transformation node’s output is already a finished image.

Step 4: Upload to Storage

Connect the output to an S3 node (or Google Drive, or any storage node). The binary data flows directly — configure your bucket, set the key pattern to something like processed/{{ $json.fileName }}.webp, and the processed image uploads automatically.

For Google Drive, select the destination folder and the node writes the file with the original filename plus the new extension.

Operation Order Matters

The 30-operation limit gives you room, but order affects the output. A few rules worth knowing:

Sharpen after resize. Downscaling inherently softens the image. Sharpening first wastes the computation because the resize undoes the sharpening. Resize, then sharpen.

Convert last. Converting to a lossy format mid-pipeline means every subsequent operation compounds the quality loss. Keep the image in its source format through all transformations, then convert as the final step.

Smart crop before borders. If you use Extend to add padding before Smart Crop, the engine includes the padding in its subject detection. Crop the raw image first, then add borders.
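
These rules are easy to encode as a sanity check before sending a pipeline. A hypothetical helper (the rule set and function are ours, not part of the API):

```python
def check_order(ops):
    """Return warnings for the common ordering mistakes described above."""
    types = [o["type"] for o in ops]
    warnings = []
    # Sharpen after resize: the downscale would undo an earlier sharpen.
    if "sharpen" in types and "resize" in types:
        if types.index("sharpen") < types.index("resize"):
            warnings.append("sharpen before resize: the downscale undoes it")
    # Convert last: lossy re-encodes mid-pipeline compound quality loss.
    if "convert" in types and types.index("convert") != len(types) - 1:
        warnings.append("convert mid-pipeline: lossy re-encodes compound")
    # Smart crop before borders: padding would skew subject detection.
    if "extend" in types and "smart_crop" in types:
        if types.index("extend") < types.index("smart_crop"):
            warnings.append("extend before smart_crop: padding skews detection")
    return warnings
```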

Beyond the Basic Pipeline

The four-node workflow above is a starting point. Some common variations:

E-commerce multi-size output: Run the Image Transformation node multiple times with different dimensions — 1200x1200 for the product page, 600x600 for the category listing, 150x150 with smart crop for the cart thumbnail. Each pass uses the same source image.
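
If you script this variant against the API directly, it amounts to generating one operations array per target size from the same source. A sketch using the sizes above (the field names follow the JSON shown earlier):

```python
# (name, square size in px, crop/resize choice) — from the example above.
SIZES = [
    ("product", 1200, "resize"),
    ("category", 600, "resize"),
    ("thumbnail", 150, "smart_crop"),
]

def build_variants():
    """One pipeline per output size; only the thumbnail uses smart crop."""
    variants = {}
    for name, px, op_type in SIZES:
        ops = [{"type": op_type, "width_in_px": px, "height_in_px": px}]
        if op_type == "resize":
            ops[0]["fit"] = "inside"  # aspect-fit, no upscaling
        ops.append({"type": "convert", "format": "webp", "quality": 85})
        variants[name] = ops
    return variants
```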

User upload processing: Trigger via webhook. Apply smart crop to a square, remove background for profile pictures, compress to a target file size of 500 KB. Route the result to your CDN.
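
Compress-to-target-size engines typically search the quality setting for the highest value whose encoded output fits the byte budget. A generic sketch of that idea (our illustration, not necessarily Iteration Layer's actual algorithm; `size_at` stands in for an encode-and-measure step):

```python
def quality_for_budget(size_at, budget, lo=1, hi=100):
    """Binary-search the highest quality whose encoded size fits the budget.
    size_at(q) -> bytes; assumed non-decreasing in q."""
    best = lo
    while lo <= hi:
        mid = (lo + hi) // 2
        if size_at(mid) <= budget:
            best, lo = mid, mid + 1  # fits: try higher quality
        else:
            hi = mid - 1             # too big: back off
    return best
```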

Document scan cleanup: Chain grayscale, auto contrast, sharpen (sigma 1.0), and threshold to produce a clean black-and-white scan. Follow it with a Document Extraction node — cleaner input means higher confidence scores.
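
As an operations array, the scan-cleanup chain might look like this (the underscore naming for auto contrast is an assumption based on the "smart_crop" convention shown earlier; check the operation docs for exact type strings):

```python
# Type strings assumed to follow the underscore convention of "smart_crop".
scan_cleanup = [
    {"type": "grayscale"},
    {"type": "auto_contrast"},
    {"type": "sharpen", "sigma": 1.0},
    {"type": "threshold"},
]
```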

Get Started

Install the Iteration Layer community node from the n8n UI — search for n8n-nodes-iterationlayer under Settings > Community Nodes. The Image Transformation docs cover all 24 operations with parameters and examples. The n8n integration docs walk through file input modes, binary data handling, and async processing.

Start with a single image. Add a resize and convert operation, run the workflow, and check the output in the n8n preview. Once that works, add smart crop, sharpen, and the watermark generation step. Sign up to get your API key.

Build your first workflow in minutes

Chain our APIs together and ship a complete pipeline before lunch. Free trial credits included — no credit card required.