
PartEdit

Fine-Grained Image Editing using Pre-Trained Diffusion Models

PartEdit Teaser

Our approach, PartEdit, enables a wide range of fine-grained edits, allowing users to create highly customizable changes. The edits are seamless, precisely localized, and of high visual quality with no leakage into unedited regions.

Abstract

We present the first text-based image editing approach for object parts based on pre-trained diffusion models. Diffusion-based image editing approaches capitalize on the deep understanding of diffusion models of image semantics to perform a variety of edits. However, existing diffusion models lack sufficient understanding of many object parts, hindering fine-grained edits requested by users. To address this, we propose to expand the knowledge of pre-trained diffusion models to allow them to understand various object parts, enabling them to perform fine-grained edits. We achieve this by learning special textual tokens that correspond to different object parts through an efficient token optimization process. These tokens are optimized to produce reliable localization masks at each inference step to localize the editing region. Leveraging these masks, we design feature-blending and adaptive thresholding strategies to execute the edits seamlessly. To evaluate our approach, we establish a benchmark and an evaluation protocol for part editing. Experiments show that our approach outperforms existing editing methods on all metrics and is preferred by users 77-90% of the time in conducted user studies.


Method

Method figure

Overview of our proposed fine-grained part editing approach. The pipeline consists of two main steps: (1) learning part tokens by optimizing on a small set of images, and (2) using these tokens for novel non-binary blending within UNet layers at inference time.
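As a rough, hypothetical sketch of step (1), the snippet below optimizes a single part-token embedding so that its cross-attention map matches a handful of annotated part masks. The function `unet_cross_attention_map` and all other names are placeholders for illustration, not the released PartEdit implementation.

```python
import torch
import torch.nn.functional as F

# Hypothetical sketch: optimize a "part token" embedding so that its
# cross-attention map localizes the annotated part on a few images.
# `unet_cross_attention_map` stands in for whatever hook extracts the
# token-to-image attention; it is not a real API.

def optimize_part_token(unet_cross_attention_map, images, part_masks,
                        embed_dim=768, steps=500, lr=1e-3):
    # Learnable embedding for the new part token (e.g. "<torso>").
    part_token = torch.randn(1, embed_dim, requires_grad=True)
    optimizer = torch.optim.Adam([part_token], lr=lr)

    for _ in range(steps):
        total_loss = 0.0
        for image, mask in zip(images, part_masks):
            # Attention of the part token over spatial locations, in [0, 1],
            # resized to the resolution of the ground-truth part mask.
            attn = unet_cross_attention_map(image, part_token)
            attn = F.interpolate(attn[None, None], size=mask.shape[-2:],
                                 mode="bilinear")[0, 0]
            # Encourage the attention map to cover exactly the annotated part.
            total_loss = total_loss + F.binary_cross_entropy(
                attn.clamp(1e-6, 1 - 1e-6), mask.float())
        optimizer.zero_grad()
        total_loss.backward()
        optimizer.step()

    return part_token.detach()
```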

We present PartEdit, a fine-grained image editing framework for existing pre-trained UNet-based diffusion models. Our contributions are twofold:

  1. We propose a novel blending strategy, applied at every timestep and in every UNet layer, that uses non-binary masks obtained from part tokens optimized in a low-data setting (see the sketches below).
  2. We demonstrate the effectiveness of our framework on our custom-built benchmarks, PartEdit-Synth and PartEdit-Real, comparing against three groups of methods.

Since our method does not modify the underlying model, it preserves the model's existing capabilities; as one example, we showcase integration with an existing inversion technique for real-image editing below.
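To make step (2) concrete, here is a minimal sketch of the kind of non-binary blending applied inside the UNet: at every timestep and in every selected layer, features from the editing branch and the source branch are mixed with a soft mask derived from the part token's cross-attention. Names and tensor shapes are illustrative assumptions, not the exact implementation.

```python
import torch
import torch.nn.functional as F

def blend_features(source_feat, edit_feat, part_attn):
    """Soft per-layer blending of source and edited UNet features.

    source_feat, edit_feat: (B, C, H, W) features from the two branches.
    part_attn: (B, h, w) cross-attention map of the optimized part token.
    """
    # Normalize the attention map to [0, 1] and resize it to the feature size.
    lo = part_attn.amin(dim=(-2, -1), keepdim=True)
    hi = part_attn.amax(dim=(-2, -1), keepdim=True)
    mask = (part_attn - lo) / (hi - lo + 1e-8)
    mask = F.interpolate(mask.unsqueeze(1), size=source_feat.shape[-2:],
                         mode="bilinear")
    # Non-binary blend: edited content inside the part, source elsewhere.
    return mask * edit_feat + (1.0 - mask) * source_feat
```

In a full pipeline, a function like this would be called inside the chosen UNet layers at every denoising step, with the mask recomputed from that step's attention maps.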

Synthetic Image Editing Results

Below we showcase PartEdit results on synthetic images:

Comparison With Existing Blending Strategies

Below we showcase how PartEdit outperforms existing latent-blending strategies, including the variant that applies non-binary blending in latent space:

  1. Latent Blending (GT): Uses ground-truth binary masks at each timestep.
  2. Latent Blending (Ours): Uses non-binary masks at each timestep.
  3. Latent Blending (Otsu): Uses Otsu-binarized masks at each timestep.
  4. PartEdit (Ours): Uses non-binary masks per layer at each timestep.

Note: “Ours” indicates usage of non-binary masks obtained from optimized tokens. “Otsu” applies binarization to the same masks, while “GT” refers to ground-truth binary masks.
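For intuition, the difference between the non-binary and Otsu variants comes down to whether the attention-derived mask is thresholded before blending. Below is a small illustrative sketch (using scikit-image's Otsu threshold; function and variable names are assumptions, not the actual implementation):

```python
import numpy as np
from skimage.filters import threshold_otsu

def latent_blend(source_latent, edit_latent, soft_mask, binarize=False):
    """Blend two latents with either a soft or an Otsu-binarized mask.

    source_latent, edit_latent: (C, H, W) latents at the same timestep.
    soft_mask: (H, W) non-binary mask in [0, 1] from the part token.
    """
    mask = soft_mask
    if binarize:
        # "Otsu" variant: hard 0/1 mask from a global threshold.
        mask = (soft_mask > threshold_otsu(soft_mask)).astype(np.float32)
    # Broadcast over channels; soft masks give smooth transitions at part
    # boundaries, binary masks give hard seams.
    return mask[None] * edit_latent + (1.0 - mask[None]) * source_latent
```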

Qualitative Results - Masked

Below we showcase how we outperform existing mask-based editing strategies that use ground-truth masks:

Qualitative results on masked

Qualitative Results - Real

Below we showcase integration with an existing inversion technique (Ledits++), combined with our non-binary blending, compared to other methods:

Qualitative results on real

Different Edits Same Regions

Below we showcase different edits applied to the same regions, demonstrating the flexibility of the learned part tokens:

Different edits on the same region

Multiple Part Editing

Below we showcase the same edit applied to multiple parts (torso and head in this example):

Multiple part edits extension

BibTeX (PartEdit)

@InProceedings{TBD}