We present the first text-based image editing approach for object parts based on pre-trained diffusion models. Diffusion-based image editing approaches capitalize on diffusion models' deep understanding of image semantics to perform a variety of edits. However, existing diffusion models lack sufficient understanding of many object parts, hindering fine-grained edits requested by users. To address this, we propose to expand the knowledge of pre-trained diffusion models so that they understand various object parts, enabling them to perform fine-grained edits. We achieve this by learning special textual tokens that correspond to different object parts through an efficient token optimization process. These tokens are optimized to produce reliable localization masks at each inference step to localize the editing region. Leveraging these masks, we design feature-blending and adaptive thresholding strategies to execute the edits seamlessly. To evaluate our approach, we establish a benchmark and an evaluation protocol for part editing. Experiments show that our approach outperforms existing editing methods on all metrics and is preferred by users 77-90% of the time in conducted user studies.
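For a concrete picture of the mechanism described above, the PyTorch sketch below illustrates how a soft part mask could be read out of the cross-attention of an optimized part token at a denoising step, adaptively thresholded, and used for feature blending. All function names, tensor shapes, and the quantile-based thresholding rule are illustrative assumptions, not the actual PartEdit implementation.

```python
import torch
import torch.nn.functional as F

def part_mask_from_attention(attn_maps: torch.Tensor, token_index: int,
                             spatial_size: tuple[int, int]) -> torch.Tensor:
    """Turn cross-attention of a learned part token into a soft [0, 1] mask.

    attn_maps: (heads, query_pixels, tokens) cross-attention probabilities
    collected at one denoising step (shapes/names are assumptions).
    """
    # Average attention over heads for the optimized part token.
    part_attn = attn_maps[..., token_index].mean(dim=0)       # (query_pixels,)
    h, w = spatial_size
    mask = part_attn.reshape(1, 1, h, w)
    # Normalize to [0, 1] so the map can act as a soft blending weight.
    mask = (mask - mask.min()) / (mask.max() - mask.min() + 1e-8)
    return mask

def adaptive_threshold(mask: torch.Tensor, quantile: float = 0.8) -> torch.Tensor:
    """Suppress low-confidence regions with a per-image, per-step threshold
    (a quantile rule is assumed here purely for illustration)."""
    tau = torch.quantile(mask.flatten(), quantile)
    return torch.where(mask >= tau, mask, torch.zeros_like(mask))

def blend_features(edited: torch.Tensor, source: torch.Tensor,
                   mask: torch.Tensor) -> torch.Tensor:
    """Soft blend: edited features inside the part, source features elsewhere."""
    mask = F.interpolate(mask, size=edited.shape[-2:], mode="bilinear",
                         align_corners=False)
    return mask * edited + (1.0 - mask) * source
```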
We present PartEdit, a fine-grained image editing framework for existing pre-trained UNet-based diffusion models. Our contributions are twofold:
Since our method does not change the underlying model, it preserves the model's existing capabilities. As one example, we showcase integration with existing inversion techniques.
Below is another slider demonstrating PartEdit results on synthetic images:
Below we showcase how we outperform existing strategies, and that binarizing the blending masks degrades results:
Note: “Ours” uses the non-binary masks obtained from the optimized tokens. “Otsu” applies Otsu binarization to the same masks, while “GT” refers to ground-truth binary masks.
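As a minimal illustration of the difference between these variants, the sketch below blends an edited image into the source with either the raw soft mask (the “Ours”-style non-binary blending) or an Otsu-binarized version of it (the “Otsu” baseline). The function name and array conventions are assumptions made for illustration only.

```python
import numpy as np
from skimage.filters import threshold_otsu

def blend_images(edited: np.ndarray, source: np.ndarray, mask: np.ndarray,
                 binarize: bool = False) -> np.ndarray:
    """Blend an edited image into the source using a [0, 1] part mask.

    binarize=False corresponds to non-binary blending ("Ours"-style);
    binarize=True mimics the "Otsu" baseline that thresholds the same mask.
    Images are assumed to be float arrays of shape (H, W, 3), mask (H, W).
    """
    if binarize:
        mask = (mask >= threshold_otsu(mask)).astype(np.float32)
    mask = mask[..., None]  # broadcast the mask over the channel dimension
    return mask * edited + (1.0 - mask) * source
```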
Below we showcase how we outperform existing mask-based strategies, even when they are given ground-truth masks:
Below we showcase integration with an existing inversion technique (LEDITS++) using non-binary blending, compared to other methods:
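Because the editing only blends intermediate results and leaves the model unchanged, it can sit on top of an inversion trajectory such as the one produced by LEDITS++. The sketch below shows this pattern in a simplified, latent-level form: a reconstruction branch and an editing branch are denoised from the inverted latents and blended per step with the soft part mask. The loop structure, `get_part_mask`, and latent-level blending are assumptions for illustration; this is neither the LEDITS++ API nor the exact PartEdit integration.

```python
import torch

@torch.no_grad()
def edit_from_inversion(inverted_latents, unet, scheduler, src_cond, edit_cond,
                        get_part_mask):
    """Hypothetical sketch: non-binary part blending on top of an inversion
    trajectory (e.g. one produced by a LEDITS++-style inversion pass).

    inverted_latents: latents from the inversion pass (index 0 = starting noise).
    get_part_mask(t): returns the soft [0, 1] mask of the optimized part token
                      at timestep t (assumed precomputed or cached).
    """
    latents_src = inverted_latents[0].clone()   # reconstruction branch
    latents_edit = inverted_latents[0].clone()  # editing branch
    for t in scheduler.timesteps:
        noise_src = unet(latents_src, t, encoder_hidden_states=src_cond).sample
        noise_edit = unet(latents_edit, t, encoder_hidden_states=edit_cond).sample
        latents_src = scheduler.step(noise_src, t, latents_src).prev_sample
        latents_edit = scheduler.step(noise_edit, t, latents_edit).prev_sample
        # Non-binary blending: keep the edit inside the part, the source elsewhere.
        mask = get_part_mask(t).to(latents_edit)
        latents_edit = mask * latents_edit + (1.0 - mask) * latents_src
    return latents_edit
```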
Below we showcase different edits applied to the same regions:
Below we showcase the same edit applied to multiple parts (torso and head in this example):
@InProceedings{TBD}