Improving Semantic Perception with Rectified Flow Features

1Max Planck Institute for Informatics, SIC · 2ETH Zürich - DALAB · 3TU Munich · 4Google

Abstract

We propose RIFF and iRIFF, reliable and generalizable methods to extract semantic features from rectified flow models that significantly improve downstream task performance compared to existing semantic feature extractors. Features from large-scale image generative models are known to encode rich semantic information, as recently demonstrated by many methods leveraging diffusion models as general feature extractors. However, existing methods have several limitations that we wish to overcome: they often require fine-tuning or combining multiple pre-trained models to achieve competitive performance. Instead, our approach is the first to extract and analyze features from rectified flow models, which leads to significantly improved downstream quality without additional bells and whistles. We employ a flow inversion mechanism to further improve feature quality and enhance the robustness of feature extraction by aligning the input noise with the data. In addition to achieving state-of-the-art results in zero-shot semantic correspondence, we extend the established set of feature benchmarks with vision-language grounding tasks for both images and videos and propose a novel grounding technique based purely on cross-attention, requiring no changes to the existing models. We show that stop words can be used to attract and filter out attention pollution. Our results show that rectified flow features significantly outperform previous works for zero-shot grounding without introducing additional fine-tuning or components.

Introduction

The Challenge: While diffusion models have revolutionized image generation, extracting meaningful semantic features from these models remains challenging. Existing methods typically rely on older diffusion architectures like Stable Diffusion 2.1, require extensive fine-tuning, or need to combine multiple large pre-trained models to achieve good performance.

The Opportunity: State-of-the-art image generators are shifting from diffusion models to rectified flow models, which offer more efficient and direct sampling through deterministic integration. However, no prior work has successfully extracted and analyzed semantic features from these newer architectures.

Our Solution: We introduce the first methods to extract high-quality semantic features from rectified flow models, specifically targeting DiT (Diffusion Transformer) architectures used in modern generators like FLUX and Mochi. Our approach is embarrassingly simple yet highly effective, achieving significant improvements across multiple benchmarks without requiring fine-tuning or model combinations.

Key Contributions

🚀 First Inverted Rectified Flow Features

We are the first to successfully extract semantic features from rectified flow models, unlocking the potential of modern generative architectures for computer vision tasks.

🔄 Flow Inversion for Robustness

Our iRIFF variant uses flow inversion to obtain structured latents that align with the data distribution, significantly improving feature quality and robustness.

🎯 Stop-word Attention Filtering

We discover and exploit how stop words "pollute" cross-attention maps, using them as attention magnets to improve vision-language grounding.

📊 Zero-shot Multi-domain Results

We extend semantic feature evaluation to both images and videos, achieving state-of-the-art results in zero-shot settings across multiple benchmarks.

Method Overview

RIFF Method Overview
Extracting DiT Features. We introduce RIFF and iRIFF, methods for extracting semantic features from rectified flow models using DiT architectures. RIFF injects scaled noise into clean latents, while iRIFF leverages flow inversion to obtain structured latents aligned with the data distribution. Features are extracted from intermediate DiT attention blocks, outperforming traditional U-Net-based features. The diagram shows double- and single-stream transformer blocks, highlighting semantic feature extraction points. V-L denotes vision-language tasks and V denotes vision tasks.
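
To make the two latent-preparation strategies concrete, the sketch below contrasts them: RIFF blends the clean latent with fresh noise along the rectified-flow interpolation, while iRIFF walks the flow ODE from the data towards noise before features are read out of an intermediate DiT block. This is a minimal sketch, not the released implementation: the `dit`, `text_emb`, and `blocks` names are illustrative assumptions, and the exact noise schedule used by FLUX/Mochi may differ from the linear convention shown here.

```python
import torch

@torch.no_grad()
def riff_latent(x0, t, generator=None):
    """RIFF: blend the clean latent with fresh Gaussian noise at time t,
    using the linear rectified-flow interpolation x_t = (1 - t) * x0 + t * eps."""
    eps = torch.randn(x0.shape, generator=generator, device=x0.device, dtype=x0.dtype)
    return (1.0 - t) * x0 + t * eps

@torch.no_grad()
def iriff_latent(x0, t, dit, text_emb, n_steps=10):
    """iRIFF: invert the flow ODE from the data towards noise so the noisy latent
    stays aligned with the model's own trajectory (plain Euler steps, time 0 -> t)."""
    x = x0.clone()
    ts = torch.linspace(0.0, float(t), n_steps + 1)
    for i in range(n_steps):
        dt = ts[i + 1] - ts[i]
        v = dit(x, timestep=ts[i], text_emb=text_emb)   # predicted velocity field (assumed interface)
        x = x + dt * v                                   # Euler step along the (straight) flow
    return x

def extract_block_features(dit, x_t, t, text_emb, block_idx):
    """Read the intermediate activations of one DiT block via a forward hook."""
    feats = {}
    hook = dit.blocks[block_idx].register_forward_hook(
        lambda module, inputs, output: feats.update(tokens=output)
    )
    with torch.no_grad():
        dit(x_t, timestep=t, text_emb=text_emb)          # a single denoising forward pass
    hook.remove()
    return feats["tokens"]                               # (B, N_image_tokens, C) semantic features
```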

Stop-word Filtering for Referral Segmentation

A key discovery in our work is that stop words (e.g., "the", "a", "of") act as attention magnets in cross-attention maps, absorbing significant attention scores and creating noisy backgrounds that hurt segmentation quality. We exploit this phenomenon by strategically adding extra stop words to referral expressions, which further concentrate the attention pollution, and then filtering out all stop words to obtain cleaner attention maps. This simple yet effective technique dramatically improves the quality of attention-based segmentation across both image and video domains, leading to more precise object localization.
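
A minimal sketch of the idea, assuming a head-averaged cross-attention matrix of shape (image tokens × text tokens) and a tokenizer that yields word-level strings; the stop-word list, the choice of "the" as the extra magnet token, and the sum-then-normalize aggregation are illustrative assumptions rather than the exact recipe.

```python
import torch

STOP_WORDS = {"the", "a", "an", "of", "is", "on", "in", "with", "and", "to"}

def augment_prompt(expression: str, n_extra: int = 4) -> str:
    # Append extra stop words that act as "attention magnets" for background scores.
    return expression + " " + " ".join(["the"] * n_extra)

def filtered_attention(cross_attn: torch.Tensor, tokens: list) -> torch.Tensor:
    """cross_attn: (N_image_tokens, N_text_tokens) head-averaged cross-attention.
    Drop stop-word columns, aggregate the remaining content words, normalize to [0, 1]."""
    keep = [i for i, tok in enumerate(tokens) if tok.strip().lower() not in STOP_WORDS]
    attn = cross_attn[:, keep].sum(dim=-1)               # per-pixel mass on content words only
    attn = (attn - attn.min()) / (attn.max() - attn.min() + 1e-8)
    return attn                                           # (N_image_tokens,) saliency over image tokens
```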

Stop-word Filtering Example
Influence of stop words on referral segmentation. We demonstrate how stop words "pollute" cross-attention scores by attracting high attention to background areas. By adding extra stop words as attention magnets and then filtering them out, we achieve sharper attention maps focused on core concepts (nouns, verbs, adjectives). The example shows attention maps before and after stop-word filtering, with segmentation results using SAM 2. Gray tokens indicate filtered stop words.

Semantic Correspondence Results

On semantic correspondence benchmarks (SPair-71k, PF-Pascal), our methods establish new state-of-the-art results. iRIFF consistently outperforms RIFF, demonstrating the importance of proper latent alignment through flow inversion. Compared to previous single-model approaches like DIFT, we achieve substantial improvements while using a much simpler pipeline: no fine-tuning, no model combinations, just better base models and smarter feature extraction.
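
For reference, zero-shot correspondence with any dense feature map reduces to nearest-neighbour matching in feature space. The snippet below is a generic cosine-similarity matcher (keypoints are assumed to already be scaled to the feature resolution), not the benchmark evaluation code.

```python
import torch
import torch.nn.functional as F

def match_keypoints(feat_src, feat_tgt, kps_src):
    """feat_src, feat_tgt: (C, H, W) dense features; kps_src: iterable of (x, y)
    in feature-map coordinates. Returns the best-matching (x, y) in the target map."""
    C, H, W = feat_tgt.shape
    tgt = F.normalize(feat_tgt.reshape(C, -1), dim=0)         # unit-norm target descriptors
    preds = []
    for x, y in kps_src:
        q = F.normalize(feat_src[:, int(y), int(x)], dim=0)   # query descriptor at the keypoint
        sim = q @ tgt                                          # cosine similarity to every target location
        idx = int(sim.argmax())
        preds.append((idx % W, idx // W))                      # flat index back to (x, y)
    return preds
```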

Semantic Correspondence Examples
Semantic Correspondence Examples. Our RIFF and iRIFF methods achieve state-of-the-art performance in finding semantic correspondences between images. The examples show how our rectified flow features successfully identify semantically corresponding points across different objects of the same category, despite variations in appearance, pose, and viewpoint. Our approach achieves a 12.8% performance gain compared to previous single-model features.

SPair-71k Semantic Correspondence Results (PCK@0.1)

Method          Plane  Bicycle  Bird  Boat  Bottle  Bus   Car   Cat   Chair  Cow   Dog   Horse  Motorbike  Person  Plant  Sheep  Train  TV    Average
DINOv2          53.5   54.0     60.2  35.5  44.4    36.3  31.7  61.3  37.4   54.7  52.5  51.5   48.8       48.2    37.8   44.1   47.4   38.2  46.5
DIFT            63.5   54.5     80.8  34.5  46.2    52.7  48.3  77.7  39.0   76.0  54.9  61.3   53.3       46.0    57.8   57.1   71.1   63.4  57.7
SD + DINOv2     73.0   64.1     86.4  40.7  52.9    55.0  53.8  78.6  45.5   77.3  64.7  69.7   63.3       69.2    58.4   67.6   66.2   53.5  64.0
RIFF (ours)     72.6   62.8     80.1  44.7  50.0    64.8  56.1  82.8  45.7   79.6  65.6  67.2   65.9       64.0    57.0   58.0   70.5   61.6  63.9
iRIFF (ours)    73.8   63.5     84.2  45.0  53.2    66.2  55.8  83.6  46.7   81.0  64.1  70.7   69.2       69.0    55.5   61.0   68.1   60.7  65.1

Bold = best, underlined = second-best. Our single model outperforms the combination of SD + DINOv2.

Referral Image Object Segmentation

We extend semantic feature evaluation beyond pure vision tasks to vision-language grounding, testing on the RefCOCO/RefCOCO+/RefCOCOg datasets. Our stop-word filtering technique proves crucial: without it, attention maps are dominated by background noise. With filtering, we achieve remarkable zero-shot performance that rivals specialized grounding models while requiring a much simpler architecture. Our approach achieves a 16.4% performance gain over previous training-free methods.
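
A hedged sketch of how a filtered attention map can be turned into a SAM prompt: take the attention peak as a positive point click and keep the highest-scoring mask. The `SamPredictor` usage in the comments follows the public segment_anything API; the single-peak prompt and the bilinear rescaling are illustrative simplifications, not the exact pipeline.

```python
import numpy as np

def attention_to_point_prompt(attn_map: np.ndarray, image_hw):
    """Pick the attention peak and rescale it to image coordinates for SAM."""
    h_attn, w_attn = attn_map.shape
    y, x = np.unravel_index(attn_map.argmax(), attn_map.shape)
    H, W = image_hw
    coords = np.array([[x * W / w_attn, y * H / h_attn]], dtype=np.float32)
    labels = np.array([1], dtype=np.int32)               # 1 = positive (foreground) click
    return coords, labels

# Example with the segment_anything SamPredictor (predictor already set up):
#   predictor.set_image(image)
#   coords, labels = attention_to_point_prompt(attn_map, image.shape[:2])
#   masks, scores, _ = predictor.predict(point_coords=coords, point_labels=labels)
#   mask = masks[scores.argmax()]                        # keep the highest-scoring mask
```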

Referral Image Object Segmentation Examples
Referral Image Object Segmentation Examples. Our method leverages cross-attention maps from rectified flow models to achieve zero-shot referral segmentation. By applying stop-word filtering to attention maps and using SAM for mask generation, we achieve state-of-the-art results on RefCOCO benchmarks without requiring any task-specific training or fine-tuning.

RefCOCO Image Referral Segmentation Results

Method          Vision Backbone  RefCOCO (oIoU)          RefCOCO+ (oIoU)         RefCOCOg (oIoU)
                                 val    testA  testB     val    testA  testB     val    test
Zero-shot methods w/o additional training
Grad-CAM        R50              23.44  23.91  21.60     26.67  27.20  24.84     23.00  23.91
Global-Local    R50              24.55  26.00  21.03     26.62  29.99  22.23     28.92  30.48
Global-Local    ViT-B            21.71  24.48  20.51     23.70  28.12  21.86     26.57  28.21
Ref-Diff        ViT-B            35.16  37.44  34.50     35.56  38.66  31.40     38.62  37.50
TAS             ViT-B            29.53  30.26  28.24     33.21  38.77  28.01     35.84  36.16
RIFF (ours)     DiT              38.29  43.07  34.01     39.58  44.78  35.01     39.45  39.53
iRIFF (ours)    DiT              39.23  44.05  35.34     41.71  45.24  35.95     40.25  40.38

Bold = best, underlined = second-best among training-free methods.

Video Referral Object Segmentation

We demonstrate that our rectified flow features scale effectively to video understanding tasks. Using Mochi (a video rectified flow model), we extract features from the first frame and leverage SAM2's temporal propagation capabilities for consistent video segmentation. Our stop-word filtering technique proves even more crucial in the video setting, where temporal propagation amplifies any attention noise present in the first frame. The results show substantial improvements over existing training-free methods, establishing new benchmarks for zero-shot video referral segmentation.
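
Roughly, the video pipeline amounts to one point prompt on the first frame followed by SAM 2's built-in propagation. The calls below mirror the public SAM 2 video-predictor interface (facebookresearch/sam2), but the exact argument names and the 0-logit mask threshold should be treated as assumptions in this sketch.

```python
import numpy as np
from sam2.build_sam import build_sam2_video_predictor

def segment_video(video_dir, point_xy, checkpoint, model_cfg):
    """Seed SAM 2 with one positive click on frame 0 and propagate it through the video."""
    predictor = build_sam2_video_predictor(model_cfg, checkpoint)
    state = predictor.init_state(video_path=video_dir)           # caches all frames of the clip
    predictor.add_new_points_or_box(
        inference_state=state, frame_idx=0, obj_id=1,
        points=np.array([point_xy], dtype=np.float32),           # attention peak on frame 0
        labels=np.array([1], dtype=np.int32),                    # positive click
    )
    masks = {}
    # SAM 2's memory attention carries the object mask through the remaining frames.
    for frame_idx, obj_ids, mask_logits in predictor.propagate_in_video(state):
        masks[frame_idx] = (mask_logits[0] > 0.0).cpu().numpy()  # threshold logits at 0
    return masks
```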

Video Referral Segmentation Results
Video Referral Object Segmentation Examples. Our method extends seamlessly to video domains using Mochi rectified flow models. We extract cross-attention maps from the first frame and use SAM2 for temporal propagation. The approach is training-free and operates in a zero-shot manner, achieving an 18% performance gain over previous methods on video referral segmentation benchmarks.

Ref-DAVIS17 Video Results

Method                J&F   J     F
Training-Free with Grounded-SAM
Grounded-SAM          65.2  62.3  68.0
Grounded-SAM2         66.2  62.6  69.7
AL-Ref-SAM2           74.2  70.4  78.0
Training-Free
G-L + SAM2            40.6  37.6  43.6
G-L (SAM) + SAM2      46.9  44.0  49.7
RIFF + SAM2 (ours)    53.7  51.1  56.3
iRIFF + SAM2 (ours)   54.6  50.9  58.2

Component Ablation Study

Inv.  E-SW  SW  SAM2  J&F   J     F     PA
✓     ✓     ✓   H     54.6  50.9  58.2  60.2
–     ✓     ✓   H     53.7  51.1  56.3  57.3
–     –     ✓   H     50.7  47.4  53.9  48.4
–     –     –   H     48.0  45.1  50.8  47.6
✓     ✓     ✓   S     50.1  46.7  53.5  60.2

Inv. = inversion, E-SW = extra stop words, SW = stop word filtering, SAM2 H/S = huge/small model, PA = point accuracy

Societal Impact

Our RIFF and iRIFF methods provide powerful tools for extracting semantic features from rectified flow models, enabling advances in computer vision tasks such as semantic correspondence and referral segmentation. These capabilities have the potential to significantly enhance various applications including medical image analysis, robotics, autonomous systems, and assistive technologies for people with visual impairments.

By providing training-free, zero-shot methods that work across different domains (images and videos), our approach democratizes access to state-of-the-art semantic understanding capabilities. This is particularly valuable for researchers and practitioners who may not have access to large computational resources or extensive labeled datasets typically required for fine-tuning specialized models.

However, as with any advancement in computer vision and AI, there are potential ethical considerations. Improved semantic understanding capabilities could be misused for surveillance or privacy violation purposes. We emphasize the importance of deploying these technologies responsibly, with appropriate safeguards and consideration for privacy rights. We encourage the research community to continue developing ethical guidelines for the deployment of semantic feature extraction technologies and to consider the broader societal implications of these advancements.

BibTeX