RefVNLI: Towards Scalable Evaluation of
Subject-driven Text-to-image Generation

*Work done during an internship at Google Research
1Google Research   2Ben Gurion University




Abstract

Subject-driven text-to-image (T2I) generation aims to produce images that align with a given textual description, while preserving the visual identity from a referenced subject image. Despite its broad downstream applicability - ranging from enhanced personalization in image generation to consistent character representation in video rendering - progress in this field is limited by the lack of reliable automatic evaluation. Existing methods either assess only one aspect of the task (i.e., textual alignment or subject preservation), misalign with human judgments, or rely on costly API-based evaluation. To address this gap, we introduce RefVNLI, a cost-effective metric that evaluates both textual alignment and subject preservation in a single run. Trained on a large-scale dataset derived from video-reasoning benchmarks and image perturbations, RefVNLI outperforms or statistically matches existing baselines across multiple benchmarks and subject categories (e.g., Animal, Object), achieving up to 6.4-point gains in textual alignment and 5.9-point gains in subject preservation.

Data Collection

To train RefVNLI, we collect a large-scale dataset of ⟨Image_ref, prompt, Image_tgt⟩ triplets, each with two binary labels: one for subject preservation of Image_ref in Image_tgt, and one for textual alignment between the prompt and Image_tgt. This involves first creating subject-driven {Image_ref, Image_tgt} pairs, followed by automatic generation of subject-focused prompts for each Image_tgt.
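The triplet-plus-two-labels structure described above can be sketched as a simple record. This is an illustrative sketch only; the field names and file paths are hypothetical, not the paper's actual schema.

```python
from dataclasses import dataclass

@dataclass
class RefVNLIExample:
    """One training triplet with its two binary labels (hypothetical schema)."""
    ref_image_path: str      # Image_ref: the reference subject image
    prompt: str              # subject-focused text prompt
    tgt_image_path: str      # Image_tgt: the target image being judged
    subject_preserved: bool  # label 1: identity of Image_ref kept in Image_tgt
    text_aligned: bool       # label 2: prompt matches Image_tgt

# Toy example with made-up paths and prompt
example = RefVNLIExample(
    ref_image_path="ref/parrot.jpg",
    prompt="The parrot perched near a waterfall",
    tgt_image_path="tgt/parrot_waterfall.jpg",
    subject_preserved=True,
    text_aligned=True,
)
```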

Qualitative Results

We compare RefVNLI with DreamBench++ (a metric that relies on API calls to LLMs) and CLIP (an embedding-based metric), both for Subject Preservation (SP) and for Textual Alignment (TA). RefVNLI exhibits better robustness to identity-agnostic changes (SP), such as the zoomed-out parrot (top-middle) and the zoomed-out person with different attire (bottom-middle). It is also more sensitive to identity-defining traits, penalizing changed facial features (left-most person) and mismatched object patterns (left and middle balloons). Additionally, RefVNLI excels at detecting text-image mismatches (TA), as seen in its penalization of the top-left image for lacking a waterfall.
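For context, an embedding-based baseline like CLIP scores the two criteria independently: image-image similarity for subject preservation and text-image similarity for textual alignment. The sketch below illustrates that scoring pattern with a toy deterministic `embed()` stub standing in for a real CLIP encoder; the function names and inputs are assumptions for illustration, not the actual baseline implementation.

```python
import numpy as np

def embed(x: str, dim: int = 8) -> np.ndarray:
    """Toy stand-in for a CLIP image/text encoder: deterministic unit vector."""
    rng = np.random.default_rng(sum(ord(c) for c in x))  # seed from the input string
    v = rng.normal(size=dim)
    return v / np.linalg.norm(v)

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(np.dot(u, v))  # inputs are already unit-normalized

def clip_style_scores(ref_image: str, prompt: str, tgt_image: str):
    """Return (subject_preservation, textual_alignment) similarity scores."""
    tgt = embed(tgt_image)
    sp = cosine(embed(ref_image), tgt)  # image-image similarity -> SP
    ta = cosine(embed(prompt), tgt)     # text-image similarity  -> TA
    return sp, ta

sp, ta = clip_style_scores(
    "ref/parrot.jpg", "a parrot near a waterfall", "tgt/out.jpg"
)
```

Because the two scores are computed independently, a CLIP-style metric cannot distinguish identity-defining changes (e.g., altered facial features) from identity-agnostic ones (e.g., zooming out), which is the gap the qualitative comparison above highlights.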

Rare Entities

We test RefVNLI's ability to assess uncommon subjects (e.g., scientific animal names, lesser-known dishes). To this end, we employ a dataset where human annotators compared image pairs, selecting the better one based on Textual Alignment (TA), Image Quality (IQ) (evaluating general depiction of the entity rather than exact reference-adherence), and Overall Preference (OP). We compare RefVNLI with CLIP and DreamBench++ in aligning with human preferences (top rows of each example). The higher of the two criterion-wise scores is emphasized unless both are equal. RefVNLI consistently aligns with human judgments across all three criteria.
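The pairwise protocol above reduces to a simple agreement accuracy: for each pair, a metric "agrees" when it scores the human-preferred image higher. A minimal sketch, with illustrative toy scores rather than real data:

```python
def pairwise_agreement(score_pairs, human_choices):
    """Fraction of pairs where the metric prefers the same image as the human.

    score_pairs:   list of (score_A, score_B) tuples from a metric
    human_choices: list of 'A' or 'B', the annotator's pick per pair
    """
    hits = 0
    for (a, b), choice in zip(score_pairs, human_choices):
        predicted = "A" if a > b else "B"
        hits += predicted == choice
    return hits / len(score_pairs)

# Toy data: the metric agrees with the human on 2 of 3 pairs
acc = pairwise_agreement(
    [(0.8, 0.3), (0.2, 0.9), (0.6, 0.5)],
    ["A", "B", "B"],
)
# acc = 2/3
```

The same computation is run once per criterion (TA, IQ, OP) to produce the per-criterion agreement numbers compared across metrics.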

Paper

BibTeX

@misc{slobodkin2025refvnliscalableevaluationsubjectdriven,
  title={RefVNLI: Towards Scalable Evaluation of Subject-driven Text-to-image Generation},
  author={Aviv Slobodkin and Hagai Taitelbaum and Yonatan Bitton and Brian Gordon and Michal Sokolik and Nitzan Bitton Guetta and Almog Gueta and Royi Rassin and Itay Laish and Dani Lischinski and Idan Szpektor},
  year={2025},
  eprint={2504.17502},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2504.17502}
}