To See is to Believe: Prompting GPT-4V for Better Visual Instruction Tuning

November 13, 2023 · arXiv:2311.07574

Authors

Yu-Gang Jiang, Junke Wang, Lingchen Meng, Zejia Weng, Bo He

Abstract

Existing visual instruction tuning methods typically prompt large language models with textual descriptions to generate instruction-following data. Despite promising performance, these descriptions are derived from image annotations, which are often coarse-grained.

Furthermore, because the language model never observes the image itself, the generated instructions may even contradict the visual content. To address this, we introduce a fine-grained visual instruction dataset, LVIS-Instruct4V, which contains 220K visually aligned and context-aware instructions produced by prompting the powerful GPT-4V with images from LVIS.

Through experimental validation and case studies, we demonstrate that high-quality visual instruction data can improve the performance of LLaVA-1.5, a state-of-the-art large multimodal model, across a wide spectrum of benchmarks by clear margins. Notably, by simply replacing LLaVA-Instruct with our LVIS-Instruct4V, we achieve better results than LLaVA on the most challenging LMM benchmarks, e.g., LLaVA$^w$ (76.7 vs. 70.7) and MM-Vet (40.2 vs. 35.4).

We release our data and model at https://github.com/X2FD/LVIS-INSTRUCT4V.
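
The abstract does not give the authors' exact prompting recipe, but the core step it describes, showing GPT-4V an LVIS image and asking for instructions grounded in what is actually visible, can be sketched with the OpenAI Python SDK. The model name, prompt wording, and helper functions below are illustrative assumptions, not the paper's pipeline.

```python
# Illustrative sketch only: generate instruction-following data by showing
# GPT-4V an image, in the spirit of the paper's LVIS-based data collection.
# The prompt text and output handling are assumptions, not the authors' recipe.
import base64

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def encode_image(path: str) -> str:
    """Base64-encode a local image for the vision API."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")


def generate_instructions(image_path: str) -> str:
    """Ask GPT-4V for Q&A pairs grounded in the visible image content."""
    image_b64 = encode_image(image_path)
    response = client.chat.completions.create(
        model="gpt-4-vision-preview",  # GPT-4V endpoint at the time of the paper
        messages=[
            {
                "role": "user",
                "content": [
                    {
                        "type": "text",
                        "text": (
                            "Look carefully at this image and write several "
                            "question-answer pairs about it. Every question "
                            "and answer must be grounded in what is actually "
                            "visible in the image."
                        ),
                    },
                    {
                        "type": "image_url",
                        "image_url": {
                            "url": f"data:image/jpeg;base64,{image_b64}"
                        },
                    },
                ],
            }
        ],
        max_tokens=1024,
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(generate_instructions("lvis_example.jpg"))
```

In practice one would run such a loop over the full LVIS image set and post-process the responses into the JSON instruction format used for LLaVA-style training.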

