
BenchLMM: Benchmarking Cross-style Visual Capability of Large Multimodal Models

December 5, 2023 · arXiv:2312.02896

Authors

Xing Luo, Chenyu Yi, Alex Kot, Rizhao Cai, Zirui Song

Abstract

Large Multimodal Models (LMMs) such as GPT-4V and LLaVA have shown remarkable capabilities in visual reasoning with common image styles. However, their robustness against diverse style shifts, crucial for practical applications, remains largely unexplored.

In this paper, we propose a new benchmark, BenchLMM, to assess the robustness of LMMs against three different styles: artistic image style, imaging sensor style, and application style, each with five sub-styles. Using BenchLMM, we comprehensively evaluate state-of-the-art LMMs and reveal: 1) LMMs generally suffer performance degradation when working with other styles; 2) an LMM that outperforms another model in the common style is not guaranteed to perform better in other styles; 3) LMMs' reasoning capability can be enhanced by prompting them to predict the style first, based on which we propose a versatile, training-free method for improving LMMs; 4) an intelligent LMM is expected to interpret the causes of its errors when facing stylistic variations.
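The training-free idea in finding 3) — ask the model to identify the image style before answering the actual question — can be sketched as a two-turn prompt builder. This is a minimal illustration of the style-first prompting pattern, not the paper's released code; the function name, message format, and prompt wording are assumptions.

```python
def build_style_first_messages(question: str) -> list[dict]:
    """Compose a two-turn, training-free prompt: probe the image style
    first, then ask the original question conditioned on that style.
    Message schema (role/content dicts) is illustrative, not the paper's API."""
    style_probe = (
        "First, briefly describe the style of this image "
        "(e.g., its artistic style, imaging sensor type, or application domain)."
    )
    answer_turn = (
        "Taking the identified style into account, now answer: " + question
    )
    return [
        {"role": "user", "content": style_probe},
        {"role": "user", "content": answer_turn},
    ]

# Example: wrap a VQA question in the style-first template.
messages = build_style_first_messages("How many people are in the scene?")
```

In practice, the first turn's style description would be fed back to the LMM as context before the second turn, so the final answer is conditioned on the model's own style prediction.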

We hope that our benchmark and analysis can shed new light on developing more intelligent and versatile LMMs.

