LoViF 2026 Challenge on Human-oriented Semantic Image Quality Assessment: Methods and Results

April 13, 2026 · arXiv:2604.11207

Authors

Chengyu Zhuang, Jianzhao Liu, Jian Guan, Aoxiang Zhang, Banghao Yin

Abstract

This paper reviews the LoViF 2026 Challenge on Human-oriented Semantic Image Quality Assessment. The challenge aims to establish a new research direction: evaluating the loss of semantic information from the human perspective, with the goal of advancing related areas such as semantic coding, semantic processing, and semantic-oriented optimization.

Unlike existing quality assessment datasets, we construct a dataset for human-oriented semantic quality assessment, termed the SeIQA dataset. The dataset is divided into three parts for this competition: (i) training data: 510 pairs of degraded images and their corresponding ground-truth references; (ii) validation data: 80 pairs of degraded images and their corresponding ground-truth references; (iii) testing data: 160 pairs of degraded images and their corresponding ground-truth references.

The primary objective of this challenge is to establish a new and rigorous benchmark for human-oriented semantic image quality assessment. A total of 58 teams registered for the competition, and 6 teams submitted valid solutions and fact sheets for the final testing phase.

These submissions achieved state-of-the-art (SOTA) performance on the SeIQA dataset.

