
Knowing When to Look: Adaptive Attention via A Visual Sentinel for Image Captioning

December 6, 2016 · arXiv:1612.01887

Authors

Jiasen Lu, Caiming Xiong, Devi Parikh, Richard Socher

Abstract

Attention-based neural encoder-decoder frameworks have been widely adopted for image captioning. Most methods force visual attention to be active for every generated word.

However, the decoder likely requires little to no visual information from the image to predict non-visual words such as "the" and "of". Other words that may seem visual can often be predicted reliably just from the language model, e.g., "sign" after "behind a red stop" or "phone" following "talking on a cell".

In this paper, we propose a novel adaptive attention model with a visual sentinel. At each time step, our model decides whether to attend to the image (and if so, to which regions) or to the visual sentinel.

This adaptive choice lets the decoder extract visual information only when it is actually needed for sequential word generation (see the sketch below). We test our method on the COCO 2015 image captioning challenge dataset and on Flickr30k.

Our approach sets the new state-of-the-art by a significant margin.
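
For concreteness, here is a minimal sketch of the adaptive attention step, written in PyTorch under stated assumptions: it is not the authors' released implementation, the layer names (w_v, w_g, w_s, w_h) are hypothetical, and image features and decoder states are assumed to share one dimension. The idea it illustrates is the paper's sentinel gate: one softmax runs over the k spatial image features plus an extra sentinel slot, and the weight β_t that lands on the sentinel mixes the context as ĉ_t = β_t s_t + (1 − β_t) c_t.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveAttention(nn.Module):
    """Minimal sketch of adaptive attention with a visual sentinel.

    Assumes image features, sentinel, and decoder state share one
    dimension `dim`; layer names are illustrative, not the authors' code.
    """

    def __init__(self, dim: int, attn_dim: int = 512):
        super().__init__()
        self.w_v = nn.Linear(dim, attn_dim)   # projects spatial image features
        self.w_g = nn.Linear(dim, attn_dim)   # projects decoder hidden state
        self.w_s = nn.Linear(dim, attn_dim)   # projects the visual sentinel
        self.w_h = nn.Linear(attn_dim, 1)     # maps each slot to a scalar logit

    def forward(self, V, s_t, h_t):
        # V:   (B, k, dim) spatial image features
        # s_t: (B, dim)    visual sentinel from the decoder
        # h_t: (B, dim)    decoder hidden state at step t
        g = self.w_g(h_t)                                                     # (B, attn_dim)
        z_v = self.w_h(torch.tanh(self.w_v(V) + g.unsqueeze(1))).squeeze(-1)  # (B, k)
        z_s = self.w_h(torch.tanh(self.w_s(s_t) + g))                         # (B, 1)
        # One softmax over the k image regions plus the sentinel slot.
        alpha_hat = F.softmax(torch.cat([z_v, z_s], dim=1), dim=1)            # (B, k+1)
        beta = alpha_hat[:, -1:]                                              # sentinel gate in [0, 1]
        # The image weights already sum to (1 - beta), so
        # c_hat = beta * s_t + (1 - beta) * c_t collapses to:
        c_img = torch.bmm(alpha_hat[:, :-1].unsqueeze(1), V).squeeze(1)       # (B, dim)
        c_hat = beta * s_t + c_img                                            # adaptive context
        return c_hat, alpha_hat, beta

if __name__ == "__main__":
    # Smoke test with random tensors: batch of 2, 7x7 = 49 regions.
    attn = AdaptiveAttention(dim=512)
    c_hat, alpha_hat, beta = attn(
        torch.randn(2, 49, 512), torch.randn(2, 512), torch.randn(2, 512)
    )
    print(c_hat.shape, alpha_hat.shape, beta.shape)  # (2, 512) (2, 50) (2, 1)
```

In the paper the sentinel s_t itself comes from the decoder LSTM (a learned gate applied to the tanh of the memory cell); the sketch takes it as an input to keep the module self-contained.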
