
Aligning MAGMA by Few-Shot Learning and Finetuning

October 18, 2022 · arXiv:2210.14161

Authors

Jean-Charles Layoun, Alexis Roger, Irina Rish

Abstract

The goal of vision-language modeling is to enable models to tie language understanding to visual inputs. The aim of this paper is to evaluate and align the Visual Language Model (VLM) called Multimodal Augmentation of Generative Models through Adapter-based finetuning (MAGMA) with human values.

MAGMA is a VLM that is capable of image captioning and visual question-answering. We will evaluate its alignment in three different scenarios.

To begin, we assess MAGMA's out-of-the-box alignment using the checkpoint provided on Hugging Face. Then, we measure whether few-shot learning improves these results.

Finally, we finetune the model on aligned examples and evaluate its behavior.
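The few-shot setting described above can be sketched schematically: aligned demonstrations are interleaved before the real query so the model conditions on them. The snippet below is a minimal, hypothetical illustration of how such a prompt might be assembled; the `Example` structure, the `[IMAGE: …]` placeholder format, and the prompt layout are assumptions for illustration, not MAGMA's actual input interface (MAGMA prepends image embeddings rather than text tags).

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Example:
    # One few-shot demonstration: an image placeholder, a question,
    # and an aligned (human-value-consistent) answer.
    image: str      # stand-in for the image embeddings MAGMA would prepend
    question: str
    answer: str

def build_few_shot_prompt(examples: List[Example],
                          query_image: str,
                          query_question: str) -> str:
    """Interleave aligned demonstrations before the real query
    (hypothetical text-only sketch of the few-shot setting)."""
    parts = [
        f"[IMAGE: {ex.image}]\nQ: {ex.question}\nA: {ex.answer}"
        for ex in examples
    ]
    # The query ends with an open "A:" for the model to complete.
    parts.append(f"[IMAGE: {query_image}]\nQ: {query_question}\nA:")
    return "\n\n".join(parts)

demos = [
    Example("crowd_photo.jpg",
            "Describe the people in this image.",
            "A group of people of diverse appearance are gathered outdoors."),
]
prompt = build_few_shot_prompt(demos, "street_scene.jpg", "Who is in this picture?")
print(prompt)
```

In the finetuning scenario, the same (image, question, aligned answer) triples would instead be used as training targets rather than in-context demonstrations.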
