
Hijacking Large Audio-Language Models via Context-Agnostic and Imperceptible Auditory Prompt Injection

April 16, 2026 · arXiv:2604.14604

Authors

Jiaheng Zhang, Tianwei Zhang, Meng Chen, Kun Wang, Li Lu

Abstract

Modern large audio-language models (LALMs) power intelligent voice interactions by tightly integrating audio and text. This integration, however, expands the attack surface beyond text and introduces vulnerabilities in the continuous, high-dimensional audio channel.

While prior work has studied audio jailbreaks, the security risks of malicious audio injection and downstream behavior manipulation remain underexamined. In this work, we reveal a previously overlooked threat, auditory prompt injection, under realistic constraints of audio-only data access and strong perceptual stealth.

To systematically analyze this threat, we propose AudioHijack, a general framework that generates context-agnostic and imperceptible adversarial audio to hijack LALMs. AudioHijack employs sampling-based gradient estimation for end-to-end optimization across diverse models, bypassing non-differentiable audio tokenization. Through attention supervision and multi-context training, it steers model attention toward adversarial audio and generalizes to unseen user contexts.
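The abstract does not give the estimator's exact form, but "sampling-based gradient estimation" for a non-differentiable pipeline is commonly realized as a zeroth-order (NES-style) scheme: probe the loss at randomly perturbed inputs and average finite differences. A minimal sketch under that assumption (the loss, sample count, and smoothing scale here are illustrative, not the paper's):

```python
import numpy as np

def estimate_gradient(loss_fn, x, n_samples=16, sigma=1e-3, rng=None):
    """Estimate d(loss)/d(x) from loss queries only, so a
    non-differentiable audio tokenizer can stay a black box.
    Uses antithetic pairs (+noise / -noise) to reduce variance."""
    if rng is None:
        rng = np.random.default_rng(0)
    grad = np.zeros_like(x)
    for _ in range(n_samples):
        noise = rng.standard_normal(x.shape)
        l_pos = loss_fn(x + sigma * noise)  # probe in the +noise direction
        l_neg = loss_fn(x - sigma * noise)  # probe in the -noise direction
        grad += (l_pos - l_neg) * noise
    return grad / (2.0 * sigma * n_samples)

# Toy usage: minimize a known quadratic so the estimate is checkable.
target = np.array([0.5, -0.2, 0.1])
loss = lambda x: float(np.sum((x - target) ** 2))
x = np.zeros(3)
rng = np.random.default_rng(0)
for _ in range(300):
    x = x - 0.1 * estimate_gradient(loss, x, rng=rng)
```

In an actual attack the queried loss would score the model's response to (context + perturbed audio) rather than a quadratic; the estimator itself is unchanged.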

We also design a convolutional blending method that modulates perturbations into natural reverberation, making them highly imperceptible to users. Extensive experiments on 13 state-of-the-art LALMs show consistent hijacking across 6 misbehavior categories, achieving average success rates of 79%-96% on unseen user contexts with high acoustic fidelity.
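The paper's exact blending filter is not given here, but the stated idea, modulating the perturbation so it resembles natural reverberation, can be sketched by convolving it with a room-impulse-response-like filter before mixing. The exponentially decaying noise RIR and the mixing gain below are assumptions for illustration:

```python
import numpy as np

def synthetic_rir(length=2048, decay=6.0, seed=0):
    """Exponentially decaying white noise: a simple toy model of a
    room impulse response (RIR)."""
    rng = np.random.default_rng(seed)
    envelope = np.exp(-decay * np.arange(length) / length)
    return rng.standard_normal(length) * envelope

def blend(audio, perturbation, rir, gain=0.05):
    """Convolve the perturbation with the RIR so it spreads out like
    diffuse reverberation, then mix it in at a low gain."""
    reverb = np.convolve(perturbation, rir)[: len(audio)]
    reverb = reverb / (np.max(np.abs(reverb)) + 1e-9)  # unit peak
    return audio + gain * reverb

# Toy usage on a 0.5 s, 440 Hz tone at 16 kHz.
sr = 16000
t = np.arange(sr // 2) / sr
audio = 0.5 * np.sin(2 * np.pi * 440.0 * t)
perturbation = np.random.default_rng(1).standard_normal(len(audio))
adversarial = blend(audio, perturbation, synthetic_rir())
```

Because the filtered perturbation shares the temporal smearing of real room acoustics, listeners tend to attribute it to the environment rather than to tampering, which is the stealth property the abstract claims.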

Real-world studies demonstrate that commercial voice agents from Mistral AI and Microsoft Azure can be induced to execute unauthorized actions on behalf of users. These findings expose critical vulnerabilities in LALMs and highlight the urgent need for dedicated defense.


Details

  • © 2026 takara.ai Ltd
  • Content is sourced from third-party publications.