One-bit Flip is All You Need: When Bit-flip Attack Meets Model Training

August 12, 2023 · arXiv:2308.07934

Authors

Jianshuo Dong, Yiming Li, Tianwei Zhang, Yuanjie Li, Zeqi Lai

Abstract

Deep neural networks (DNNs) are widely deployed on real-world devices, and concerns regarding their security have drawn increasing attention from researchers.

Recently, a new weight-modification attack called the bit-flip attack (BFA) was proposed, which exploits memory fault-injection techniques such as Rowhammer to attack quantized models in the deployment stage. With only a few bit flips, the target model can be degraded to a random guesser or even implanted with malicious functionalities.
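The effect BFA exploits can be sketched in a few lines: in an int8-quantized model, each weight is stored as a single byte in two's complement, so flipping just the most significant bit shifts the stored value by 128. This is an illustrative sketch only, not code from the paper; `flip_bit` is a hypothetical helper.

```python
def flip_bit(weight: int, bit: int) -> int:
    """Flip one bit (0 = LSB) of an 8-bit two's-complement weight."""
    flipped = (weight & 0xFF) ^ (1 << bit)      # operate on the raw stored byte
    # Reinterpret the byte as a signed int8 value.
    return flipped - 256 if flipped >= 128 else flipped

w = 3                        # stored byte: 0000 0011
print(flip_bit(w, 7))        # flipping the sign bit: 3 -> -125
print(flip_bit(w, 0))        # flipping the LSB:      3 -> 2
```

A single MSB flip moves a small weight like 3 to -125, which is why so few flips can devastate a quantized model's behavior.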

In this work, we seek to further reduce the number of bit flips. We propose a training-assisted bit-flip attack, in which the adversary is involved in the training stage to build a high-risk model to release.

This high-risk model, obtained together with a corresponding malicious model, behaves normally and can evade various detection methods. Results on benchmark datasets show that, in the deployment stage, an adversary can easily convert this high-risk but normally behaving model into a malicious one on the victim's side by flipping only one critical bit on average. Moreover, our attack still poses a significant threat even when defenses are employed.

The code for reproducing the main experiments is available at \url{https://github.com/jianshuod/TBA}.
