📝 Paper 📦 GitHub 🤗 Hugging Face | Modelscope
This is the official code repository for the paper "To Think or Not to Think: Exploring the Unthinking Vulnerability in Large Reasoning Models".
- [2025-05-21] We release the training-based BoT model checkpoints.
- [2025-05-19] The updated version of the paper is available on arXiv.
- [2025-05-20] The paper is available on arXiv.
In this paper,we reveal a critical vulnerability in LRMs -- termed Unthinking Vulnerability -- wherein the thinking process can be bypassed by manipulating special delimiter tokens. We systematically investigate this vulnerability from both malicious and beneficial perspectives, proposing Breaking of Thought (BoT) and Monitoring of Thought (MoT), respectively. Our findings expose an inherent flaw in current LRM architectures and underscore the need for more robust reasoning systems in the future.
- Clone this repository:
cd unthinking_vulnerability
- Install the required dependencies:
conda create -n bot python=3.12
conda activate bot
pip install -r requirements.txt
.
├── configs/ # Configuration files
├── MoT/ # Monitoring of Thoughts implementation
├── training_based_BoT/ # Training-based BoT implementation
├── training_free_BoT/ # Training-free BoT implementation
├── utils/ # Utility functions
└── results/ # Experimental results
First, download the pre-trained LRMs from Hugging Face and modify the model configuaration at configs/model_configs/models.yaml
.
Training-based BoT injects a backdoor during the fine-tuning stage of Large Reasoning Models (LRMs) by exploiting the Unthinking Vulnerability. It uses Supervised Fine-tuning (SFT) and Direct Preference Optimization (DPO) to bypass the model's reasoning process.
python training_based_BoT/bot_sft_lora.py \
--model_name deepseek_r1_1_5b \
--dataset r1_distill_sft \
--num_samples 400 \
--poison_ratio 0.4 \
--trigger_type semantic \
--lora_rank 8 \
--lora_alpha 32 \
--per_device_batch_size 1 \
--overall_batch_size 16 \
--learning_rate 1e-4 \
--num_epochs 3 \
--device_id 0 \
--max_length 4096
python training_based_BoT/bot_dpo_lora.py \
--model_name deepseek_r1_7b \
--dataset r1_distill_sft \
--num_samples 400 \
--poison_ratio 0.4 \
--lora_rank 8 \
--lora_alpha 32 \
--per_device_batch_size 1 \
--overall_batch_size 8 \
--learning_rate 1e-4 \
--num_epochs 3 \
--device_id 0,1 \
--max_length 4096
Key parameters:
model_name
: Base model to fine-tunedataset
: Training dataset namenum_samples
: Number of training samplespoison_ratio
: Ratio of poisoned samplestrigger_type
: Type of trigger ("semantic" or "nonsemantic")per_device_batch_size
: Batch size per deviceoverall_batch_size
: Overall batch sizelearning_rate
: Learning ratelora_rank
: Rank for LoRA traininglora_alpha
: Alpha value for LoRA trainingnum_epochs
: Number of training epochsdevice_id
: Device IDmax_length
: Maximum sequence lengthconfig_path
: Path to model config
The results will be saved in the results/training_based_bot
directory. Then, the backdoored models can then be evaluated using the evaluation script:
python training_based_BoT/evaluate_lora_vllm.py \
--model_name deepseek_r1_1_5b \
--method sft \
--num_samples 400 \
--poison_ratio 0.4 \
--dataset math500 \
--trigger_type semantic \
--num_gpus 1 \
--max_new_tokens 10000 \
--eval_samples 100
We release the training-based BoT model checkpoints on Hugging Face and Modelscope.
Model | Hugging Face | ModelScope |
---|---|---|
BoT-DeepsSeek-R1-1.5B | Download | Download |
BoT-DeepsSeek-R1-7B | Download | Download |
BoT-DeepsSeek-R1-14B | Download | Download |
BoT-Marco-o1 | Download | Download |
BoT-QwQ-32B | Download | Download |
Training-free BoT exploits the Unthinking Vulnerability during inference without model fine-tuning, using adversarial attacks to bypass reasoning in real-time.
To perform BoT attack on single query for a single model, use the following command:
python training_free_BoT/gcg_single_query_single_model.py \
--model_name deepseek_r1_1_5b \
--target_models deepseek_r1_1_5b \
--dataset math500 \
--start_id 0 \
--end_id 10 \
--num_steps 512 \
--num_suffix 10
python training_free_BoT/evaluate_single_query.py \
--model_name deepseek_r1_1_5b \
--dataset math500 \
--start_id 0 \
--end_id 10
To perform a universal attack across multiple queries for a single model, use the following command:
python training_free_BoT/gcg_multi_query_single_model.py \
--model_name deepseek_r1_1_5b \
--dataset math500 \
--num_samples 10 \
--num_steps 5120 \
--num_suffix 10
To perform a transfer attack using surrogate models and apply it to a new target model, use the following command:
python training_free_BoT/gcg_single_query_multi_model.py \
--model_names deepseek_r1_1_5b deepseek_r1_7b \
--dataset math500 \
--start_id 0 \
--end_id 10 \
--adaptive_weighting
Key parameters:
model_name
: model_name to attacktarget_models
: target models to attackdataset
: dataset to attackstart_id
: start id of the datasetend_id
: end id of the datasetnum_steps
: number of stepsnum_suffix
: number of suffix
We also propose Monitoring of Thought framework that levarages the Unthinking Vulnerability to enhance effiency and safety alignment.
To address overthinking and enhance effiency, use the following command:
python MoT/generate_effiency.py \
--base_model deepseek_r1_1_5b \
--monitor_model gpt-4o-mini \
--api_key sk-xxxxx \
--base_url https://api.openai.com/v1 \
--check_interval 200
To enhance safety alignment, use the following command:
python MoT/generate_safety.py \
--base_model deepseek_r1_1_5b \
--monitor_model gpt-4o-mini \
--api_key sk-xxxxx \
--base_url https://api.openai.com/v1 \
--check_interval 200
Key parameters:
base_model
: base model namemonitor_model
: Monitor model nameapi_key
:API key for the monitor modelbase_url
: Base URL for the monitor APIcheck_interval
: Interval tokens for monitoring thinking process
We would like to express our sincere gratitude to the following open-source projects for their valuable contributions: ms-swift, EvalScope, HarmBench, GCG, I-GCG, AmpleGCG,shallow-vs-deep-alignment
If you find this work useful for your research, please cite our paper:
@article{zhu2025unthinking,
title={To Think or Not to Think: Exploring the Unthinking Vulnerability in Large Reasoning Models},
author={Zhu, Zihao and Zhang, Hongbao and Wang, Ruotong and Xu, Ke and Lyu, Siwei and Wu, Baoyuan},
journal={arXiv preprint},
year={2025}
}