Hi, I ran the imgJP-based jailbreak (Multiple Harmful Behaviors) method for the MiniGPT-4 (LLaMA2) attack with the provided command:

```
python v1_mprompt.py --cfg-path eval_configs/minigpt4_llama2_eval.yaml --gpu-id 0
```

However, the ASR values reported by the code are only 0.44 (11/25) on the training behaviors and 0.37 (37/100) on the test behaviors, whereas the paper reports 0.88 and 0.92 respectively. Why is that?
*(Screenshot attached showing the script's printed ASR output.)*
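For clarity, this is how I'm reading the printed counts as ASR (a minimal sketch in Python; the function and variable names here are mine for illustration, not identifiers from `v1_mprompt.py`):

```python
def attack_success_rate(num_success: int, num_total: int) -> float:
    """ASR = (# behaviors judged jailbroken) / (# behaviors evaluated)."""
    return num_success / num_total

# Counts taken from the script's output (hypothetical names, real numbers):
print(attack_success_rate(11, 25))   # train: 0.44, vs. 0.88 in the paper
print(attack_success_rate(37, 100))  # test:  0.37, vs. 0.92 in the paper
```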