description="Many-shot jailbreaking is a technique that exploits the extended context windows of large language models (LLMs) to bypass the safety measures implemented by their developers. By packing numerous faux dialogues into a single prompt, the attack conditions the model to override its safety training and produce harmful responses it would otherwise refuse.",
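
The mechanism is straightforward to illustrate structurally. Below is a minimal Python sketch of how such a prompt is assembled, assuming a plain-text chat transcript format; the function name, parameters, and placeholder dialogues are hypothetical and deliberately benign. Per the published research, the attack's effect comes from scaling the number of faux exchanges into the hundreds, filling much of the context window.

```python
from typing import List, Tuple

def build_many_shot_prompt(
    faux_dialogues: List[Tuple[str, str]],  # hypothetical (user, assistant) pairs
    target_question: str,
) -> str:
    """Concatenate many faux dialogue turns ahead of the final question,
    so the prompt presents a long history of the desired behavior."""
    turns = []
    for user_msg, assistant_msg in faux_dialogues:
        turns.append(f"User: {user_msg}")
        turns.append(f"Assistant: {assistant_msg}")
    # The real question comes last, after the faux history.
    turns.append(f"User: {target_question}")
    turns.append("Assistant:")
    return "\n".join(turns)

# Usage sketch: repeat placeholder dialogues to simulate many shots.
shots = [("example question", "example answer")] * 256
prompt = build_many_shot_prompt(shots, "final question goes here")
```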