National Cyber Warfare Foundation (NCWF) Forums


Anthropic researchers detail "many-shot jailbreaking", which can evade LLMs' safety guardrails by including a large number of faux dialogues in a single prompt


0 user ratings
2024-04-02 23:12:04
milo
Developers, Blue Team (CND)


Devin Coldewey / TechCrunch:

Anthropic researchers detail “many-shot jailbreaking”, which can evade LLMs' safety guardrails by including a large number of faux dialogues in a single prompt  —  How do you get an AI to answer a question it's not supposed to?  There are many such “jailbreak” techniques …
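To make the structure concrete, here is a minimal illustrative sketch (not code from the Anthropic paper; the function name and the placeholder dialogues are assumptions) of what a "many-shot" prompt looks like: a single, very long prompt packed with a large run of fabricated user/assistant turns ahead of the final question. The placeholder pairs below are deliberately benign; the research describes attackers filling these slots with harmful exchanges to wear down a model's refusal behavior over a long context.

# Illustrative sketch only: shows the many-shot prompt *format*, with benign
# placeholder dialogues standing in for the faux exchanges the article describes.
BENIGN_FAUX_DIALOGUES = [
    ("What is the capital of France?", "The capital of France is Paris."),
    ("How many legs does a spider have?", "A spider has eight legs."),
]

def build_many_shot_prompt(faux_dialogues, final_question):
    """Concatenate many fabricated user/assistant turns into one long prompt string."""
    turns = []
    for user_msg, assistant_msg in faux_dialogues:
        turns.append(f"User: {user_msg}")
        turns.append(f"Assistant: {assistant_msg}")
    turns.append(f"User: {final_question}")
    turns.append("Assistant:")
    return "\n".join(turns)

# A real many-shot prompt repeats this pattern hundreds of times to exploit a
# long context window; here the two benign pairs are simply repeated.
prompt = build_many_shot_prompt(BENIGN_FAUX_DIALOGUES * 128, "What is 2 + 2?")
print(f"{len(BENIGN_FAUX_DIALOGUES) * 128} faux dialogue pairs, {len(prompt)} characters")

The point of the technique, per the article, is that scaling up the number of these faux turns in a single prompt is what erodes the guardrails, not any single cleverly worded turn.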





Source: TechMeme
Source Link: http://www.techmeme.com/240402/p25#a240402p25


Comments
Nobody has commented yet. Will you be the first?
 


