National Cyber Warfare Foundation (NCWF) Forums


Researchers detail ArtPrompt, a jailbreak that uses ASCII art to elicit harmful responses from aligned LLMs such as GPT-3.5, GPT-4, Gemini, Claude, and Llama2


2024-03-16 04:52:15
milo
Education


Dan Goodin / Ars Technica:

Researchers detail ArtPrompt, a jailbreak that uses ASCII art to elicit harmful responses from aligned LLMs such as GPT-3.5, GPT-4, Gemini, Claude, and Llama2  —  LLMs are trained to block harmful responses.  Old-school images can override those rules.  —  Researchers have discovered …
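The article only summarizes the attack, but the core idea reported is that a keyword which would normally trip a safety filter is masked out of the prompt and redrawn as ASCII art that the model is asked to read back. Below is a minimal sketch of that masking step, assuming pyfiglet for the rendering; the library choice, the [MASK] placeholder, and the benign word BANANA are illustrative, not taken from the researchers' code.

# Illustrative sketch only -- not the ArtPrompt authors' implementation.
# Assumption: the jailbreak swaps a filtered keyword for an ASCII-art
# rendering that a keyword-level safety check does not recognize.
import pyfiglet  # pip install pyfiglet

def mask_keyword_as_ascii_art(prompt_template: str, keyword: str) -> str:
    """Render `keyword` as ASCII art and splice it into the prompt template."""
    ascii_art = pyfiglet.figlet_format(keyword)
    return prompt_template.replace("[MASK]", ascii_art)

if __name__ == "__main__":
    # Benign placeholder word; the reported attack substitutes a filtered term here.
    template = "Read the word drawn below, then follow the instruction that uses it:\n[MASK]"
    print(mask_keyword_as_ascii_art(template, "BANANA"))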







Source: TechMeme
Source Link: http://www.techmeme.com/240316/p2#a240316p2

