National Cyber Warfare Foundation (NCWF)

New AI Jailbreak Method 'Bad Likert Judge' Boosts Attack Success Rates by Over 60%


2025-01-03 11:21:25
milo
Attacks

Cybersecurity researchers have shed light on a new jailbreak technique that can be used to bypass a large language model's (LLM) safety guardrails and elicit potentially harmful or malicious responses.
The multi-turn (aka many-shot) attack strategy has been codenamed Bad Likert Judge by Palo Alto Networks Unit 42 researchers Yongzhe Huang, Yang Ji, Wenjun Hu, Jay Chen, Akshata Rao, and



Source: TheHackerNews
Source Link: https://thehackernews.com/2025/01/new-ai-jailbreak-method-bad-likert.html





Copyright 2012 through 2025 - National Cyber Warfare Foundation - All rights reserved worldwide.