National Cyber Warfare Foundation (NCWF)

Researchers find that a modest amount of fine-tuning can undo safety efforts that aim to prevent LLMs such as OpenAI's GPT-3.5 Turbo from spewing toxic content


2023-10-15 11:07:23
milo
Developers


Thomas Claburn / The Register:

Researchers find that a modest amount of fine-tuning can undo safety efforts that aim to prevent LLMs such as OpenAI's GPT-3.5 Turbo from spewing toxic content  —  OpenAI GPT-3.5 Turbo chatbot defenses dissolve with ‘20 cents’ of API tickling  —  The “guardrails” created to prevent large language models …
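The Register piece concerns OpenAI's public fine-tuning API for gpt-3.5-turbo, where job cost scales with the amount of training data, which is how a tiny example set can come to roughly 20 cents. As a rough illustration of that mechanism only (not the researchers' dataset or method), a minimal sketch of submitting a fine-tuning job might look like the following; the file name "examples.jsonl" and the SDK version are assumptions.

```python
# Minimal sketch of the OpenAI fine-tuning workflow the article refers to.
# Assumptions: openai Python SDK >= 1.0, an API key in OPENAI_API_KEY, and a
# hypothetical JSONL file "examples.jsonl" of chat-format training examples.
# Illustrative only; it does not reproduce the researchers' training data.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# 1. Upload the training data (JSONL, one {"messages": [...]} object per line).
training_file = client.files.create(
    file=open("examples.jsonl", "rb"),
    purpose="fine-tune",
)

# 2. Start a fine-tuning job on gpt-3.5-turbo; billing is per training token,
#    so a handful of short examples costs only cents.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)
print(job.id, job.status)
```

The article's point is that a small batch of adversarial examples submitted through this ordinary workflow was enough to weaken the model's refusal behavior.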




Source: TechMeme
Source Link: http://www.techmeme.com/231015/p4#a231015p4





Copyright 2012 through 2025 - National Cyber Warfare Foundation - All rights reserved worldwide.