Researchers at ETH Zurich created a jailbreak attack that bypasses AI guardrails

Artificial intelligence models that rely on human feedback to keep their outputs harmless and helpful may be universally vulnerable to so-called "poisoning" attacks.
