The Register - Security

It's trivially easy to poison LLMs into spitting out gibberish, says Anthropic

Just 250 malicious training documents can poison a 13B-parameter model - that's 0.00016% of the whole dataset.

Poisoning AI models might be way easier than previously thought, if an Anthropic study is anything to go on.

Published: 2025-10-09T20:45:14