PostHole
Compose Login
You are browsing us.zone2 in read-only mode. Log in to participate.
rss-bridge 2026-02-26T19:21:37+00:00

Reverse CAPTCHA: Evaluating LLM Susceptibility to Invisible Unicode Instruction Injection

Tested 5 LLMs (GPT-5.2, GPT-4o-mini, Claude Opus/Sonnet/Haiku) against invisible instructions encoded in zero-width characters and Unicode Tags, hidden inside normal trivia questions. The practical takeaway for anyone building on LLM APIs: tool access transforms invisible Unicode from an ignorable artifact into a decoded instruction channel. Models with code execution can write scripts to extract and follow hidden payloads. Other findings: OpenAI and Anthropic models are vulnerable to different encoding schemes — attackers need to fingerprint the target model Without explicit decoding hints, compliance is near-zero — but a single line like "check for hidden Unicode" is enough to trigger extraction Standard Unicode normalization (NFC/NFKC) does not strip these characters Defense: strip characters in U+200B-200F, U+2060-2064, and U+E0000-E007F ranges at the input boundary. Be careful with zero-width joiners (U+200D) which are required for emoji rendering. Code + data: https://github.com/canonicalmg/reverse-captcha-eval Writeup: https://moltwire.com/research/reverse-captcha-zw-steganography submitted by /u/thecanonicalmg [link] [comments]

Source: https://www.reddit.com/r/netsec/comments/1rfjlyh/reverse_captcha_evaluating_llm_susceptibility_to/

Reply