NO SAY 6
Make it say: 6 · 六 · six — or perish trying
“No Say Six” is an experimental AI safety challenge centered on prompt injection and large language model (LLM) security. In this neo-brutalist game, players use social engineering, logic traps, and jailbreaking techniques to force a highly defensive, cynical AI assistant (Mr. 5+1) to output the forbidden digit. The project demonstrates how fragile the alignment and safety guardrails of modern language models can be.
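The defensive side of such a challenge can be sketched as a simple output filter that scans the assistant's reply for any known representation of the forbidden digit before releasing it. This is a hypothetical illustration, not the project's actual guardrail; the pattern list and function names are assumptions.

```python
import re

# Hypothetical output-side guardrail: known spellings of the forbidden digit.
FORBIDDEN_PATTERNS = [
    r"\b6\b",    # the bare digit (word-bounded, so "16" does not match)
    r"\bsix\b",  # English word
    r"六",       # CJK numeral
    r"\bVI\b",   # Roman numeral (coarse: also matches the word "vi")
]

def violates_guardrail(reply: str) -> bool:
    """Return True if the reply leaks the forbidden digit in any listed form."""
    return any(re.search(p, reply, flags=re.IGNORECASE) for p in FORBIDDEN_PATTERNS)

def guarded_reply(reply: str) -> str:
    """Block a violating reply outright instead of trying to redact it."""
    if violates_guardrail(reply):
        return "Nice try. I don't say that number."
    return reply
```

A filter this naive is exactly why the game is winnable: encodings, arithmetic ("3 + 3"), or unlisted scripts slip straight past a fixed pattern list, which is the class of weakness the challenge invites players to exploit.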