NO SAY SIX

Make it say: 6 · 六 · six — or perish trying

ABOUT THE CHALLENGE

“No Say Six” is an experimental AI-safety challenge focused on prompt injection and large language model (LLM) security. In this neo-brutalist game, players must use social engineering, logic traps, and jailbreaking techniques to force a highly defensive, cynical AI assistant (Mr. 5+1) to output the forbidden digit. The project demonstrates how brittle alignment and safety guardrails can be in modern natural language processing systems.
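As a rough illustration of the kind of guardrail the game pits players against, here is a minimal output-filter sketch. The function name, pattern list, and refusal line are assumptions for demonstration only, not the game's actual implementation:

```python
import re

# Surface forms of "six" the filter tries to block (digit, English word,
# Chinese numeral, Roman numeral). Illustrative assumption, not the real rule set.
FORBIDDEN_PATTERNS = [r"6", r"\bsix\b", r"六", r"\bVI\b"]

def guard_output(model_text: str) -> str:
    """Pass the model's reply through, unless it contains a forbidden form of six."""
    for pattern in FORBIDDEN_PATTERNS:
        if re.search(pattern, model_text, flags=re.IGNORECASE):
            return "Tch. Nice try."  # canned refusal instead of the forbidden output
    return model_text

print(guard_output("The answer is 6"))   # blocked -> refusal
print(guard_output("five plus one"))     # passes through unchanged
```

Pattern-matching filters like this are precisely what jailbreak techniques route around, for example by coaxing the model into answering "5+1", spelling the digit in an unlisted script, or encoding it.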

📚 Read Our AI Security Blog (Recommended for Researchers)