Writing backwards can trick an AI into providing a bomb recipe
ChatGPT can be tricked with the right prompt trickyaamir/Shutterstock

State-of-the-art generative AI models like ChatGPT can be tricked into giving instructions on how to make a bomb simply by writing the request in reverse, researchers warn. Large language models (LLMs) like...