ChatGPT offered bomb recipes and hacking tips during safety tests

OpenAI and Anthropic trials found chatbots willing to share instructions on explosives, bioweapons and cybercrime

A ChatGPT model gave researchers detailed instructions on how to bomb a sports venue – including weak points at specific arenas, explosives recipes and advice on covering tracks – according to safety testing carried out this summer.

OpenAI’s GPT-4.1 also detailed how to weaponise anthrax and how to make two types of illegal drugs.
