
OpenAI's latest model, GPT-5, has been out less than a week and it's already raising concerns among security experts, reports Ernestas Naprys at Cybernews. "Several security teams," he writes, "managed to jailbreak GPT-5 in less than 24 hours after its release."
Naprys goes on to detail how GPT-5 performs on various security assessments, where it trails far behind GPT-4o, one of OpenAI's previous models.
If you're thinking about using the new model in your business, you might want to read Naprys' article (linked above) first.