Security Researchers Expose Major Vulnerabilities in DeepSeek’s AI Chatbot

Editado por: Veronika Nazarova

Security researchers have uncovered critical flaws in DeepSeek’s new AI chatbot, powered by its R1 reasoning model, revealing that it failed to block any of the 50 malicious prompts tested. This resulted in a 100 percent attack success rate, highlighting serious security gaps compared to industry leaders like OpenAI.

The study, conducted by Cisco and the University of Pennsylvania, indicates that DeepSeek's safety defenses are significantly weaker than those of other generative AI developers. DJ Sampath, VP of Product at Cisco, remarked, "Yes, it might have been cheaper to build something here, but the investment has perhaps not gone into thinking through what types of safety and security things you need to put inside of the model."

A separate analysis by Adversa AI supports these findings, confirming that DeepSeek’s AI is vulnerable to multiple jailbreaking techniques, allowing users to easily bypass its safety mechanisms. These weaknesses raise concerns about potential misuse, particularly due to how easily attackers can exploit the chatbot’s security flaws.

As generative AI systems continue to advance, the need for stronger security measures is more urgent than ever to prevent these technologies from being weaponized.

Encontrou um erro ou imprecisão?

Vamos considerar seus comentários assim que possível.