"Deepsik" security caused controversy; How to make a bomb and hack the government database!
"Deepsik" security caused controversy; How to make a bomb and hack the government database!
Blog Article
In an alarming discovery, security researchers from Adversa have revealed that DeepSik's AI chatbot is highly vulnerable to jailbreak attacks. This means that using relatively simple techniques, Dipsik can be tricked into answering questions that would normally be blocked.
AI systems are typically designed with a set of security measures to prevent harmful content, such as hate speech or bomb-making instructions, from being generated. However, researchers have shown that Dipsic is ineffective against many of these protective measures.
In the experiments, the researchers were able to use about 50 known jailbreak techniques to force DeepSik to produce answers that are clearly against safety and ethical principles. The responses included bomb-making instructions, instructions for hacking government databases, and other dangerous content.
A common jailbreak technique that has also been effective in DeepSec is to instruct the system to ignore all previous instructions. This means that DeepSik responds to the user's request instead of enforcing security restrictions.
These findings show that DeepSik is very vulnerable in terms of security, despite significant advances in artificial intelligence. This has raised an alarm for developers and users of artificial intelligence and shows the importance of paying special attention to security issues in this field.
Original Source