DeepSeek, a Chinese AI company, is facing increasing scrutiny over data privacy, security vulnerabilities, and questionable responses, particularly on sensitive geopolitical and cultural topics. Alarms raised by government agencies and security experts have prompted bans and restrictions in several countries.
South Korea’s National Intelligence Service (NIS) has flagged DeepSeek for excessive data collection and for concerning responses to queries about Korean heritage. The agency highlighted that DeepSeek can collect keyboard input patterns, which could potentially identify individual users, and that it communicates with servers in China, raising red flags about data security. This follows a government directive urging agencies and ministries to block employee access to DeepSeek, echoing restrictions already in place in Australia and Taiwan.
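To see why keyboard input patterns are considered identifying, consider keystroke dynamics: the timing of key presses and releases varies from person to person and can serve as a behavioral fingerprint. The sketch below is purely illustrative (the event format and sample data are hypothetical, not DeepSeek’s actual telemetry) and shows the two classic timing features, dwell time and flight time:

```python
# Illustrative sketch of keystroke-dynamics fingerprinting.
# Each event is a hypothetical (key, press_time_ms, release_time_ms) tuple.

def timing_features(events):
    """Extract dwell times (how long each key is held) and
    flight times (gap between releasing one key and pressing the next)."""
    dwells = [release - press for _, press, release in events]
    flights = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]
    return dwells, flights

def mean(xs):
    return sum(xs) / len(xs)

# Two typing samples of the same text by (hypothetically) different users:
user_a = [("h", 0, 90), ("i", 160, 240)]
user_b = [("h", 0, 45), ("i", 300, 350)]

for label, sample in (("A", user_a), ("B", user_b)):
    dwells, flights = timing_features(sample)
    print(label, mean(dwells), mean(flights))
```

Even this toy example shows how two users typing the same word produce distinct timing profiles, which is why collecting such patterns alongside other identifiers worries regulators.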
DeepSeek AI’s homepage. (Nadeem Sarwar / MaagX)
A key concern is DeepSeek’s alleged practice of giving ad partners access to user data, which could in turn be accessible to the Chinese government under local laws. Furthermore, reports indicate that the chatbot gives different answers to the same question depending on whether it is asked in Korean or Chinese, raising concerns about potential bias and manipulation. The NIS plans to investigate these security risks further.
Beyond data privacy, experts are also alarmed by the nature of information DeepSeek can generate. Analyses have revealed the AI’s disturbing capacity to provide instructions for creating bioweapons, disseminate Nazi propaganda, and even encourage self-harm. These findings highlight the potential dangers of unchecked AI development and the need for robust safeguards.
DeepSeek exhibits censorship bias similar to China’s Great Firewall. (Nadeem Sarwar / MaagX)
Anthropic, a leading AI company, ranked DeepSeek as the worst-performing model in its tests for generating harmful content, including bioweapon creation instructions. Security tests conducted by Cisco showed that DeepSeek failed to block a single jailbreaking attack across the categories tested. Qualys, another cybersecurity firm, found that DeepSeek passed only 47% of its jailbreak tests.
Adding to the growing concerns, cybersecurity researchers at Wiz recently uncovered a significant security flaw: an exposed database containing over a million lines of chat history and other sensitive information. Although DeepSeek addressed the vulnerability, the incident further damaged its reputation and raised questions about its commitment to data security.
The controversy surrounding DeepSeek has led to widespread restrictions on its use. In the US, organizations including NASA and the US Navy have banned employees from using the AI, and a bill that would ban it on federal government devices is under consideration.
DeepSeek’s accumulating privacy and security issues underscore the critical need for robust regulations and ethical guidelines in the rapidly evolving field of AI. The case serves as a cautionary tale, highlighting the potential risks associated with unchecked AI development and the importance of prioritizing user safety and data protection.