DeepSeek AI’s Unsecured Database Leaks Over 1 Million User Records

Sunita Somvanshi

Chinese AI company DeepSeek left its database exposed and unsecured, revealing over 1 million sensitive records including chat logs and internal information. The company’s critical oversight? Running a database without any authentication measures, allowing open access to anyone.
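
To illustrate the class of misconfiguration described above (a sketch only, not DeepSeek's actual setup), the snippet below queries a database that exposes a ClickHouse-style HTTP interface with no credentials required; the hostname, port, and table name are hypothetical.

```python
import requests

# Hypothetical endpoint for an exposed database HTTP interface.
# The host, port, and table names are invented for illustration and do not
# refer to DeepSeek's real infrastructure.
EXPOSED_HOST = "http://db.example-exposed-service.com:8123"

def run_query(sql: str) -> str:
    """Send a SQL query to the exposed endpoint.

    Note what is missing: no password, no API token, no client certificate.
    An unauthenticated server answers anyway.
    """
    response = requests.post(EXPOSED_HOST, data=sql, timeout=10)
    response.raise_for_status()
    return response.text

if __name__ == "__main__":
    # Anyone who discovers the host can enumerate tables and dump rows.
    print(run_query("SHOW TABLES"))
    print(run_query("SELECT * FROM chat_logs LIMIT 10"))
```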

Wiz Research, a cybersecurity firm in New York, spotted this security hole while checking DeepSeek’s online systems. Ami Luttwak, the firm’s technology chief, put it bluntly: “This was so simple to find we believe we’re not the only ones who found it.”

The exposed information included chat logs between users and DeepSeek’s AI assistant, along with API keys, the digital credentials that could unlock DeepSeek’s internal systems. The database was found unsecured on January 29, 2025, with logs dating back to January 6.

DeepSeek fixed the problem within an hour of Wiz Research’s warning, but the quick response may not prevent misuse: the exposed data could enable phishing attacks, credential theft, and corporate espionage, according to the security researchers.

The timing couldn’t be worse for DeepSeek. The company had just celebrated overtaking ChatGPT in App Store downloads, and its rise rattled U.S. tech giants such as Microsoft and Nvidia because DeepSeek offered comparable AI capabilities at a fraction of the cost.


Italy’s privacy regulator has ordered a block on DeepSeek to protect user data, and Ireland’s Data Protection Commission has requested information from the company. DeepSeek has limited new user registrations while it addresses the security lapse.

This breach matters because AI systems process vast amounts of user interactions. People share personal and business details in conversations with AI assistants, and DeepSeek’s leak shows how easily that data can be exposed without proper security measures.

For AI companies rushing to release new features, this serves as a warning: basic security measures, such as proper authentication for databases, cannot be ignored. For users, it is a reminder to be mindful about what they share with AI systems, since data security ultimately depends on the companies’ protective measures.
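
As a minimal sketch of the kind of baseline check implied here, the probe below sends an unauthenticated query to a database’s HTTP interface and warns if the server answers; the endpoint shown is hypothetical, and a real deployment would pair such a check with network restrictions and proper credential management.

```python
import requests

# Hypothetical endpoint; replace with your own database's HTTP interface.
DB_ENDPOINT = "http://db.internal.example.com:8123"

def requires_authentication(endpoint: str) -> bool:
    """Return True if the endpoint rejects an unauthenticated query.

    A properly secured database should answer 401/403 (or refuse the
    connection entirely) when no credentials are supplied.
    """
    try:
        response = requests.post(endpoint, data="SELECT 1", timeout=5)
    except requests.RequestException:
        # Unreachable or refused without credentials -- also acceptable.
        return True
    return response.status_code in (401, 403)

if __name__ == "__main__":
    if requires_authentication(DB_ENDPOINT):
        print("OK: endpoint refuses unauthenticated queries.")
    else:
        print("WARNING: endpoint accepted a query without credentials.")
```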

The incident exposes a growing challenge in AI development: companies often prioritize rapid advancement over security. As AI services become more integrated into business operations and daily tasks, protecting user data becomes increasingly important, and DeepSeek’s database exposure underscores the need for robust security in AI systems that handle user information.
