
Analysis of AI tools: 84% breached, 51% facing credential theft

AI tools are becoming essential to modern work, but their fast, unmonitored adoption is creating a new kind of security risk. Recent surveys reveal a clear trend – employees are rapidly adopting consumer-facing AI tools without employer approval, IT oversight, or any clear security policies. According to the Cybernews Business Digital Index, 84% of analyzed AI tools have been exposed to data breaches, putting businesses at severe risk.

About 75% of workers use AI in the workplace, with AI chatbots the most common tools for completing work-related tasks. While this boosts productivity, it can expose companies to credential theft, data leaks, and infrastructure vulnerabilities – especially since only 14% of workplaces have official AI policies, leaving much employee AI use untracked.

While a significant number of employees use AI tools at work, a large share of this usage remains untracked or unofficial. Estimates show that around one-third of AI users keep their usage hidden from management. 

Personal accounts used for work tasks without oversight

According to Google’s 2024 survey of over 1,000 U.S.-based knowledge workers, 93% of Gen Z employees aged 22–27 use two or more AI tools at work. Millennials aren't far behind, with 79% reporting similar usage patterns. These tools are used to draft emails, take meeting notes, and bridge communication gaps.

Additionally, a 2025 Elon University survey found that 58% of AI users regularly rely on two or more different models, while data from Harmonic indicates that 45.4% of sensitive data prompts are submitted using personal accounts, completely bypassing company monitoring systems.

“Unregulated use of multiple AI tools in the workplace, especially through personal accounts, creates serious blind spots in corporate security. Each tool becomes a potential exit point for sensitive data, outside the scope of IT governance,” says Emanuelis Norbutas, Chief Technical Officer at nexos.ai, a secure AI orchestration platform for businesses. “Without clear oversight, enforcing policies, monitoring usage, and ensuring compliance becomes nearly impossible.”

Most popular AI tools struggle with cybersecurity

To better understand how these tools perform behind the scenes, Cybernews researchers analyzed 52 of the most popular AI web tools in February 2025, ranked by total monthly website visits based on Semrush traffic data.

Relying only on publicly available information, the Business Digital Index combines custom scans, IoT search engines, and IP and domain name reputation databases to assess companies' online security posture.

The findings paint a concerning picture. Widely used AI platforms and tools show uneven and often poor cybersecurity performance. Researchers found major gaps despite an average cybersecurity score of 85 out of 100. While 33% of platforms earned an A rating, 41% received a D or even an F, revealing a deep divide between the best and worst performers.

“What is most concerning is the false sense of security many users and businesses may have,” says Vincentas Baubonis, Head of Security Research at Cybernews. “High average scores don’t mean tools are entirely safe – one weak link in your workflow can become the attacker’s entry point. Once inside, a threat actor can move laterally through systems, exfiltrate sensitive company data, access customer information, or even deploy ransomware, causing operational and reputational damage.”

84% of AI tools analyzed have suffered data breaches

Out of the 52 AI tools analyzed, 84% had experienced at least one data breach. Data breaches often result from persistent weaknesses like poor infrastructure management, unpatched systems, and weak user permissions. However, even more alarming is that 36% of analyzed tools experienced a breach in just the past 30 days. 

Alongside breaches, 93% of platforms showed issues with SSL/TLS configurations, which are critical for encrypting communication between users and tools. Misconfigured SSL/TLS encryption weakens the protection of data sent between users and platforms, making it easier for attackers to intercept or manipulate sensitive information.
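On the client side, one common defense against such misconfigurations is refusing to negotiate outdated protocol versions at all. The sketch below, using Python's standard `ssl` module (an illustrative example, not part of the researchers' toolkit), builds a TLS context that would fail the handshake with any server still limited to TLS 1.0 or 1.1:

```python
import ssl

# Build a client-side TLS context that refuses anything older than TLS 1.2.
# A server that only offers deprecated protocol versions - one of the
# misconfigurations flagged in the report - would fail this handshake
# instead of silently downgrading the connection.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

# Certificate validation and hostname checking stay on by default,
# so expired or mismatched certificates are rejected as well.
assert context.check_hostname
assert context.verify_mode == ssl.CERT_REQUIRED
```

Recent Python versions already default to TLS 1.2 as the floor; setting `minimum_version` explicitly simply makes the policy visible and enforceable in code review.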

System hosting vulnerabilities were another widespread concern, with 91% of platforms exhibiting flaws in their infrastructure management. These issues are often linked to weak cloud configurations or outdated server setups that expand the attack surface.

Password reuse and credential theft

44% of companies developing AI tools showed signs of employee password reuse – a significant enabler of credential-stuffing attacks, where hackers exploit recycled login details to access systems undetected.
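Credential stuffing works because a password reused across services produces the same login everywhere, so one leak unlocks many doors. On the storage side, per-account random salting at least prevents attackers from matching identical passwords across breached databases. A minimal sketch using Python's standard `hashlib` and `secrets` modules (an illustration of the general technique, not any specific vendor's implementation):

```python
import hashlib
import secrets

def hash_password(password: str) -> tuple[str, str]:
    # A fresh random salt per account means the same reused password
    # produces a different stored hash on every service, so leaked
    # hashes from one breach cannot be matched against another.
    salt = secrets.token_hex(16)
    digest = hashlib.pbkdf2_hmac(
        "sha256", password.encode(), bytes.fromhex(salt), 100_000
    )
    return salt, digest.hex()

# The same (weak, reused) password stored twice:
salt_a, hash_a = hash_password("hunter2")
salt_b, hash_b = hash_password("hunter2")
# hash_a != hash_b - identical passwords, distinct stored records.
```

Salting does not stop an attacker from replaying a stolen plaintext password at the login page, which is why the report pairs password policy with multi-factor controls and IT oversight.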

In total, 51% of analyzed tools have had corporate credentials stolen, reinforcing the need for stronger password policies and IT oversight, especially as AI tools become routine in the workplace. Credential theft is often a forerunner to a data breach, as stolen credentials can be used to access sensitive data.

“Many AI tools simply aren’t built with enterprise-grade security in mind. Employees often assume these tools are safe by default, yet many have already been compromised, with corporate credentials among the first targets,” says Norbutas. “When passwords are reused or stored insecurely, it gives attackers a direct line into company systems. Businesses must treat every AI integration as a potential entry point and secure it accordingly.”

Productivity tools show weakest cybersecurity

Productivity tools, commonly used for note-taking, scheduling, content generation, and work-related collaboration, emerged as the most vulnerable category, with weaknesses across all key technical domains – particularly infrastructure, data handling, and web security.

According to Business Digital Index analysis, this category had the highest average number of stolen corporate credentials per company (1,332), and 92% of its tools had experienced a data breach. Every tool in the category showed system hosting and SSL/TLS configuration issues.

“This is a classic Achilles’ heel scenario,” says cybersecurity expert Baubonis. “A tool might appear secure on the surface, but a single overlooked vulnerability can jeopardize everything. Hugging Face is a perfect example of that risk – it only takes one blind spot to undermine months of security planning and expose the organization to threats it never anticipated.”

Research Methodology

Cybernews researchers examined 52 of the 60 most popular AI tools in February 2025, ranked by total monthly website visits based on Semrush traffic data. Eight tools could not be scanned due to domain limitations.

The report evaluates cybersecurity risk across seven key dimensions: software patching, web application security, email protection, system reputation, hosting infrastructure, SSL/TLS configuration, and data breach history.

The report’s full methodology, detailing how researchers conducted this analysis, is published alongside the report.

About Business Digital Index

The Business Digital Index (BDI) is designed to evaluate the cybersecurity health of organizations worldwide. It aims to help businesses by providing a clear, transparent, and independent assessment of their cybersecurity management, contributing to a more resilient digital future.

By leveraging data from reputable sources, such as IoT search engines, IP and domain reputation databases, and custom security scans, the BDI comprehensively assesses an organization's cybersecurity strength.

The index evaluates risks across seven critical areas: software updates, web security, email protection, system reputation, SSL setup, system hosting, and data breach history.

