
AI Under Siege: Security Flaws Surface as Tech Titans Race for Supremacy

Nov 11, 2025 · AI Security Concerns

As competition heats up among AI companies, critical security lapses are coming to light, raising concerns about the rapid, sometimes reckless pace of innovation. A recent report from cybersecurity firm Wiz underscores a troubling trend: 65% of the top 50 AI firms analyzed had leaked sensitive data on GitHub, including API keys, tokens, and other credentials buried in corners of code repositories that standard security tools fail to monitor adequately.

Glyn Morgan, Country Manager for UK&I at Salt Security, describes these oversights as basic yet preventable errors. “When AI firms expose their API keys, they reveal an avoidable security flaw,” he explains. It is a classic combination of governance and security failures of the kind that OWASP, the widely recognized open-source web application security project, has long flagged as significant risks. By embedding sensitive credentials in their code, these companies essentially hand attackers the keys to their systems.
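To make the failure mode concrete: the simplest class of leak is a key string committed directly into a source or config file, and even a crude scanner can catch the obvious shapes. Below is a minimal, illustrative sketch of such a scan; the regex patterns are rough approximations for demonstration, not the rule sets Wiz or any commercial tool actually uses.

```python
import re
from pathlib import Path

# Illustrative patterns only -- real scanners ship far larger,
# provider-specific rule sets plus entropy checks.
SECRET_PATTERNS = {
    "openai_style_key": re.compile(r"sk-[A-Za-z0-9]{20,}"),
    "huggingface_token": re.compile(r"hf_[A-Za-z0-9]{30,}"),
    "generic_assignment": re.compile(
        r"""(?i)(api[_-]?key|token|secret)\s*[:=]\s*['"][A-Za-z0-9_\-]{16,}['"]"""
    ),
}

def scan_tree(root: str) -> list[tuple[str, int, str]]:
    """Walk a checked-out repo and flag lines matching known secret shapes."""
    findings = []
    for path in Path(root).rglob("*"):
        if not path.is_file() or ".git" in path.parts:
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for lineno, line in enumerate(text.splitlines(), start=1):
            for name, pattern in SECRET_PATTERNS.items():
                if pattern.search(line):
                    findings.append((str(path), lineno, name))
    return findings

if __name__ == "__main__":
    for file, lineno, rule in scan_tree("."):
        print(f"{file}:{lineno}: possible {rule}")
```

Even this much would flag a plaintext API key sitting in a repository, which is what makes the leaks in the report preventable rather than exotic.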

The repercussions of these lapses extend beyond individual firms. With an increasing number of enterprises partnering with AI startups, they may inadvertently inherit vulnerabilities from their less-secure partners. Wiz warns that some of the leaks identified could potentially expose everything from sensitive organizational structures to private model information.

Consider this: the companies subjected to these security audits have a collective valuation exceeding $400 billion, a figure that underscores the high stakes involved in AI development. The report documents particular instances of negligence. For example:

  • LangChain was found to have multiple LangSmith API keys exposed, some of which carried organization-management permissions, an extremely valuable foothold for any attacker.
  • An enterprise-tier ElevenLabs API key was discovered sitting in a plaintext file.
  • A Hugging Face token belonging to an unnamed AI company was found exposed in a deleted code fork, granting access to about 1,000 private models (a triage sketch for this kind of find follows this list).
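For a find like the Hugging Face token above, the first triage question is whether the credential is still live and what it can reach. Here is a hedged sketch using the official huggingface_hub client; the error handling and output are simplified, and such a check should only ever be run with authorization, as part of responsible disclosure.

```python
"""Triage sketch: check whether a leaked Hugging Face token is still live.

Assumes `pip install huggingface_hub`. Run only against tokens you are
authorized to test during a disclosure process.
"""
from huggingface_hub import HfApi
from huggingface_hub.utils import HfHubHTTPError

def triage_token(token: str) -> None:
    api = HfApi(token=token)
    try:
        identity = api.whoami()  # resolves the account the token belongs to
    except HfHubHTTPError:
        print("Token appears revoked or invalid.")
        return
    name = identity.get("name")
    print(f"LIVE token for account: {name or '<unknown>'}")
    if name:
        # A live token may also surface private models the account can reach.
        for model in api.list_models(author=name, limit=5):
            print("  reachable model:", model.id)
```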

The report suggests that this rampant vulnerability stems from traditional security scanning practices that have become inadequate for the complexities of today’s AI landscape. Basic scans often miss severe threats, seeing only the tip of the iceberg while its larger mass lurks beneath the surface. To combat this, Wiz applied a more thorough scanning methodology dubbed “Depth, Perimeter, and Coverage”:

  • Depth: This extensive scan scrutinizes everything from full commit histories to deleted forks and gists—areas that most scanners neglect.
  • Perimeter: This aspect extends beyond the main company organization to members and contributors, who may unknowingly check sensitive company-related information into their own public repositories (sketched after this list).
  • Coverage: The scanning specifically seeks out newer, AI-related secret types, such as platform keys that traditional scanners frequently overlook.
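As an illustration of the perimeter step, the sketch below enumerates an organization’s public members via GitHub’s REST API and collects their personal public repositories as additional scan targets. The endpoints are GitHub’s standard public ones, but pagination, rate limiting, and authentication are deliberately simplified, and `example-org` is a placeholder.

```python
"""Perimeter sketch: treat org members' personal public repos as scan targets.

Uses GitHub's public REST API (pip install requests). Pagination and rate
limiting are elided for brevity; a real scanner must handle both.
"""
import requests

API = "https://api.github.com"

def perimeter_targets(org: str, token: str | None = None) -> list[str]:
    headers = {"Accept": "application/vnd.github+json"}
    if token:
        headers["Authorization"] = f"Bearer {token}"

    members = requests.get(f"{API}/orgs/{org}/members", headers=headers, timeout=30)
    members.raise_for_status()

    targets = []
    for member in members.json():
        login = member["login"]
        repos = requests.get(f"{API}/users/{login}/repos", headers=headers, timeout=30)
        repos.raise_for_status()
        # Each member repo becomes a candidate for the same deep scan
        # applied to the organization's own repositories.
        targets.extend(r["clone_url"] for r in repos.json())
    return targets

if __name__ == "__main__":
    for url in perimeter_targets("example-org"):
        print(url)
```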

This expanded attack surface presents significant risks, especially given the immaturity of security practices at many fast-moving startups. Alarmingly, nearly half of the firms notified about leaks either didn’t respond or lacked a formal disclosure channel for handling the alerts. The call to action for enterprise technology leaders is clear: firms need to recalibrate their security focus to cover both internal and external risks.

  1. Security leaders should view their employees as part of the attack surface. A Version Control System (VCS) member policy should be enforced during onboarding, encouraging practices like multi-factor authentication for personal accounts and strict separation between personal and professional accounts.
  2. Internal secret scanning should evolve beyond mere repository checks. Public VCS secret scanning should become non-negotiable, adopting the depth-oriented methodology to uncover hidden threats (a history-scanning sketch follows this list).
  3. This scrutiny must also broaden to the entire AI supply chain. For Chief Information Security Officers (CISOs), examining AI vendors' secrets management and vulnerability disclosure practices should be paramount.
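Point 2’s depth requirement can be partially approximated with stock git tooling: the sketch below replays every patch reachable from any ref and greps added lines for secret-shaped strings, catching credentials that were committed and later “deleted” from the working tree. It still misses dangling commits and deleted forks, which only platform-level scanning reaches, and the pattern set is again illustrative.

```python
"""Depth sketch: scan a repo's full history, not just the current tree.

Runs `git log --all -p` and greps each added line for secret-shaped strings.
Catches credentials committed and later removed; does NOT reach dangling
commits or deleted forks, which require platform-level scanning.
"""
import re
import subprocess

SECRET_RE = re.compile(r"hf_[A-Za-z0-9]{30,}|sk-[A-Za-z0-9]{20,}")  # illustrative

def scan_history(repo_path: str) -> list[str]:
    log = subprocess.run(
        ["git", "-C", repo_path, "log", "--all", "-p", "--unified=0"],
        capture_output=True, text=True, check=True, errors="ignore",
    ).stdout
    hits = []
    for line in log.splitlines():
        # Added lines in a patch start with "+" but not "+++" (file header).
        if line.startswith("+") and not line.startswith("+++"):
            if SECRET_RE.search(line):
                hits.append(line[1:].strip())
    return hits

if __name__ == "__main__":
    for secret in scan_history("."):
        print("historical secret-shaped string:", secret)
```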

Ultimately, the rapid development of tools and platforms fueling the next generation of technology is outpacing security governance—leading to inevitable vulnerabilities. As Wiz puts it, “For AI innovators, the message is clear: speed cannot compromise security.” This caution similarly applies to enterprises that depend on their innovation.

In this truly transformative era for AI, it’s essential for both developers and users to keep security at the forefront. After all, what good is incredible innovation if the foundations on which it’s built are riddled with cracks?
