Slopsquatting: How AI Hallucinations Create New Supply Chain Attacks

Slopsquatting is an emerging type of software supply chain attack that leverages AI hallucinations to compromise development environments. This attack vector exploits the tendency of large language models (LLMs) to recommend non-existent package names, which attackers then register and weaponize. As AI-assisted coding becomes increasingly prevalent, this sophisticated threat poses significant risks to developers and organizations relying on AI tools like GitHub Copilot, ChatGPT, and Cursor for code generation and dependency management[1].

Key Facts

  • Threat Name: Slopsquatting (AI Package Hallucination Attacks)
  • Type: Software supply chain attack, social engineering
  • Vector: AI-generated hallucinated package names
  • Primary Targets: Developers using AI coding assistants, CI/CD pipelines
  • Affected Ecosystems: npm (JavaScript), PyPI (Python), Maven, Go, RubyGems
  • Discovery: Term coined by PSF Developer-in-Residence Seth Larson, research by University of Texas at San Antonio, Virginia Tech, and University of Oklahoma
  • Risk Level: High (due to increasing adoption of AI coding assistants)
  • Primary Impact: Malicious code execution, credential theft, data exfiltration
  • Related Attacks: Typosquatting, dependency confusion

Impact and Risks

Slopsquatting attacks pose significant security risks to both individual developers and organizations:

  • Code Execution: Malicious packages can execute arbitrary code during installation or import
  • Credential Theft: Packages may harvest API keys, tokens, and credentials from development environments
  • Data Exfiltration: Sensitive information can be transmitted to attacker-controlled servers
  • Persistent Access: Compromised packages can establish backdoors for future exploitation, similar to backdoor malware techniques
  • Lateral Movement: Attackers can spread through CI/CD pipelines to production environments
  • Reputational Damage: Organizations may face decreased trust after security incidents
  • Business Disruption: Remediation efforts can interrupt development processes and workflows

What is Slopsquatting?

Slopsquatting is a sophisticated supply chain attack that exploits AI hallucinations in coding assistants. The term, coined by Seth Larson (Python Software Foundation Developer-in-Residence) and popularized by Andrew Nesbitt (Ecosyste.ms creator)[1], refers to attackers registering package names that don’t actually exist but are frequently hallucinated by AI tools like ChatGPT, GitHub Copilot, and other code generation models.

Unlike traditional typosquatting that relies on human typing errors, slopsquatting capitalizes on AI mistakes. When developers request code examples from AI assistants, these tools sometimes recommend non-existent packages that sound plausible. If a developer trusts this output without verification and attempts to install the hallucinated package, they may unwittingly download malicious code registered by an attacker who anticipated the AI’s hallucination.

Slopsquatting Attack Chain

  1. Attacker Research: Observe AI hallucinations by repeatedly prompting LLMs for code examples with dependencies
  2. Package Registration: Register the hallucinated package names with malicious code in public repositories
  3. Developer Prompt: A developer asks an AI assistant to generate code for a specific functionality
  4. AI Code Generation: The LLM generates code that references a hallucinated package name
  5. Implement & Install: The developer implements the code and installs the hallucinated package without verification
  6. System Compromise: The malicious package executes code, steals credentials, or establishes persistent access

Source: Technical analysis of slopsquatting attack methodology based on academic research by Spracklen et al., arXiv 2025
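
To make steps 1 and 2 concrete from the defender's side, here is a minimal sketch (hypothetical candidate names, using the public PyPI JSON API) that checks which package names collected from repeated LLM prompts are actually unregistered; those unregistered-but-suggested names are exactly what a slopsquatter could claim, and what a researcher or defender should watch.

```python
# Hypothetical sketch: given package names collected from repeated LLM
# prompts, check which ones do NOT exist on PyPI. The candidate names
# below are made up for illustration.
import requests

def exists_on_pypi(name: str) -> bool:
    # PyPI's JSON API returns 404 for packages that are not registered.
    resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
    return resp.status_code == 200

# Hypothetical names an LLM might have suggested across many prompts.
candidate_names = ["requests", "fastjson-utils", "openai-toolkit-py"]

for name in candidate_names:
    status = "exists" if exists_on_pypi(name) else "NOT REGISTERED (hallucination candidate)"
    print(f"{name}: {status}")
```

The same existence check works against other registries (for example npm) by swapping the endpoint.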

Research Findings

A comprehensive study published in 2025 by researchers from the University of Texas at San Antonio, Virginia Tech, and the University of Oklahoma provides the first large-scale analysis of package hallucinations by code-generating LLMs. The paper, titled “We Have a Package for You! A Comprehensive Analysis of Package Hallucinations by Code Generating LLMs,” revealed alarming statistics about the prevalence of this vulnerability[1].

Key Research Findings:

  • Widespread Hallucinations: 19.7% of all AI-recommended packages didn’t exist[1]
  • Model Comparison: Open-source models hallucinated at a 21.7% rate, compared with 5.2% for commercial models
  • Worst Offenders: CodeLlama 7B and 34B hallucinated in over 1/3 of outputs
  • Best Performance: GPT-4 Turbo had the lowest hallucination rate at 3.59%
  • Scale: Researchers observed over 205,000 unique hallucinated package names across all models
  • Consistency: 43% of hallucinated packages appeared in every test iteration, making them predictable targets[1]
  • Temperature Effect: Higher temperature settings significantly increased hallucination rates
  • Cross-Language Confusion: 8.7% of hallucinated Python packages were valid npm packages, suggesting cross-ecosystem confusion

Package Hallucination Rates by Model Type (data from "We Have a Package for You!", Spracklen et al., arXiv 2025):

  • Average across all models: 19.7%
  • Commercial models: 5.2%
  • Open-source models: 21.7%
  • Worst models (CodeLlama): 33.5%

Slopsquatting vs. Traditional Typosquatting

While slopsquatting shares similarities with traditional typosquatting, it represents a fundamentally different attack vector with unique characteristics:

| Characteristic | Traditional Typosquatting | Slopsquatting |
|---|---|---|
| Target Weakness | Human typing errors | AI hallucinations |
| Attack Predictability | Based on common mistyping patterns | Based on LLM training patterns and hallucinations |
| Effectiveness | Decreasing due to autocomplete features | Increasing with AI adoption in development |
| Preventability | Typing carefully, using autocomplete | Verifying packages with external sources |
| CAPEC Reference | CAPEC-630: TypoSquatting | Not yet classified (emerging threat) |
| Scale of Impact | Individual packages with similar names | Widely recommended hallucinated names |

Real-World Examples

While researchers have documented the phenomenon, they deliberately chose not to register any hallucinated package names to avoid potential misuse. However, malicious actors are already exploiting this vulnerability. Recent examples of similar supply chain attacks demonstrate how attackers are using package repositories as attack vectors:

  • Solana Cryptocurrency Exfiltration: Attackers published npm packages targeting Solana developers to steal private keys, using Gmail as an exfiltration channel. While not specifically slopsquatting, this illustrates the potential impact of malicious packages in development environments. Similar techniques have been observed in JavaScript malware campaigns.
  • Cross-Language Confusion Exploitation: Attackers registered Python packages with the same names as popular JavaScript packages, knowing AI tools often confuse the two ecosystems.
  • Temperature Manipulation: Security researchers discovered that by prompting models with higher temperature settings (more randomness), they could trigger more hallucinations, which attackers could potentially exploit.

The “Vibe Coding” Risk Multiplier

The risk of slopsquatting is significantly amplified by the rise of “vibe coding” – a term coined by Andrej Karpathy to describe a development workflow where developers describe what they want to build in natural language, and AI tools generate the implementation[1]. In this workflow, developers transition from writing code to curating AI-generated code, often with less scrutiny of individual components like package dependencies.

In traditional development, developers explicitly choose and install packages, often after researching them. In vibe coding workflows, packages may be suggested and installed as part of the AI-guided implementation process, with developers less likely to verify each dependency. This creates a perfect environment for slopsquatting attacks to succeed.

Detection and Prevention

Protecting against slopsquatting attacks requires a multi-layered approach that combines developer awareness, tooling improvements, and organizational policies:

For Individual Developers:

  • Verify Package Existence: Before installing any AI-recommended package, verify that it exists in the official repository and has an established history (see the verification sketch after this list)
  • Check Documentation: Look for official documentation for unfamiliar packages
  • Inspect Package Stats: Review download counts, contributor activity, and creation dates
  • Use Package Lockfiles: Implement lockfiles (package-lock.json, requirements.txt with pinned versions) to prevent unexpected package installations
  • Request Package Verification: Ask the AI assistant to double-check whether the package it recommended actually exists
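
As a minimal illustration of the existence, age, and popularity checks above, the following sketch queries the public npm registry and download-count APIs; the 90-day and 1,000-download thresholds are arbitrary assumptions for illustration, not established cut-offs.

```python
# Vet a single npm package before installing it: existence, creation
# date, and recent download volume. Thresholds are illustrative only.
from datetime import datetime, timezone
import requests

def vet_npm_package(name: str) -> None:
    meta = requests.get(f"https://registry.npmjs.org/{name}", timeout=10)
    if meta.status_code != 200:
        print(f"{name}: not found in the npm registry - do not install")
        return
    created = meta.json()["time"]["created"]  # e.g. "2012-02-27T19:22:07.370Z"
    created_dt = datetime.fromisoformat(created.replace("Z", "+00:00"))
    age_days = (datetime.now(timezone.utc) - created_dt).days

    dl = requests.get(f"https://api.npmjs.org/downloads/point/last-month/{name}", timeout=10)
    downloads = dl.json().get("downloads", 0) if dl.status_code == 200 else 0

    flags = []
    if age_days < 90:
        flags.append("created recently")
    if downloads < 1000:
        flags.append("low download count")
    verdict = "review carefully: " + ", ".join(flags) if flags else "looks established"
    print(f"{name}: {age_days} days old, {downloads} downloads last month -> {verdict}")

vet_npm_package("lodash")
```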

For Organizations:

  • Implement Private Registries: Use private package mirrors that only allow vetted packages
  • Deploy Supply Chain Security Tools: Implement tools like Socket, npm audit, or PyUp to scan for suspicious packages
  • Establish Approval Workflows: Require new dependencies to go through security review before production use
  • Train Developers: Educate teams about slopsquatting risks and proper package verification, similar to awareness training for hidden malware in developer tools
  • Monitor for New Dependencies: Implement alerting for new dependencies added to projects, as sketched below
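
A minimal example of such a dependency gate, assuming a plain requirements.txt and a hypothetical approved-packages.txt allowlist maintained through the security review process, could run as a CI step:

```python
# Illustrative CI gate: fail the build if requirements.txt pulls in a
# dependency that has not been explicitly approved. File names and the
# allowlist format are assumptions for this sketch.
import re
import sys
from pathlib import Path

def read_names(path: str) -> set[str]:
    names = set()
    for line in Path(path).read_text().splitlines():
        line = line.split("#", 1)[0].strip()
        if not line:
            continue
        # Keep only the package name, dropping version pins and extras.
        match = re.match(r"^[A-Za-z0-9._-]+", line)
        if match:
            names.add(match.group(0).lower())
    return names

approved = read_names("approved-packages.txt")   # hypothetical allowlist
requested = read_names("requirements.txt")

unapproved = sorted(requested - approved)
if unapproved:
    print("Unapproved dependencies found:", ", ".join(unapproved))
    sys.exit(1)                                  # fail the pipeline so the change gets reviewed
print("All dependencies are on the approved list.")
```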

For AI Tool Providers:

  • Implement Package Verification: Add real-time verification of recommended packages against package repositories (see the sketch after this list)
  • Add Warning Messages: Include warnings about unverified packages in generated code
  • Improve Training: Fine-tune models to reduce hallucination rates for package names
  • Self-Verification: Enable models to detect and correct their own hallucinations
  • Limit Package Vocabulary: Restrict package recommendations to a verified list of known-good packages
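
For illustration, a provider-side filter might look roughly like the sketch below: it extracts imported or installed package names from generated code and flags anything not found in a registry snapshot. The tiny in-memory allowlist and the sample output are stand-ins; a real assistant would resolve names against PyPI, npm, or a curated list.

```python
# Hedged sketch of provider-side verification: before showing generated
# code to a user, extract the packages it imports or installs and flag
# any name that cannot be resolved against a registry snapshot.
import re

KNOWN_PACKAGES = {"requests", "numpy", "pandas", "flask"}   # placeholder registry snapshot

def extract_package_names(generated_code: str) -> set[str]:
    names = set(re.findall(r"^\s*(?:import|from)\s+([A-Za-z0-9_]+)", generated_code, re.MULTILINE))
    names |= set(re.findall(r"pip install\s+([A-Za-z0-9._-]+)", generated_code))
    return names

def flag_unverified(generated_code: str) -> list[str]:
    return sorted(n for n in extract_package_names(generated_code) if n.lower() not in KNOWN_PACKAGES)

sample = "# pip install flask-json-auth\nimport requests\nimport flask_json_auth\n"  # hypothetical LLM output
print(flag_unverified(sample))   # -> ['flask-json-auth', 'flask_json_auth'] would trigger a warning
```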
Chart: Supply Chain Attacks by Vector (2020-2025), showing percentage growth relative to a 2020 baseline for traditional typosquatting, dependency confusion, and AI-related supply chain attacks; the chart highlights 395% and 263% growth figures and marks the first documented slopsquatting cases in 2025.

Source: Analysis of supply chain attack data from ENISA Threat Landscape 2025, GitGuardian State of Secrets Sprawl 2025, and Socket Security’s OSS Threat Report

Conclusion

Slopsquatting represents a new frontier in supply chain security threats, specifically targeting the growing integration of AI into development workflows. As AI coding assistants become more prevalent, the potential impact of these attacks will likely increase unless developers, organizations, and AI providers implement proper safeguards.

The research findings highlight the systemic nature of this problem: nearly 20% of all package recommendations were hallucinations, and some models approached a 35% hallucination rate. Most concerning is the finding that 43% of hallucinated names recur consistently across runs, making them predictable targets for attackers.

This threat underscores the importance of maintaining security vigilance even when using advanced AI tools. While AI coding assistants offer tremendous productivity benefits, they also introduce new attack surfaces that must be secured through awareness, verification processes, and improved tooling. As development workflows continue to evolve with AI integration, security practices must adapt to address these new challenges.

For a broader understanding of modern supply chain threats, see our analyses of malicious code behaviors and compromised development tools.

Frequently Asked Questions

How can I determine if a package recommended by an AI assistant is legitimate?

Before installing any package recommended by AI, verify its legitimacy by: checking if it exists in the official package repository (npm, PyPI, etc.), reviewing its download statistics and user count, examining when it was first published and its version history, looking for official documentation and a legitimate GitHub repository, and checking if it’s maintained by a reputable developer or organization. Be especially cautious of packages with low download counts, recent creation dates, or minimal documentation.

Which AI coding assistants are most vulnerable to generating hallucinated package names?

According to research, open-source models like CodeLlama (7B and 34B versions) showed the highest hallucination rates, with over 33% of recommendations being non-existent packages. Commercial models typically performed better, with GPT-4 Turbo demonstrating the lowest hallucination rate at 3.59%. Generally, smaller models, models with higher temperature settings (more randomness), and models that suggest a wider variety of packages are more prone to hallucinations. It’s important to verify package recommendations regardless of which AI assistant you’re using.

How do slopsquatting attacks differ from traditional typosquatting?

While both are forms of supply chain attacks targeting package names, they differ in fundamental ways. Traditional typosquatting exploits human typing errors (like installing “lodahs” instead of “lodash”), relying on predictable misspelling patterns. Slopsquatting, however, exploits AI hallucinations – package names that AI systems confidently but incorrectly generate, which developers then trust and install. Typosquatting is becoming less effective due to autocomplete and development tools, while slopsquatting is an emerging threat specifically targeting AI-assisted development workflows. The impact of slopsquatting could potentially be greater as AI assistants might recommend the same hallucinated package to many developers.

Can organizational package policies prevent slopsquatting attacks?

Yes, organizations can significantly mitigate slopsquatting risks by implementing package policies like: using private package registries that only mirror verified packages, requiring approval workflows for new dependencies, implementing automated scanning of dependencies for suspicious packages, enforcing the use of package lockfiles with exact versions, and conducting regular audits of project dependencies. Tools that provide real-time monitoring of package installations, such as Socket, can also help detect potentially malicious packages before they’re widely used in an organization.

What should I do if I suspect I’ve installed a slopsquatting package?

If you suspect you’ve installed a malicious package through slopsquatting: immediately remove the package from your project, scan your system for any persistent threats or backdoors, rotate any credentials that might have been exposed (API keys, tokens, passwords), check your project for any modifications that might have been made by the malicious package, review your repository commit history for any suspicious changes, and report the package to the relevant repository administrators. Also examine your network logs for any unusual outbound connections that might indicate data exfiltration. For organizations, follow your security incident response process and consider engaging a security team to assess the full impact.

References

  1. Gooding, S. (2025, April 8). The Rise of Slopsquatting: How AI Hallucinations Are Fueling a New Class of Supply Chain Attacks. Socket Blog.