Slopsquatting is an emerging type of software supply chain attack that leverages AI hallucinations to compromise development environments. This attack vector exploits the tendency of large language models (LLMs) to recommend non-existent package names, which attackers then register and weaponize. As AI-assisted coding becomes increasingly prevalent, this sophisticated threat poses significant risks to developers and organizations relying on AI tools like GitHub Copilot, ChatGPT, and Cursor for code generation and dependency management[1].
Slopsquatting attacks pose significant security risks to both individual developers and organizations.
Slopsquatting is a sophisticated supply chain attack that exploits AI hallucinations in coding assistants. The term, coined by Seth Larson (Python Software Foundation Developer-in-Residence) and popularized by Andrew Nesbitt (Ecosyste.ms creator)[1], refers to attackers registering package names that don’t actually exist but are frequently hallucinated by AI tools like ChatGPT, GitHub Copilot, and other code generation models.
Unlike traditional typosquatting that relies on human typing errors, slopsquatting capitalizes on AI mistakes. When developers request code examples from AI assistants, these tools sometimes recommend non-existent packages that sound plausible. If a developer trusts this output without verification and attempts to install the hallucinated package, they may unwittingly download malicious code registered by an attacker who anticipated the AI’s hallucination.
Source: Technical analysis of slopsquatting attack methodology based on academic research by Spracklen et al., arXiv 2025
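As a minimal illustration of the verification step described above, the sketch below checks whether an AI-suggested package actually exists on the index before anything is installed. It assumes the public PyPI JSON API and the third-party `requests` library; the package name is a hypothetical placeholder, not a real recommendation.

```python
# Minimal sketch: confirm an AI-suggested package exists on PyPI before installing it.
# Assumes the public PyPI JSON API and the `requests` library; the package name
# below is a hypothetical placeholder.
import requests

def exists_on_pypi(package_name: str) -> bool:
    """Return True if the package is published on PyPI."""
    resp = requests.get(f"https://pypi.org/pypi/{package_name}/json", timeout=10)
    return resp.status_code == 200

suggested = "example-ai-suggested-package"  # hypothetical name from an AI assistant
if not exists_on_pypi(suggested):
    print(f"'{suggested}' is not on PyPI - it may be a hallucinated name; do not install it.")
```

Existence alone is not proof of safety (an attacker may have already registered the hallucinated name), which is why the additional metadata checks discussed later still matter.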
A comprehensive study published in 2025 by researchers from the University of Texas at San Antonio, Virginia Tech, and the University of Oklahoma provides the first large-scale analysis of package hallucinations by code-generating LLMs. The paper, titled “We Have a Package for You! A Comprehensive Analysis of Package Hallucinations by Code Generating LLMs,” revealed alarming statistics about the prevalence of this vulnerability[1].
While slopsquatting shares similarities with traditional typosquatting, it represents a fundamentally different attack vector with unique characteristics:
| Characteristic | Traditional Typosquatting | Slopsquatting |
|---|---|---|
| Target Weakness | Human typing errors | AI hallucinations |
| Attack Predictability | Based on common mistyping patterns | Based on LLM training patterns and hallucinations |
| Effectiveness | Decreasing due to autocomplete features | Increasing with AI adoption in development |
| Preventability | Typing carefully, using autocomplete | Verifying packages with external sources |
| CAPEC Reference | CAPEC-630: TypoSquatting | Not yet classified (emerging threat) |
| Scale of Impact | Individual packages with similar names | Widely recommended hallucinated names |
While researchers have documented the phenomenon, they deliberately chose not to register any hallucinated package names to avoid potential misuse. However, malicious actors are already exploiting this vulnerability, and recent examples of similar supply chain attacks demonstrate how attackers are using package repositories as attack vectors.
The risk of slopsquatting is significantly amplified by the rise of “vibe coding” – a term coined by Andrej Karpathy to describe a development workflow where developers describe what they want to build in natural language, and AI tools generate the implementation[1]. In this workflow, developers transition from writing code to curating AI-generated code, often with less scrutiny of individual components like package dependencies.
In traditional development, developers explicitly choose and install packages, often after researching them. In vibe coding workflows, packages may be suggested and installed as part of the AI-guided implementation process, with developers less likely to verify each dependency. This creates a perfect environment for slopsquatting attacks to succeed.
Protecting against slopsquatting attacks requires a multi-layered approach that combines developer awareness, tooling improvements, and organizational policies.
Source: Analysis of supply chain attack data from ENISA Threat Landscape 2025, GitGuardian State of Secrets Sprawl 2025, and Socket Security’s OSS Threat Report
Slopsquatting represents a new frontier in supply chain security threats, specifically targeting the growing integration of AI into development workflows. As AI coding assistants become more prevalent, the potential impact of these attacks will likely increase unless developers, organizations, and AI providers implement proper safeguards.
The research findings highlight the systemic nature of this problem – with nearly 20% of all package recommendations being hallucinations, and some models approaching a 35% hallucination rate. Most concerning is the finding that 43% of hallucinations are highly consistent, making them predictable targets for attackers.
This threat underscores the importance of maintaining security vigilance even when using advanced AI tools. While AI coding assistants offer tremendous productivity benefits, they also introduce new attack surfaces that must be secured through awareness, verification processes, and improved tooling. As development workflows continue to evolve with AI integration, security practices must adapt to address these new challenges.
For a broader understanding of modern supply chain threats, see our analyses of malicious code behaviors and compromised development tools.
Before installing any package recommended by AI, verify its legitimacy by: checking if it exists in the official package repository (npm, PyPI, etc.), reviewing its download statistics and user count, examining when it was first published and its version history, looking for official documentation and a legitimate GitHub repository, and checking if it’s maintained by a reputable developer or organization. Be especially cautious of packages with low download counts, recent creation dates, or minimal documentation.
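A rough sketch of how some of these checks can be automated for PyPI packages is shown below. It assumes the public PyPI JSON API and the `requests` library; the 30-day and release-count thresholds are illustrative rather than established cut-offs, and download statistics (which the JSON API does not expose) would need a separate service such as pypistats.org.

```python
# Rough sketch: pull basic metadata for a package from the PyPI JSON API and flag
# signals worth reviewing (very new packages, very few releases, missing project links).
# Assumes the `requests` library; thresholds below are illustrative only.
from datetime import datetime, timezone
import requests

def inspect_pypi_package(name: str) -> None:
    resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
    if resp.status_code != 200:
        print(f"{name}: not found on PyPI - possibly hallucinated or removed")
        return

    data = resp.json()
    info = data["info"]
    releases = data.get("releases", {})

    # Earliest upload time across all release files, if any are listed.
    upload_times = [
        datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
        for files in releases.values()
        for f in files
    ]
    first_seen = min(upload_times) if upload_times else None

    print(f"name:         {info['name']}")
    print(f"releases:     {len(releases)}")
    print(f"first upload: {first_seen}")
    print(f"project urls: {info.get('project_urls') or 'none listed'}")

    if first_seen and (datetime.now(timezone.utc) - first_seen).days < 30:
        print("warning: first published less than 30 days ago - review carefully")
    if len(releases) < 3:
        print("warning: very few releases - review carefully")

inspect_pypi_package("requests")  # replace with the package you want to vet
```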
According to research, open-source models like CodeLlama (7B and 34B versions) showed the highest hallucination rates, with over 33% of recommendations being non-existent packages. Commercial models typically performed better, with GPT-4 Turbo demonstrating the lowest hallucination rate at 3.59%. Generally, smaller models, models with higher temperature settings (more randomness), and models that suggest a wider variety of packages are more prone to hallucinations. It’s important to verify package recommendations regardless of which AI assistant you’re using.
While both are forms of supply chain attacks targeting package names, they differ in fundamental ways. Traditional typosquatting exploits human typing errors (like installing “lodahs” instead of “lodash”), relying on predictable misspelling patterns. Slopsquatting, however, exploits AI hallucinations – package names that AI systems confidently but incorrectly generate, which developers then trust and install. Typosquatting is becoming less effective due to autocomplete and development tools, while slopsquatting is an emerging threat specifically targeting AI-assisted development workflows. The impact of slopsquatting could potentially be greater as AI assistants might recommend the same hallucinated package to many developers.
Yes, organizations can significantly mitigate slopsquatting risks by implementing package policies like: using private package registries that only mirror verified packages, requiring approval workflows for new dependencies, implementing automated scanning of dependencies for suspicious packages, enforcing the use of package lockfiles with exact versions, and conducting regular audits of project dependencies. Tools that provide real-time monitoring of package installations, such as Socket, can also help detect potentially malicious packages before they’re widely used in an organization.
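As one possible building block for such an approval workflow, the sketch below fails a CI run when a pinned dependency is not on an internally approved list. The file names `approved_packages.txt` and `requirements.txt` are assumptions made for the sake of the example, not a standard convention.

```python
# Minimal sketch of a CI gate: compare project dependencies against an internal
# allowlist and fail the build if anything unapproved appears.
from pathlib import Path
import re
import sys

def read_package_names(path: str) -> set[str]:
    """Extract bare, lower-cased package names from a requirements-style file."""
    names = set()
    for line in Path(path).read_text().splitlines():
        line = line.split("#")[0].strip()  # drop comments and surrounding whitespace
        if not line or line.startswith("-"):  # skip blank lines and pip options
            continue
        match = re.match(r"[A-Za-z0-9._-]+", line)  # name before extras/version specifiers
        if match:
            names.add(match.group(0).lower())
    return names

approved = read_package_names("approved_packages.txt")  # assumed internal allowlist
requested = read_package_names("requirements.txt")      # pinned project dependencies
unapproved = sorted(requested - approved)

if unapproved:
    print("Dependencies not on the approved list:", ", ".join(unapproved))
    sys.exit(1)  # fail the CI job until the new packages are reviewed
```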
If you suspect you’ve installed a malicious package through slopsquatting: immediately remove the package from your project, scan your system for any persistent threats or backdoors, rotate any credentials that might have been exposed (API keys, tokens, passwords), check your project for any modifications that might have been made by the malicious package, review your repository commit history for any suspicious changes, and report the package to the relevant repository administrators. Also examine your network logs for any unusual outbound connections that might indicate data exfiltration. For organizations, follow your security incident response process and consider engaging a security team to assess the full impact.
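A small sketch of one of these steps, auditing an already-affected Python environment, is shown below. It assumes the public PyPI JSON API and the `requests` library; packages installed from private or internal indexes will also be flagged and need manual review rather than automatic removal.

```python
# Minimal sketch of a post-incident audit: enumerate installed distributions and flag
# any that cannot be found on PyPI, which may indicate a hallucinated or since-removed
# package. Packages from private indexes will also appear here and need manual review.
from importlib.metadata import distributions
import requests

suspicious = []
for dist in distributions():
    name = dist.metadata["Name"]
    resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
    if resp.status_code == 404:
        suspicious.append(name)

if suspicious:
    print("Installed packages that do not resolve on PyPI (review these first):")
    for name in suspicious:
        print(f"  - {name}")
else:
    print("All installed distributions resolve on PyPI.")
```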