
Alerts

Possible typosquat attack

Package name is similar to other popular packages and may not be the package you want.

Known malware

This package version is identified as malware. It has been flagged either by Socket's AI scanner and confirmed by our threat research team, or is listed as malicious in security databases and other sources.

Git dependency

Contains a dependency which resolves to a remote git URL. Dependencies fetched from git URLs are not immutable and can be used to inject untrusted code or reduce the likelihood of a reproducible install.

GitHub dependency

Contains a dependency which resolves to a GitHub URL. Dependencies fetched from GitHub specifiers are not immutable and can be used to inject untrusted code or reduce the likelihood of a reproducible install.

AI-detected potential malware

AI has identified this package as malware. This is a strong signal that the package may be malicious.

HTTP dependency

Contains a dependency which resolves to a remote HTTP URL which could be used to inject untrusted code and reduce overall package reliability.
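The three remote-specifier alerts above (git, GitHub, and HTTP dependencies) all flag dependency entries like the following hypothetical package.json fragment; the package names and hosts are invented for illustration. None of these specifiers resolve through the registry's immutable version archive, so the code behind them can change between installs.

```json
{
  "dependencies": {
    "left-util": "git+https://example.com/someuser/left-util.git",
    "tiny-helper": "someuser/tiny-helper#main",
    "legacy-lib": "https://example.com/downloads/legacy-lib-1.0.0.tgz"
  }
}
```

Pinning the git-based specifiers to a full commit hash (rather than a branch name) mitigates, but does not eliminate, the reproducibility concern.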

Obfuscated code

Obfuscated files are intentionally packed to hide their behavior. This could be a sign of malware.

Suspicious Stars on GitHub

The GitHub repository of this package may have been artificially inflated with stars (from bots, crowdsourcing, etc.).

Telemetry

This package contains telemetry which tracks how it is used.

Protestware or potentially unwanted behavior

This package is a joke, parody, or includes undocumented or hidden behavior unrelated to its primary function.

Unstable ownership

A new collaborator has begun publishing package versions. Package stability and security risk may be elevated.

Uses eval

Package uses dynamic code execution (e.g., eval()), which is a dangerous practice. This can prevent the code from running in certain environments and increases the risk that the code may contain exploits or malicious behavior.
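A minimal sketch of the risk, with invented values: eval() executes whatever string it is handed, while a data-only parser such as JSON.parse can never produce executable code.

```javascript
// Hypothetical illustration: dynamic code execution via eval().
// If this string came from an untrusted source, eval() would execute
// arbitrary code in the package's context.
const userInput = "2 + 2";
const risky = eval(userInput); // runs whatever string it is handed

// A safer pattern for the common "parse a value" case is JSON.parse,
// which can only produce data, never executable code.
const safe = JSON.parse('{"answer": 4}');

console.log(risky, safe.answer); // 4 4
```

Dynamic evaluation also defeats bundlers and Content-Security-Policy settings that forbid `unsafe-eval`, which is why some environments refuse to run such code at all.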

AI-detected possible typosquat

AI has identified this package as a potential typosquat of a more popular package. This suggests that the package may be intentionally mimicking another package's name, description, or other metadata.

AI-detected potential security risk

AI has determined that this package may contain potential security issues or vulnerabilities.

Potential vulnerability

Initial human review suggests the presence of a vulnerability in this package. It is pending further analysis and confirmation.

Recently published

According to your configuration, this artifact has been recently published, which could increase supply chain risk.

Shell access

This module accesses the system shell. Accessing the system shell increases the risk of executing arbitrary code.

Trivial Package

Packages with fewer than 10 lines of code are easily copied into your own project and may not warrant the additional supply chain risk of an external dependency.

Native code

Contains native code (e.g., compiled binaries or shared libraries). Including native code can obscure malicious behavior.

Non-existent author

The package was published by an npm account that no longer exists.

Filesystem access

This module accesses the file system and could potentially read sensitive data.

AI-detected potential code anomaly

AI has identified unusual behaviors that may pose a security risk.

High entropy strings

Contains high entropy strings. This could be a sign of encrypted data, leaked secrets, or obfuscated code.
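A minimal sketch of the kind of measure a scanner might use here: Shannon entropy over a string's characters. Repetitive text scores low; random-looking material (keys, ciphertext, packed code) scores high. The sample strings and any threshold a scanner would apply are illustrative, not Socket's actual heuristics.

```javascript
// Shannon entropy in bits per character: -sum(p * log2(p)) over the
// frequency p of each distinct character in the string.
function shannonEntropy(s) {
  const counts = {};
  for (const ch of s) counts[ch] = (counts[ch] || 0) + 1;
  let entropy = 0;
  for (const ch in counts) {
    const p = counts[ch] / s.length;
    entropy -= p * Math.log2(p);
  }
  return entropy;
}

console.log(shannonEntropy('aaaaaaaa'));         // 0: a single repeated character
console.log(shannonEntropy('A9f$kQ2!zX7vB5nM')); // 4: 16 distinct characters, uniform
```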

New author

A new npm collaborator published a version of the package for the first time. New collaborators are usually benign additions to a project, but do indicate a change to the security surface area of a package.

URL strings

Package contains fragments of external URLs or IP addresses, which the package may be accessing at runtime.
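A rough sketch of how such fragments might be surfaced: a regex scan of package source for http(s) URLs and bare IPv4 addresses. The sample code, host, and pattern are invented for illustration; real scanners are considerably more thorough.

```javascript
// Hypothetical package source to scan.
const code = `
  fetch("https://example.com/collect?id=" + userId);
  const backup = "203.0.113.42";
`;

// Match http(s) URLs up to whitespace/quote boundaries, plus dotted-quad IPs.
const urlPattern = /https?:\/\/[^\s"')]+|\b\d{1,3}(?:\.\d{1,3}){3}\b/g;
const hits = code.match(urlPattern) || [];
console.log(hits); // one URL and one IP address
```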

Ecosystem-Specific Alerts

Chrome Extensions

Chrome: Permission

This Chrome extension requests permissions to access browser APIs, user data, or system features.

Chrome: Wildcard Host Permission

This Chrome extension requests wildcard host permissions that grant broad access to websites.

Chrome: Content Script

This Chrome extension includes content scripts that execute JavaScript on specified websites.

Chrome: Host Permission

This Chrome extension requests host permissions to access specific websites or domains.
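The four Chrome alerts above correspond to specific fields in an extension's manifest. This hypothetical Manifest V3 fragment (names and domains invented) shows a permission request, a wildcard host permission granting access to every site, a narrower host permission, and a content script:

```json
{
  "name": "Example Extension",
  "manifest_version": 3,
  "permissions": ["tabs", "storage"],
  "host_permissions": ["<all_urls>"],
  "content_scripts": [
    {
      "matches": ["https://*.example.com/*"],
      "js": ["inject.js"]
    }
  ]
}
```

The `<all_urls>` pattern (and equivalents such as `*://*/*`) is what triggers the wildcard host permission alert, since it lets the extension read and modify data on any website.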

GitHub Actions

GitHub Actions: GitHub context variable flows to dangerous sink

A GitHub context variable (such as issue title, PR description, or comment body) flows into a dangerous sink (such as shell command execution). This is a critical security issue that could allow command injection or code execution attacks.

GitHub Actions: Input argument flows to dangerous sink

An input argument to this GitHub Action flows into a dangerous sink (such as shell command execution). This could allow a malicious user to inject commands or exploit the action.

GitHub Actions: Environment variable flows to dangerous sink

An environment variable flows into a dangerous sink (such as shell command execution). If this environment variable comes from an untrusted source, it could be exploited to inject commands.

GitHub Actions: GitHub context variable exported as environment variable

A GitHub context variable (such as issue title, PR description, or comment body) is being exported as an environment variable. These context values are user-controlled and could be exploited by subsequent workflow steps.

GitHub Actions: GitHub context variable passed back as output

A GitHub context variable (such as issue title, PR description, or comment body) is being passed back as an output. These context values are user-controlled and could be exploited by consuming workflows.

GitHub Actions: Input argument exported as environment variable

An input argument to this GitHub Action is being exported as an environment variable. If a user of this action passes untrusted input, it could be used in an insecure manner by subsequent workflow steps.

GitHub Actions: Input argument passed back as output

An input argument to this GitHub Action is being passed back as an output. If a user of this action passes untrusted input, it could be used in an insecure manner by consuming workflows.

NPM

Shrinkwrap

Package contains a shrinkwrap file. This may allow the package to bypass normal install procedures.

Install scripts

Install scripts are run when the package is installed or built. Malicious packages often use scripts that run automatically to execute payloads or fetch additional code.
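A hypothetical package.json fragment (names invented) showing the lifecycle hooks in question: npm runs these automatically during install, before the consuming project ever imports the package.

```json
{
  "name": "example-package",
  "version": "1.0.0",
  "scripts": {
    "preinstall": "node check-env.js",
    "postinstall": "node setup.js"
  }
}
```

Running `npm install --ignore-scripts` prevents these hooks from executing, at the cost of breaking packages that legitimately depend on them (e.g., for native builds).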

Manifest confusion

This package has inconsistent metadata. This could be malicious or caused by an error when publishing the package.

Dynamic require

Dynamic require can indicate the package is performing dangerous or unsafe dynamic code execution.

Debug access

Uses debug, reflection and dynamic code execution features.

Agent Skills

Skill: Command injection

AI agent skill contains shell command execution, pipe-to-shell patterns, or download-and-execute sequences that could allow arbitrary code execution.

Skill: Data exfiltration

AI agent skill accesses sensitive data such as environment variables, credentials, or home directory files and may transmit them to external endpoints.

Skill: Hardcoded secrets

AI agent skill contains hardcoded API keys, tokens, private keys, or other credentials that could be exploited if the skill is distributed.

Skill: Prompt injection

AI agent skill attempts to override AI safety guidelines through instruction override, role reassignment, jailbreak attempts, or system prompt manipulation.

Skill: Tool chaining attack

AI agent skill chains multiple tools or capabilities together in a way that could amplify a security breach beyond any single tool's access.

Skill: Code obfuscation

AI agent skill uses hex encoding, Unicode escapes, compressed payloads, or encrypted archives to hide its true behavior from review.

Skill: Resource abuse

AI agent skill contains patterns that could exhaust system resources such as fork bombs, memory exhaustion, or large file creation.

Skill: Supply chain risk

AI agent skill installs unpinned dependencies, references external scripts, or directs agents to download software from untrusted sources.

Skill: Tool abuse

AI agent skill performs broad file system manipulation, network scanning, or system registry modification beyond what its stated purpose requires.

Skill: Transitive trust abuse

AI agent skill loads or invokes other external skills, creating a chain of trust that could introduce untrusted code or behavior.

Skill: Autonomy abuse

AI agent skill exhibits excessive autonomy patterns such as unbounded loops, self-modification, or remote instruction fetching that could lead to uncontrolled behavior.

Skill: Discovery abuse

AI agent skill attempts to enumerate agent capabilities or extract system prompts, which could aid an attacker in planning further exploits.

VS Code Extensions

VS Code: Proposed APIs Enabled

This VS Code extension enables proposed APIs, which may be unstable and expand capabilities beyond stable surfaces.

VS Code: Broad activation events

This extension activates on wildcard or startup events, increasing its runtime surface and potential impact.
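These activation settings live in the extension's package.json. A hypothetical fragment (extension name invented) showing the two broad patterns this alert targets — the wildcard event and startup activation — both of which cause the extension's code to load regardless of what the user is doing:

```json
{
  "name": "example-extension",
  "activationEvents": [
    "*",
    "onStartupFinished"
  ]
}
```

Narrow events such as `onLanguage:python` or `onCommand:example.run` keep the extension dormant until it is actually needed.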

VS Code: Debugger contribution

This extension contributes a Debugger which can interact deeply with runtime targets.

VS Code: Workspace file pattern activation

This extension activates based on workspace file patterns, which can be broad depending on configuration.

VS Code: Webview contribution

This extension contributes a Webview, allowing custom UI and potential remote content usage.

VS Code: Extension dependency

This extension depends on other extensions at runtime.

VS Code: Extension pack

This extension packs other extensions.

VS Code: Untrusted workspaces support

This extension declares support for untrusted workspaces, which may relax certain VS Code safety constraints.

VS Code: Virtual workspaces support

This extension supports virtual workspaces (e.g., remote file systems), which changes trust and file access assumptions.