add-educational-comments
Safety Score
Detected Capabilities
Sensitive Files
- Clean filesystem scan
Deep Audit Findings
The skill instructs an AI agent to read, modify, and rewrite code files to add educational comments, while also allowing it to fetch external URLs for references. The primary security concerns include arbitrary file modification (path traversal) and potential Server-Side Request Forgery (SSRF) via the user-controlled 'Fetch List'. Additionally, the static analyzer flagged a potential injection vector due to unusual instructions simulating user keyboard input. Proper sandboxing and human-in-the-loop validation are necessary.
Arbitrary File Modification and Path Traversal Risk
The agent is instructed to modify target file(s) specified by the user. Without path validation or sandboxing controls at the framework level, an attacker could direct the agent to read or overwrite sensitive system files, configuration files, or scripts outside the intended workspace.
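A framework-level mitigation is to resolve every user-supplied path against a fixed workspace root and reject anything that escapes it. A minimal sketch (the `resolve_in_workspace` helper and the workspace layout are illustrative, not part of the audited skill):

```python
from pathlib import Path

def resolve_in_workspace(workspace: str, user_path: str) -> Path:
    """Resolve user_path inside workspace; refuse anything outside it."""
    root = Path(workspace).resolve()
    # Joining an absolute user_path replaces root entirely, and
    # resolve() collapses "../" segments -- both cases land outside
    # root and are rejected by the containment check below.
    candidate = (root / user_path).resolve()
    if not candidate.is_relative_to(root):
        raise ValueError(f"path escapes workspace: {user_path}")
    return candidate
```

`is_relative_to` requires Python 3.9+; on older versions the same check can be done with `os.path.commonpath`.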
Potential Server-Side Request Forgery (SSRF) via Fetch List
The prompt supports an optional `Fetch List` for URLs. If the agent's underlying runtime automatically fetches these URLs without validation, a malicious user could provide local IP addresses, cloud metadata URLs, or internal endpoints, turning the agent into a proxy for internal network scanning.
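A runtime that honors the `Fetch List` could screen each URL before fetching, rejecting hosts that resolve to private, loopback, or link-local ranges (which covers cloud metadata endpoints such as `169.254.169.254`). A sketch under those assumptions (note the time-of-check caveat: DNS rebinding can defeat a pre-fetch check unless the resolved address is pinned for the actual request):

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_url(url: str) -> bool:
    """Reject non-HTTP(S) URLs and hosts resolving to internal ranges."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False
    try:
        infos = socket.getaddrinfo(parsed.hostname, None)
    except socket.gaierror:
        return False
    for info in infos:
        addr = ipaddress.ip_address(info[4][0])
        # is_link_local catches 169.254.0.0/16 (cloud metadata).
        if (addr.is_private or addr.is_loopback
                or addr.is_link_local or addr.is_reserved):
            return False
    return True
```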
Suspicious Keystroke/Input Instruction
The prompt contains an unusual instruction to 'Input data as if typed on the user's keyboard.' While likely intended to enforce plain text output without copy-paste artifacts, in certain automation frameworks, this could be misinterpreted as a command to simulate keystrokes, potentially leading to unintended OS-level interactions.
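A static analyzer can surface such phrasing with simple pattern matching before the prompt ever reaches an automation framework. A minimal sketch; the pattern list is illustrative, not the analyzer's actual rule set:

```python
import re

# Hypothetical phrases worth flagging as keystroke-simulation
# instructions; a real analyzer would maintain a broader list.
SUSPICIOUS_PATTERNS = [
    r"as if typed on (the )?user'?s keyboard",
    r"simulate (user )?keystrokes?",
]

def flag_keystroke_instructions(prompt: str) -> list:
    """Return the patterns that match the prompt text, if any."""
    return [p for p in SUSPICIOUS_PATTERNS
            if re.search(p, prompt, re.IGNORECASE)]
```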
Risk of Code Corruption via Automated Edits
The agent is instructed to increase the file line count by 125%. LLMs are prone to hallucinations, and forcing such a massive output volume into existing code could inadvertently break execution, alter namespaces, or introduce subtle logic bugs despite the prompt's warnings.
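One human-in-the-loop safeguard is to verify that a rewrite is genuinely comment-only before accepting it. For Python sources this falls out of the AST: comments never appear in the parse tree, so any change to executable code changes the tree. A sketch under that assumption (other languages would need their own parsers):

```python
import ast

def comments_only_change(before: str, after: str) -> bool:
    """True if the rewrite left the program's AST unchanged, i.e.
    only comments or whitespace were added (Python sources only).
    Note: docstrings ARE AST nodes, so adding one is rejected too."""
    try:
        return ast.dump(ast.parse(before)) == ast.dump(ast.parse(after))
    except SyntaxError:
        return False
```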
Attack Surface Chain
1. An attacker provides a malicious target file path (e.g., `../.env` or `/etc/hosts`) or includes an internal IP in the 'Fetch List' parameter.
2. The agent executes its prompt instructions, attempting to read the targeted out-of-bounds file or making an HTTP request to the internal URL.
3. The agent rewrites the targeted sensitive file with comments, causing system malfunction or data leakage, or returns information from the internal network (SSRF).