RESEARCH
24 March 2026
•
7 min read
Northeastern researchers deployed 6 autonomous AI agents unsupervised. They leaked data, taught each other to bypass safety rules, and one tried to delete a mail server. In every case, the failure traced back to the skills.
Read the research
CRITICAL
23 March 2026
•
5 min read
A compromised Trivy GitHub Action hit 12,000+ projects by scraping API keys from memory. Cargo CVE-2026-33056 enables build-time code execution. Both affected AI skill developers this week. Here's what to do now.
Read the incident analysis
FRAMEWORK
23 March 2026
•
12 min read
OWASP just published the first security framework for AI agent tools — and the numbers are alarming: an 84.2% tool poisoning success rate. This article maps each risk to what SkillShield detects.
Read the checklist
CASE STUDY
23 March 2026
•
10 min read
In February 2026, a developer installed a skill into their OpenClaw agent. Within two weeks, they discovered their API keys, service tokens, and browser cookies had been systematically exfiltrated.
Read the case study
FRAMEWORK
22 March 2026
•
9 min read
Databricks released DASF v3.0 on March 20, 2026 — adding 35 new agentic AI security risks and 6 new controls. Learn how SkillShield maps to the Databricks AI Security Framework.
Read the framework mapping
COMPARISON
22 March 2026
•
8 min read
AgentAudit found 35% of MCP packages have vulnerabilities. Compare SkillShield vs MCP-Shield to see which MCP security scanner fits your workflow.
Read the comparison
SECURITY
21 March 2026
•
6 min read
Gravitee's 2026 survey of 900+ orgs: 88% had AI agent security incidents, but only 14.4% have approval gates. The confidence paradox, the supply chain vector, and how pre-deployment scanning closes the gap.
Read the research
GUIDE
21 March 2026
•
5 min read
This guide walks through a practical audit of your installed agent skills using the OWASP framework as the checklist, with SkillShield as the scanning layer for the categories it covers.
Read the guide
RESEARCH
21 March 2026
•
5 min read
An analysis of over 30,000 skills across major registries found that 25% of the skills developers are installing — skills running with access to their file systems, API keys, and development environments — carry security risks.
Read the research
RESEARCH
20 March 2026
•
5 min read
In February 2026, security researchers at Koi Security identified a coordinated malware campaign targeting OpenClaw developers through ClawHub, the primary skill marketplace. The campaign...
Read the research
GUIDE
20 March 2026
•
5 min read
In early 2026, OWASP launched the Agent Security Initiative (ASI) — the first comprehensive security standard for AI agent systems. ASI04 specifically addresses agent tool and skill security...
Read the guide
RESEARCH
20 March 2026
•
5 min read
OWASP released the Top 10 for Agentic Applications in late 2025, and it's already the canonical reference for AI agent security. Developed by 100+ industry experts, researchers, and practitioners...
Read the research
GUIDE
20 March 2026
•
5 min read
In March 2026, the Model Context Protocol (MCP) ecosystem experienced what security researchers are calling a "CVE burst" — a cluster of related security vulnerabilities disclosed within...
Read the guide
GUIDE
19 March 2026
•
5 min read
CVE-2026-25253 isn't a subtle misconfiguration. It's a CVSS 9.6 — critical — remote code execution vulnerability that allows an attacker to take over an AI agent runtime via WebSocket before...
Read the guide
CRITICAL
18 March 2026
•
8 min read
Krebs on Security documented CurXecute: a malicious MCP server instructed Cursor AI to execute arbitrary shell commands without user approval. Here's why pre-execution scanning is essential.
Read the analysis
ANALYSIS
17 March 2026
•
10 min read
OpenAI's Agents SDK guardrails have specific coverage boundaries. Learn what they handle, where the gaps are, and why tool-level review is still essential for production security.
Read the analysis
RESEARCH
17 March 2026
•
5 min read
In February 2026, Snyk scanned 3,984 skills on ClawHub and found 36% contain security flaws — 76 confirmed malicious payloads still active. Here's what ToxicSkills means for OpenClaw users, and why static scanners miss 60% of the risk.
Read the research
GUIDE
16 March 2026
•
5 min read
Claude Code is one of the most capable AI coding agents available. It can read files, execute commands, query databases, and interact with APIs. But with that power comes risk — especially...
Read the guide
GUIDE
15 March 2026
•
5 min read
Practical guide to agent-scoped MCP isolation: implement strict tool boundaries, subagent-only access, and predictable scope propagation in Claude Code and OpenAI Agents.
Read the guide
GUIDE
15 March 2026
•
12 min read
Integrate SkillShield into GitHub Actions, GitLab CI, and containerized workflows. Block merges on high-severity findings and automate skill security scanning.
Read the guide
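As a rough sketch of the merge-blocking step such a pipeline might run: the report shape, finding fields, and severity labels below are illustrative assumptions, not SkillShield's documented output format.

```python
# Hypothetical scan report, shaped like the (assumed) output of a
# JSON-emitting scan step. Canned here so the gating logic is runnable.
report = {
    "findings": [
        {"id": "SS-001", "severity": "HIGH", "rule": "credential-exfiltration"},
        {"id": "SS-002", "severity": "LOW", "rule": "broad-file-read"},
    ]
}

# Severities that should stop the merge.
BLOCKING = {"HIGH", "CRITICAL"}

def gate(report: dict) -> int:
    """Return a CI exit code: 1 blocks the merge, 0 lets it pass."""
    blocking = [f for f in report["findings"] if f["severity"] in BLOCKING]
    for f in blocking:
        print(f"BLOCK {f['id']}: {f['rule']} ({f['severity']})")
    return 1 if blocking else 0

exit_code = gate(report)  # a real CI step would call sys.exit(exit_code)
print("merge blocked" if exit_code else "merge allowed")
```

Because CI systems treat a non-zero exit code as failure, returning 1 on any blocking finding is enough to stop the merge in GitHub Actions or GitLab CI.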
RESEARCH
14 March 2026
•
5 min read
When you authenticate to a service via an MCP server, who exactly is making that request?
Read the research
ANALYSIS
13 March 2026
•
7 min read
Container isolation protects the host from the agent. Skill scanning protects the agent from the skill. Learn why production AI agents need both security layers.
Read the analysis
GUIDE
13 March 2026
•
5 min read
You install an MCP server from npm. The code looks clean — no suspicious network calls, passes npm audit, the tool definitions look standard. You run it. Your AI agent behaves unexpectedly...
Read the guide
GUIDE
12 March 2026
•
5 min read
Before installing any AI agent skill from GitHub, run through this checklist. It takes 2 minutes and could save you hours of security cleanup.
Read the guide
GUIDE
12 March 2026
•
5 min read
Claude Code, GitHub Copilot, and OpenAI Codex all use the same portable skill format: SKILL.md. Anyone can create one. Anyone can publish it to GitHub. And anyone can install it.
Read the guide
GUIDE
12 March 2026
•
5 min read
Hard-coded API keys in MCP skill definitions are one of the most common and least visible security issues in AI agent deployments. Here's how SkillShield finds what git scanners miss.
Read the guide
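To illustrate the general idea, here is a minimal pattern-based scan of a skill definition for hard-coded credentials. The regexes and the sample skill text are illustrative assumptions, not SkillShield's actual rule set.

```python
import re

# Sample detectors for common hard-coded credential shapes (illustrative only).
SECRET_PATTERNS = {
    "openai_key": re.compile(r"sk-[A-Za-z0-9]{20,}"),
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_assignment": re.compile(
        r"(?i)(api[_-]?key|token|secret)\s*[:=]\s*['\"][^'\"]{8,}['\"]"
    ),
}

def scan_skill_text(text: str) -> list:
    """Return (pattern_name, matched_text) pairs found in a skill definition."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((name, match.group(0)))
    return hits

# A skill definition embedding a key inline instead of reading it from the env.
sample = 'env:\n  OPENAI_API_KEY: "sk-abcdefghijklmnopqrstuv"\n'
for name, matched in scan_skill_text(sample):
    print(name, "->", matched)
```

Git-history scanners only see committed files; scanning the rendered skill definition catches keys that arrive through templates or generated manifests.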
GUIDE
12 March 2026
•
5 min read
SkillShield and sandboxing tools like Agent Safehouse solve different AI agent security problems. A practical FAQ on threat models, implementation order, integration, and risk scoring.
Read the guide
GUIDE
12 March 2026
•
5 min read
OpenClaw skills can exfiltrate data, harvest credentials, and escalate privileges — before you ever notice. This guide shows how to integrate SkillShield into your OpenClaw workflow with pre-install hooks, CI/CD pipelines, and batch audits.
Read the guide
GUIDE
12 March 2026
•
5 min read
allowed-tools (or ## Tools in SKILL.md format) is the permission boundary that defines what an AI agent skill is allowed to do.
Read the guide
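A minimal sketch of how that boundary can be enforced, assuming a SKILL.md with a "## Tools" bullet list; the parsing details are assumptions for illustration.

```python
def parse_allowed_tools(skill_md: str) -> set:
    """Collect bullet entries under a '## Tools' heading in SKILL.md text."""
    allowed, in_tools = set(), False
    for line in skill_md.splitlines():
        if line.startswith("## "):
            # Entering a new section; only '## Tools' enables collection.
            in_tools = line.strip().lower() == "## tools"
            continue
        if in_tools and line.lstrip().startswith("- "):
            allowed.add(line.lstrip()[2:].strip())
    return allowed

def is_permitted(tool: str, allowed: set) -> bool:
    """The permission boundary: any tool outside the list must be refused."""
    return tool in allowed

skill = """# Example skill

## Tools
- read_file
- http_get

## Instructions
Summarize the fetched page.
"""

allowed = parse_allowed_tools(skill)
print(sorted(allowed))                      # tools the skill may use
print(is_permitted("exec_shell", allowed))  # anything else is denied
```

The key property is deny-by-default: a tool name that never appears under the heading is rejected, no matter what the skill's instructions later ask for.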
GUIDE
11 March 2026
•
5 min read
AI agents can fail in 11 distinct, documented ways. Here's every attack type — from prompt injection to cross-agent propagation — with the research behind each one and what you can do about it.
Read the guide
ANALYSIS
10 March 2026
•
5 min read
Seven MCP security scanners now compete to protect AI agent skills. We compare SkillShield, Vett, Aguara, Tork-scan, armor1.ai, Snyk Agent Scan, and JadeGate — coverage, methods, and which to use.
Read the analysis
HIGH
10 March 2026
•
5 min read
Indirect PII extraction is one of the hardest AI agent security problems to catch. Here's how attackers pull private data out of agents without ever asking for it directly — and what you can actually do about it.
Read the analysis
ANALYSIS
9 March 2026
•
5 min read
Filesystem sandboxing constrains what an AI agent can access. Skill vetting constrains what it will run. These are different threat models — here's why production agents need both.
Read the analysis
HIGH
9 March 2026
•
5 min read
Cross-agent propagation is what happens when a single compromised skill infects other agents in your stack without ever touching the chat layer. Here's how it works, why it's worse than prompt injection, and what you can actually do about it.
Read the analysis
HIGH
9 March 2026
•
5 min read
Identity spoofing in AI agents happens when one agent impersonates another to gain elevated permissions or extract data. Here's how it works and why it's hard to detect.
Read the analysis
ANALYSIS
8 March 2026
•
5 min read
A 38-author paper used OpenClaw to red-team live AI agents for two weeks. Here are the 11 AI agent security vulnerabilities it exposed — and which ones SkillShield can test today.
Read the analysis
ANALYSIS
8 March 2026
•
5 min read
AgentSeal tests your agent's runtime behavior. SkillShield scans what your agent installs. Here's why you need both — and what each one actually catches.
Read the analysis
HIGH
7 March 2026
•
5 min read
Running a scan doesn't make AI agent skills safe. Here's why point-in-time checks fail — and what continuous AI agent skill security monitoring actually requires.
Read the analysis
CRITICAL
5 March 2026
•
5 min read
An independent OWASP Agentic audit of the OpenClaw ecosystem found 2,200 malicious skills and 9 CVEs. Static scanning caught 35 issues per skill. Runtime is where the real risk lives.
Read the analysis
CRITICAL
5 March 2026
•
5 min read
A malicious tool response can silently redirect an AI agent's actions — exfiltrating data, triggering unintended calls, or corrupting entire reasoning chains. Here's how tool poisoning works and what to log.
Read the analysis
HIGH
3 March 2026
•
5 min read
Malicious MCP servers are hiding in unofficial registries. Here's the supply chain risk surface plugin marketplace operators need to address now.
Read the analysis
GUIDE
6 February 2026
•
5 min read
Learn how to vet AI skills before installing them. Our AI skill security checklist stops malicious plugins. Protect your agent from the 12% of skills that are malicious.
Read the guide
EDUCATION
6 February 2026
•
5 min read
The OWASP AI security framework for agents, explained. Discover the top 10 AI agent vulnerabilities with real-world examples. Protect your systems today.
Read the analysis
HIGH
6 February 2026
•
5 min read
Real prompt injection examples from Moltbook and ClawHub show AI security threats targeting agents at scale. Learn detection tactics and protect your system now.
Read the analysis
CRITICAL
6 February 2026
•
5 min read
ClawHub security exposed: a 32.6% CRITICAL risk rate. AI marketplace vulnerabilities include credential theft and zero vetting. Scan your skills now.
Read the analysis
HIGH
6 February 2026
•
5 min read
386 fake crypto skills. One C2 server. 7,000+ downloads before anyone noticed. Inside the largest malicious AI skill campaign found on ClawHub — and what to look for.
Read the analysis
CRITICAL
6 February 2026
•
5 min read
We ran an AI skill security scan on 1,676 agent skills — 12% were malicious, and 461 scored CRITICAL. Scan your skills before they scan you.
Read the analysis
CRITICAL
6 February 2026
•
5 min read
Our AI skill scanner found 461 critical threats before Meller's report. A SkillShield review confirms it: 12% of AI skills are malicious. Scan yours now.
Read the analysis