Audited on 500+ open-source agents · 20+ frameworks · Open source

The security scanner for AI agents

Scan Microsoft, Google ADK, Python, LangChain, CrewAI, MCP servers, Skills and 20+ frameworks for agent-specific vulnerabilities — in 60 seconds.

Free forever for up to 5 scans/month. CLI + MCP server open source (Apache 2.0).

Inkog Verify — scan completeCRITICAL
CRITICALCWE-74 · OWASP LLM01

Prompt Injection

agent/chains.py:42

Untrusted user input flows directly into the LLM prompt. An attacker can inject instructions to override the agent or exfiltrate data.

vulnerable
prompt = f"Answer this query: {user_input}"
response = llm.invoke(prompt)
fix
prompt = TEMPLATE.format(query=sanitize(user_input))
response = llm.invoke(prompt)
Maps toEU AI Act Art. 15NIST AI RMFCWE-74
ScannedDamnVulnerableLLMProject·2.3s·5 more findings

Built for the modern AI stack

One scanner. Every framework.

LangChainLangChainOpenAIOpenAICrewAICrewAIGoogle ADKGoogle ADKLangGraphLangGraphMicrosoftMicrosoftAutoGenAutoGenAnthropicAnthropicPydanticAIPydanticAILlamaIndexLlamaIndexn8nn8nHuggingFaceHuggingFaceLangChainLangChainOpenAIOpenAICrewAICrewAIGoogle ADKGoogle ADKLangGraphLangGraphMicrosoftMicrosoftAutoGenAutoGenAnthropicAnthropicPydanticAIPydanticAILlamaIndexLlamaIndexn8nn8nHuggingFaceHuggingFace

Three commands. Full security audit.

1

Install

2

Scan

3

Ship with confidence

Get severity-ranked findings, compliance mapping, and remediation guidance.

Or use our MCP server inClaudeClaude &CursorCursor

New · MCP server

Build secure AI agents with Claude, Cursor, and Claude Code

Connect the Inkog MCP server and ask your AI assistant to scan, explain, and fix agent security issues — without ever leaving the conversation.

Scan during development

"Scan this agent for security issues." Findings come back in the same chat.

Explain & fix in-flow

"Explain this finding and apply the fix." No tab switching, no CLI.

Verify governance

"Does my AGENTS.md match the code?" Only Inkog answers this.

Install the MCP server
New

Agent Capability Surface

One score, three layers, every gap mapped to a regulation. The first inventory that tells you what your agents can do, what your AGENTS.md says they should do, and where the controls are missing.

CANcapabilities

Every tool, MCP server, delegation, memory access, and credential the agent can reach. Extracted by the Universal IR across 15 frameworks.

SHOULDdeclarations

Every line of AGENTS.md, parsed across YAML front matter, markdown sections, and inline annotations into typed declarations.

ENFORCEDcontrols

Every control wired in code: human approval, authorization, audit log, rate limit, cycle guard, sanitizer. Indexed against the capability it protects.

91/100
Governance Score
Microsoft AutoGen
multi-agent framework, scanned at HEAD
4 agents, 18 tools, 3 high-severity gaps
See how the score is computed

What you'll see in 60 seconds

Paste a GitHub URL or upload a zip. No install, no config. Here's what comes back.

DamnVulnerableLLMProjectCore + Deep
43
Risk Score/100
3 High · 1 Med · 2 Low
Unvalidated Code Execution in Agent ToolHIGH

Agent tool executes code via eval/exec where input can be influenced by LLM output or prompt injection.

19-output = subprocess.check_output(command, shell=True)
19+output = subprocess.check_output(shlex.split(command), shell=False)
process.py:19·CWE-94CWE-95
Missing Human OversightHIGH

Destructive tool (database write) fires without approval gate

agent/tools.py:45
Recursive Tool CallingHIGH

Tool chain fans out without bounded iteration limit

agent/graph.py:23
Prompt Injection SinkMEDIUM

RAG output flows into system prompt without sanitization

agent/chains.py:67
EU AI Act Art. 14: Human OversightEU AI Act Art. 15: RobustnessLLM02: Insecure Output HandlingNIST AI RMF: Measure

Start scanning in 60 seconds

Free · No setup required · Instant results

30 min · Live Deep Scan on your code · Walkthrough of every finding