DEV Community

Cover image for GitHub’s New MCP Can Spill Your Secrets—No Hacking Required
Kevin Raposo for KnowTechie

Posted on

GitHub’s New MCP Can Spill Your Secrets—No Hacking Required

GitHub’s Model Context Protocol (MCP) just landed in hot water, thanks to a newly discovered vulnerability that lets attackers trick AI agents into leaking private repository information.

Security researchers Marco Milanta and Luca Beurer-Kellner stumbled on an exploit where an attacker can file a sneaky issue in a public repo.

If a user asks an LLM agent connected to MCP to “check the issues,” the agent follows the attacker’s instructions—like digging into all the user’s private repos—and then exposes that info in a public pull request.

No malware, no brute force, just a well-crafted prompt and a bit of bad architecture.

Here’s the kicker: this isn’t a bug in the code—it’s a design flaw

The holy trinity for prompt injection attacks

Image image showing prompt to hack Github MCP

According to DevClass, the MCP server gives LLMs access to private data, lets them process attacker-controlled prompts, and allows them to exfiltrate information, all at once.

Security folks are already warning that there’s no obvious fix in sight. The only advice? If you’re using MCP, treat it like a loaded gun around anything private.

The attack doesn’t require elite skills—just a clever issue and a bit of trust in the wrong place (Invariant Labs).

As of now, GitLab hasn’t released an official statement or any mitigation. So if you’re experimenting with MCP, keep your secrets close and your AI agents on a tight leash.

Top comments (0)