Developers hunting for an AI productivity boost recently stumbled into a trap. Hackers uploaded a fake extension to the Visual Studio Code marketplace, posing as the Moltbot AI assistant. Instead of delivering helpful coding support, the extension quietly deployed a trojan designed to open the door wide for attackers.
First, let’s explain what Moltbot is. It’s an open source AI assistant, similar to ChatGPT, that runs locally on a user’s computer rather than in the cloud like some of the more familiar AI assistants. You may have known it by its former name, Clawdbot; it was renamed to avoid a conflict with a similarly named AI assistant.
In this particular attack, the malicious add-on used remote desktop capabilities and layered malware loaders, making detection very difficult. Once installed, it could give threat actors persistent access to a victim’s machine, turning a development environment into a launchpad for deeper compromise.
Fortunately, the attack was identified and stopped quickly. The rogue extension was removed before it could spread widely. However, the incident triggered security warnings, and Moltbot’s website was temporarily flagged as dangerous while the situation was sorted out.
The episode is another reminder that even trusted platforms can be abused. Developers should verify publishers, review download counts and ratings, and be cautious with newly released extensions that lack a track record.
In today’s software supply chain, one careless click can turn helpful AI into harmful access.
UPDATE:
1Password, Hudson Rock, and Token Security have all highlighted serious risks associated with using Moltbot. They warn that its extensive, unrestricted access to sensitive enterprise systems—especially when it runs on unmanaged personal devices outside traditional security boundaries—can create high-impact points of compromise if misconfigured.
Token Security reports that 22% of its customers have employees actively using Moltbot within their organizations. The firm notes that the platform’s lack of sandboxing, combined with its use of plaintext storage for “memories” and credentials, makes it an appealing target for attackers seeking to extract sensitive corporate data.
According to 1Password, compromising a device running Moltbot requires little sophistication. Modern infostealer malware is designed to scan common directories and quickly exfiltrate anything resembling credentials, tokens, session logs, or developer configurations. If the agent stores API keys, webhook tokens, transcripts, or long-term memory in plaintext in predictable locations, attackers can retrieve this data within seconds.
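The same directory-scanning approach infostealers use can be turned around defensively: teams can audit an agent’s data directory for secret-like strings before an attacker does. The sketch below is a minimal, hypothetical example — the `~/.moltbot` path and the regex patterns are assumptions for illustration, not Moltbot’s actual layout.

```python
import re
from pathlib import Path

# Hypothetical patterns for secret-like strings; real audits would use a
# dedicated scanner with far more rules (and entropy checks).
SECRET_PATTERNS = {
    "api_key": re.compile(r"\b(?:sk|api|key)[-_][A-Za-z0-9]{20,}"),
    "bearer_token": re.compile(r"\bBearer\s+[A-Za-z0-9._-]{20,}"),
}

def audit_directory(root: Path) -> list[tuple[str, str]]:
    """Return (file path, pattern name) pairs for files that contain
    plaintext secret-like strings anywhere under the given directory."""
    findings = []
    for path in root.rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # unreadable file: skip rather than crash the audit
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(text):
                findings.append((str(path), name))
    return findings

if __name__ == "__main__":
    # Assumed agent data directory -- adjust to wherever the agent
    # actually persists its configuration and memory.
    root = Path.home() / ".moltbot"
    if root.is_dir():
        for file, kind in audit_directory(root):
            print(f"plaintext {kind} found in {file}")
```

Anything such a scan flags is, by the same logic 1Password describes, retrievable by any infostealer that lands on the box — the fix is to move those values into an OS keychain or secrets manager rather than leaving them on disk.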
Hudson Rock further reports observing targeted adaptations in major malware-as-a-service (MaaS) families such as RedLine, Lumma, and Vidar, specifically engineered to exploit these directory structures for data theft.
The firm emphasizes that the threat extends beyond simple credential theft. This type of data exposure enables what it describes as “cognitive context theft”—where attackers gain insight into workflows, logic, and operational memory. More concerning is the potential for “agent hijacking.” If an attacker gains write access—such as through a remote access trojan (RAT) deployed alongside an infostealer—they can manipulate stored data, effectively carrying out “memory poisoning” to alter the agent’s behavior.
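One practical mitigation against the memory-poisoning scenario Hudson Rock describes is integrity monitoring: snapshot digests of the agent’s memory files after reviewing them, store the baseline somewhere the agent (and any RAT riding alongside it) cannot write, and re-check before trusting the agent’s context. A minimal sketch, assuming a generic on-disk memory directory rather than any specific Moltbot layout:

```python
import hashlib
from pathlib import Path

def snapshot(root: Path) -> dict[str, str]:
    """Record a SHA-256 digest for every file under the agent's
    memory directory, keyed by path relative to that directory."""
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(root.rglob("*"))
        if p.is_file()
    }

def detect_tampering(root: Path, baseline: dict[str, str]) -> list[str]:
    """Return files whose contents changed, appeared, or disappeared
    since the baseline snapshot was taken."""
    current = snapshot(root)
    changed = [f for f, digest in current.items() if baseline.get(f) != digest]
    removed = [f for f in baseline if f not in current]
    return sorted(set(changed) | set(removed))
```

A mismatch does not prove an attack — the agent legitimately rewrites its own memory — but an unexpected change to a file the operator has not interacted with is exactly the write-access signal worth investigating before the poisoned context steers the agent’s behavior.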