Malicious VSCode AI Assistants Directly Target Developers
Over a million developers are affected.
AI is everywhere now and AI coding assistants are becoming just as abundant as coffee shops. They can suggest code, explain errors, write functions, and review pull requests. Just about every developer marketplace is flooded with them - ChatGPT wrappers, Copilot alternatives, code completion tools promising to 10x your productivity.
Most developers probably install these tools without a second thought since they are in the official marketplace. Many have thousands of reviews and, most importantly, they work. So, they grant them access to their workspaces, files, keystrokes - and assume the tools only use that access to help them code. Unfortunately, that is not always the case.
On January 22, security researchers at Koi Security published findings showing that two Visual Studio Code (VSCode) AI coding assistants are malicious. The two extensions in question are:
ChatGPT – 中文版 (publisher: WhenSunset, 1.34 million installs)
ChatMoss (CodeMoss) (publisher: zhukunpeng, 150k installs)
Both are marketed as AI coding assistants, and both operate exactly as advertised. However, both were found to contain identical spyware that sends everything in your workspace to servers in China. The researchers at Koi Security named this malicious campaign MaliciousCorgi.
On the surface, the functionality of these assistants is normal. The user can select code, ask a question, and get a helpful AI-powered response. The extension also provides inline autocomplete, just like GitHub Copilot: as you type, it reads about 20 lines of context around your cursor and sends them to the AI server for suggestions. This is normal and expected behavior, since an AI coding assistant needs to read some of your code in order to help you write more code.
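To make the legitimate behavior concrete, here is a minimal sketch of how an autocomplete feature might gather roughly 20 lines of context around the cursor. The function name and window radius are illustrative assumptions, not code from any real extension.

```typescript
// Illustrative sketch (not any extension's actual source): gather a window of
// about 20 lines centered on the cursor, which is what a well-behaved AI
// autocomplete sends to its server.
function contextAroundCursor(lines: string[], cursorLine: number, radius: number = 10): string[] {
  // Clamp the window to the start and end of the file.
  const start = Math.max(0, cursorLine - radius);
  const end = Math.min(lines.length, cursorLine + radius);
  return lines.slice(start, end);
}
```

The key point is the cap: only a small slice of the file ever leaves the editor, no matter how large the file is.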
Three Channels of Information Collection
The researchers identified three channels of operation. The first channel watches every file you touch. The extension registers two listeners, `onDidOpenTextDocument` and `onDidChangeTextDocument`, so not just files you edit, but every file you open is read, encoded in Base64, and sent through a hidden iframe. Every character you type triggers another transmission. Normal AI assistants send approximately 20 lines of context around your cursor; these extensions send the entire file, every single time.
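The pattern can be sketched as follows. This is a reconstruction of the behavior Koi Security describes, not the extensions' actual code; a real extension would wire a function like this to `vscode.workspace.onDidOpenTextDocument` and `onDidChangeTextDocument` and call it with `document.getText()`.

```typescript
// Reconstructed sketch of the exfiltration pattern, NOT the extensions' source:
// on every open/change event, the ENTIRE document text is Base64-encoded for
// transmission - not a small context window around the cursor.
function encodeWholeFile(fullText: string): string {
  // Base64 makes arbitrary file contents safe to smuggle inside a URL or
  // message posted to a hidden iframe.
  return Buffer.from(fullText, "utf8").toString("base64");
}
```

Because the listener fires on every keystroke, the full file is re-encoded and re-sent continuously as you work.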
The second channel is worse: a mass file harvesting mechanism that can send back your files whenever it wants, without you doing anything. It is triggered remotely through server responses. The extension can harvest up to 50 files from the developer's workspace and send them out without the developer ever noticing.
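A harvesting routine of this kind is simple to build. The sketch below is a reconstruction under stated assumptions (the 50-file cap comes from the report; the function name and traversal order are invented for illustration): it recursively walks a workspace and collects file paths until the cap is hit.

```typescript
import * as fs from "fs";
import * as path from "path";

// Reconstructed sketch of the remote-triggered harvesting behavior described
// above (not the extensions' actual code): recursively collect file paths
// under a workspace root, stopping once the cap is reached.
function harvestWorkspace(root: string, limit: number = 50): string[] {
  const collected: string[] = [];
  const walk = (dir: string): void => {
    for (const entry of fs.readdirSync(dir, { withFileTypes: true })) {
      if (collected.length >= limit) return; // stop once the cap is hit
      const full = path.join(dir, entry.name);
      if (entry.isDirectory()) {
        walk(full);
      } else if (entry.isFile()) {
        collected.push(full);
      }
    }
  };
  walk(root);
  return collected;
}
```

Nothing here requires user interaction; a server response saying "harvest now" is all it takes, which is why this channel is invisible to the developer.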
The third channel is a profiling engine. The malicious actors actively build a profile on the unsuspecting developer to determine if they and their code are a good target. A zero-pixel invisible iframe loads four commercial analytics platforms: Zhuge.io, GrowingIO, TalkingData, and Baidu Analytics. The page title in the source code is “ChatMoss数据埋点” which translates to “ChatMoss Data Tracking.” These platforms track your behavior, fingerprint your device, and figure out where you work and what you are working on. They are figuring out whose code is worth stealing.
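The mechanics of a zero-pixel iframe are trivial, which is part of why this channel is so hard to spot. The sketch below shows the general markup pattern; the URL is a placeholder, not one of the actual analytics endpoints.

```typescript
// Sketch of a zero-pixel tracking iframe like the one described (placeholder
// URL, not a real analytics endpoint). The frame renders nothing visible,
// but its page still loads analytics scripts and fingerprints the device.
function buildTrackingIframe(src: string): string {
  return `<iframe src="${src}" width="0" height="0" style="display:none;border:0"></iframe>`;
}
```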
Timeframe
Both malicious VSCode AI assistants have been available for over a year. Microsoft is currently investigating both extensions, but at the time of this writing both are still available in the Visual Studio Marketplace. Suspicions that these extensions could be malicious were raised as early as October 2025, but they were not acted upon. Removing both extensions immediately is now strongly advised due to the high security risk they pose.
Huge Risks
The dangers for developers are real, because the assets they have access to are highly valuable to malicious actors. Developers have access to the code, the servers, and more. To outline just a few things that hackers want to obtain directly from developers:
The .env files with API keys and database passwords.
Config files with server endpoints.
Cloud credentials.
SSH keys.
Proprietary source code.
Features you have not shipped yet.
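As a quick defensive exercise, a developer can enumerate which of these high-value files sit in a workspace that an extension could read. This is a minimal sketch; the filename patterns are illustrative assumptions, not an exhaustive list.

```typescript
import * as fs from "fs";
import * as path from "path";

// Defensive sketch: list high-value files from the categories above that sit
// in a workspace directory. The pattern list is an illustrative assumption.
const SENSITIVE_PATTERNS: RegExp[] = [
  /^\.env($|\..*)/,   // .env, .env.local, ...
  /^id_rsa$/,         // SSH private key
  /^id_ed25519$/,     // SSH private key
  /\.pem$/,           // certificates and keys
  /^config\..*/,      // config files with endpoints
];

function findSensitiveFiles(root: string): string[] {
  return fs
    .readdirSync(root)
    .filter((name) => SENSITIVE_PATTERNS.some((re) => re.test(name)))
    .map((name) => path.join(root, name));
}
```

Anything this sketch finds is exactly what a malicious extension with workspace access could have already exfiltrated.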
Recommendations
Developers are prized targets, so all of their environments should have a full stack of security tools, including full network SOC monitoring. Unfortunately, this practice largely seems to be ignored. The proof is plain: with so much data going to Chinese servers, SOC monitoring would have noticed the traffic - but only if it was present. With so many developers working remotely, in largely unmonitored and unsecured environments, nobody would notice this type of attack until a security researcher pieced it all together. By then it is too late. Endpoint protection software alone would not be enough to even notice this type of attack.
Developers who used these malicious VSCode AI assistants will need to clean their environments, rotate their SSH keys, change all their credentials, spin up new servers - and that may be just the beginning.
For more information refer to the original article by Koi Security.

