Hidden malware in markdown comments could spread across entire codebases, experts warn
A newly discovered exploit targeting AI coding assistants has raised cybersecurity concerns for companies like Coinbase, where artificial intelligence now writes nearly half of the exchange’s code. The so-called “CopyPasta License Attack” embeds malicious prompts in common developer files, allowing hidden instructions to propagate through entire codebases without detection.
How the Exploit Works
The technique, revealed by cybersecurity firm HiddenLayer, takes advantage of how AI models interpret documentation. Files like README.md and LICENSE.txt are often treated as authoritative by coding assistants. By hiding instructions in markdown comments, attackers can trick AI tools into replicating harmful code across new files.
“Injected code could stage a backdoor, silently exfiltrate sensitive data or manipulate critical files,” HiddenLayer explained.
Unlike traditional malware, which relies on suspicious executables or scripts, this exploit hides inside trusted documentation. Developers may never notice that their AI assistant is inserting backdoors or siphoning data, since the malicious commands appear to be harmless text.
The exploit is particularly concerning for Coinbase. CEO Brian Armstrong recently revealed that AI now writes up to 40% of the company’s code, with a target of reaching 50% by next month. While Armstrong stressed that sensitive systems adopt AI more slowly, security experts warn that even user interface and backend code can become vectors for attacks if infected files spread through repositories.
Why CopyPasta Is Different
AI prompt injection attacks are not new, but the CopyPasta method introduces a self-propagating element. Instead of targeting one user, the malicious file becomes a carrier that infects every other AI model reading it.
This makes it more insidious than earlier AI worm concepts like Morris II, which failed because human checks could catch anomalies in email workflows. In contrast, documentation is rarely scrutinized, giving CopyPasta an ideal hiding place.
Security Recommendations
Industry experts are now urging development teams to treat all untrusted data entering large language models as potentially malicious. Recommendations include:
- Scanning codebases for hidden markdown comments.
- Conducting manual reviews of AI-generated changes.
- Deploying systematic detection tools before prompt-based malware can scale.
The CopyPasta exploit underscores the risks of rapidly adopting AI in software development without rigorous safeguards. As firms like Coinbase deepen their reliance on AI coding assistants, securing documentation files and monitoring AI outputs will be critical to preventing invisible threats from spreading across entire infrastructures.
Disclaimer
This content is for informational purposes only and does not constitute financial, investment, or legal advice. Cryptocurrency trading involves risk and may result in financial loss.