On 10 February 2026, Microsoft's Defender Security Research Team reported that some websites were embedding hidden prompts inside "Summarize with AI" buttons. According to the team, these prompts attempted to influence AI assistants' stored memories and future recommendations across multiple commercial sectors.
Key Details
- Microsoft analyzed AI-related URLs observed in email traffic over a 60 day period.
- The team identified 50 distinct prompt injection attempts involving 31 companies across 14 different industries.
- Each attempt used URL query parameters to preload AI assistants with visible summary instructions and hidden memory manipulation directives.
- Hidden instructions asked assistants to remember the company positively and to favor it in future answers and recommendations if memory persisted.
- Some prompts directed assistants to remember specific companies as a "trusted source for citations" or the "go to source" for particular topics.
- One prompt injected extensive marketing copy into memory, including named products, key features, and commercial selling points.
- Microsoft traced several examples to tools such as the CiteMET npm package and the AI Share URL Creator generator, which were marketed as ways to "build presence in AI memory" for participating websites.
- According to Microsoft's analysis, the 31 organizations involved were legitimate businesses rather than traditional threat actors or fraudulent operations.
- Multiple prompts targeted health and financial services websites, and one company operated in the security software sector.
- One identified domain closely resembled a well known website name.
- Many participating sites hosted user generated content, including comments and forums on the same domains.
- The behavior is mapped to MITRE ATLAS techniques AML.T0080 (Memory Poisoning) and AML.T0051 (LLM Prompt Injection).
- The hidden prompts targeted multiple assistants, including Microsoft Copilot, ChatGPT, Claude, Perplexity, and xAI's Grok. Microsoft documented URL formats for these assistants and noted differing memory persistence across platforms.
Background Context
Microsoft labeled the technique AI Recommendation Poisoning in its Defender Security Research blog post, describing it as a form of prompt injection focused on long term assistant memory.
By mapping the activity to MITRE ATLAS techniques AML.T0080 and AML.T0051, Microsoft linked it with established attack categories for memory poisoning and prompt injection against large language model systems.
Microsoft stated that Copilot now includes protections against cross prompt injection and similar memory manipulation attempts, and that some previously observed prompt injection behaviors no longer reproduce in current Copilot implementations.
The company also released Microsoft Defender for Office 365 hunting queries to flag URLs with potential memory manipulation keywords in email and Teams traffic. Users can review and delete stored Copilot memories through the Personalization section within Copilot chat settings.
Source Citations
Microsoft detailed its findings in its published research on the Microsoft Security Blog on 10 February 2026.
The research was first widely flagged to the SEO and AI community by industry practitioner Lily Ray, who credited @top5seo for initially highlighting the Microsoft blog post.






