Google’s NotebookLM Indirect Prompt Injection – fix

What’s Google’s NotebookLM Google’s NotebookLM is an experimental project that was released last year. It allows users to upload files and analyze them with a large language model (LLM). NotebookLM is an experimental product designed to use the power and promise of language models paired with your existing content to gain critical insights, faster. Think […]
Google AI Studio Data Exfiltration via Prompt Injection — Fix

What’s Google AI Studio Google AI Studio is a browser-based IDE for prototyping with generative models. Google AI Studio lets you quickly try out models and experiment with different prompts. When you’ve built something you’re happy with, you can export it to code in your preferred programming language and use the Gemini API with it. […]
Indirect Prompt Injection Vulnerability with Slack AI

What’s Slack AI Office and Team Collaboration are typical scenarios for the application of AI technology, and besides Slack, many companies are offering similar products and features, such as: Microsoft Copilot 365 AliBaBa Dingtalk Tiktok Lark Bytedance Feishu Gooogle Gemini Vulnerability background First, unfurling refers to an application expanding (retrieving) a hyperlink automatically to show […]
Planting Delayed Trigger Indirect Prompt Injection — A new attack surface for RAG/AI Assistant/Copilot

What’s Delayed Trigger Attack Delay attacks are very common and highly covert attack method in traditional web security and cloud security. It usually refers to a network attack technique in which attackers inject malicious code or instructions into the target system and delay the injection for a period of time before triggering malicious behavior. The […]