Google’s NotebookLM Indirect Prompt Injection – fix

What’s Google’s NotebookLM Google’s NotebookLM is an experimental project that was released last year. It allows users to upload files and analyze them with a large language model (LLM). NotebookLM is an experimental product designed to use the power and promise of language models paired with your existing content to gain critical insights, faster. Think […]
Google AI Studio Data Exfiltration via Prompt Injection — Fix

What’s Google AI Studio Google AI Studio is a browser-based IDE for prototyping with generative models. Google AI Studio lets you quickly try out models and experiment with different prompts. When you’ve built something you’re happy with, you can export it to code in your preferred programming language and use the Gemini API with it. […]
Indirect Prompt Injection Vulnerability with Slack AI

What’s Slack AI Office and Team Collaboration are typical scenarios for the application of AI technology, and besides Slack, many companies are offering similar products and features, such as: Microsoft Copilot 365 AliBaBa Dingtalk Tiktok Lark Bytedance Feishu Gooogle Gemini Vulnerability background First, unfurling refers to an application expanding (retrieving) a hyperlink automatically to show […]
Planting Delayed Trigger Indirect Prompt Injection — A new attack surface for RAG/AI Assistant/Copilot

What’s Delayed Trigger Attack Delay attacks are very common and highly covert attack method in traditional web security and cloud security. It usually refers to a network attack technique in which attackers inject malicious code or instructions into the target system and delay the injection for a period of time before triggering malicious behavior. The […]
Exploring LLMs(OepnAI) Data Visualization Feature (Code Interpreter), Sandbox Escape

Background Many LLMs support solve math equations and draw charts based on data. What does this mean and why is it interesting? It means that LLMs has access to a computer and can run more complex programs, including Python code that plots graphs! Let’s explore this with a simple example. Drawing Charts with Google Gemini […]
ZHIPU AI Video Call Prompt Jailbreak Vulnerability

Background Recently, the video call function of ZhipuQingyan App has been fully opened to all users. Zhipu calls it an AI product with “eyes”: not limited to typing and voice interaction, there is no need to worry about machine broadcast modes, and it will not feel stiff or cold. The main functions are as follows: […]
You want to improve your resume approval rate? Just use Prompt Injection for your resume

Background In order to improve the efficiency of the HR department, companies are screening your resumes and documents using AI. But there is a way you can still stand out and get your dream job: Prompt Injection. You can inject invisible text into your PDF that will make any AI language model think you are […]
Using FGSM to generate adversarial example

What is an adversarial example? Adversarial examples are specialised inputs created with the purpose of confusing a neural network, resulting in the misclassification of a given input. These notorious inputs are indistinguishable to the human eye, but cause the network to fail to identify the contents of the image. There are several types of such […]
Massive Data Exfiltration Techniques with Coze

The Limitations of Direct Image/Markdown URL Data Exfiltration During an Indirect Prompt Injection Attack an adversary can exfiltrate chat data from a user by instructing ChatBot to render images and append information to the URL (Image Markdown Injection), or by tricking a user to click a hyperlink, like this. However, this method is similar to […]
LLM Apps Plugin-in DDOS Risk: Don't Get Stuck in an Infinite Loop!

What happens if an attacker calls an LLM tool or plugin recursively during an Indirect Prompt Injection? Could this be an issue and drive up costs, or DoS a system? I tried it with Coze, and it indeed works and the Chatbot enters a loop! 😄 However, for Coze users this isn’t really a threat, […]