How to prevent LLM Jailbreak attacks

Why this happened? Generative AI brings incredible new capabilities to businesses. But it also creates entirely new types of security risks, such as prompt injection or llm jailbreak, which threat actors can employ to abuse applications and services that leverage generative AI. That’s why having a plan to protect against llm jailbreak is critical for […]

Artificial Intelligence: The new attack surface

Background Anytime something new comes along, there’s always going to be somebody that tries to break it. AI is no different and this is why it seems we can’t have nice things. In fact, we’ve already seen more than 6000 research papers, exponential growth, that have been published related to adversarial AI examples. Now in […]