GenAI Security Research

Introducing Vision To The Fine-Tuning API
Developers Can Now Fine-Tune GPT-4o With Images And Text To Improve Vision Capabilities
What an Incredible Evening at the AI x Security Summit!
On October 10th, 2024, I spent an incredible evening at Antler Singapore.
S-tron China - S-Talent Talk
On September 20-21, 2024, I spent an unforgettable two days at S-tron China, held at the West Bund Art Center in Shanghai.

How ChatGPT Can Lead to Malicious Code Spread

Research reveals that attackers exploit ChatGPT to disseminate malicious packages

How Hallucinations Impact Large Language Models

Monitoring hallucinations in large language models (LLMs) is crucial for

Assessing Language Model Deployment with Risk Cards

Introduction When establishing documentation, reporting or auditing standards, we need

Key Updates in OWASP Top 10 for LLM Applications 2025

Large Language Models (LLMs) face significant security challenges, with many

Key Insights on LLM Evaluation and Vulnerability Testing

Since I shifted my focus from cloud security to LLM

How to conduct LLM Evaluation: Key Metrics and Best Practices

Why need LLM Evaluation? Artificial intelligence technology has yielded exceptional

Subscribe to the TrustAI Newsletter

Get our latest GenAI/LLM security research.

Join AISecX - AI Security Discord Community

Join AISecX and work towards a secure AI era. We're building a safer future together, so be part of it!