Glossary
AI Compliance
Prompt injection
A type of adversarial attack where a user inputs a malicious or cleverly structured natural language prompt designed to trick a Large Language Model (LLM) into ignoring its internal safety instructions or performing unintended actions (like revealing sensitive information).

















