Adversarial Example
An input (an image, text snippet, or other data point) that has been intentionally and subtly perturbed so that it appears unchanged to a human observer, yet causes a machine learning model to make an incorrect classification or decision (e.g., slightly altering the pixels of a stop sign image so a self-driving car's vision model misreads it as a speed limit sign).
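
One classic, widely cited way such perturbations are generated is the Fast Gradient Sign Method (FGSM) of Goodfellow et al., which nudges every input pixel a tiny step in the direction that most increases the model's loss. The sketch below is illustrative rather than a definitive implementation: it assumes a PyTorch image classifier, and the function name `fgsm_attack` and the `epsilon` budget are our own choices.

```python
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon=0.03):
    """Craft an adversarial example via the Fast Gradient Sign Method (FGSM).

    Each pixel of x is shifted by at most +/- epsilon along the sign of the
    loss gradient, so the change is imperceptible per pixel but is aimed
    precisely at making the model misclassify.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step along the gradient sign, then clamp back to the valid pixel range
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Example usage with a tiny stand-in classifier (all names are illustrative):
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(1, 1, 28, 28)      # a toy "image" with pixels in [0, 1]
y = torch.tensor([3])             # its true label
x_adv = fgsm_attack(model, x, y)  # visually near-identical, often misclassified
```

Because no pixel moves by more than `epsilon`, the adversarial image looks identical to the original, yet the gradient-aligned direction is chosen exactly to push the model's prediction away from the true label.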
