Real-World Threat

Physical Attacks on AI Systems

Physical attacks manipulate the real-world environment to fool AI systems, posing serious threats to autonomous vehicles, surveillance systems, and robotics.

Attack Techniques

Adversarial Patches

Printed stickers or patterns that cause misclassification when placed in camera view
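
A minimal sketch of how such a patch can be optimized, assuming white-box access to a pretrained classifier; the model, patch size, target label, and random stand-in images are illustrative, and a real attack would also print the patch and photograph it under varied physical conditions:

```python
# Hypothetical sketch: optimize a square "sticker" so that images it is pasted onto
# are pushed toward an attacker-chosen target class. Model, patch size, target label,
# and the random-image stand-in for real training data are all illustrative.
import torch
import torch.nn.functional as F
import torchvision.models as models

device = "cuda" if torch.cuda.is_available() else "cpu"
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).to(device).eval()
for p in model.parameters():
    p.requires_grad_(False)  # only the patch is trainable

patch = torch.rand(3, 50, 50, device=device, requires_grad=True)
target = torch.tensor([859], device=device)           # hypothetical target class id
optimizer = torch.optim.Adam([patch], lr=0.05)

def paste(images, patch):
    """Paste the patch at a random location in each image (rotation/scaling omitted)."""
    out = images.clone()
    _, _, h, w = images.shape
    ph, pw = patch.shape[1:]
    for i in range(images.shape[0]):
        y = torch.randint(0, h - ph + 1, (1,)).item()
        x = torch.randint(0, w - pw + 1, (1,)).item()
        out[i, :, y:y + ph, x:x + pw] = patch.clamp(0, 1)
    return out

for step in range(100):
    # Stand-in batch; a real attack would iterate over photographs of the target scene.
    images = torch.rand(8, 3, 224, 224, device=device)
    logits = model(paste(images, patch))
    loss = F.cross_entropy(logits, target.expand(images.shape[0]))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```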

3D Adversarial Objects

Physical objects designed to be misclassified from multiple viewing angles
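
Attacks of this kind are commonly built with an Expectation-over-Transformation style objective: the perturbation is optimized so the misclassification holds on average over a distribution of viewpoints. The sketch below illustrates the idea in 2D, with random differentiable rotations and scalings standing in for 3D rendering; the model, transform ranges, and target label are assumptions:

```python
# Minimal Expectation-over-Transformation style sketch: optimize a perturbation whose
# misclassification survives random rotations and scalings, the same idea used to make
# 3D-printed objects fool classifiers from many viewpoints. The 2D transforms, model,
# and target label here are simplifying assumptions.
import math
import torch
import torch.nn.functional as F
import torchvision.models as models

device = "cuda" if torch.cuda.is_available() else "cpu"
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).to(device).eval()
for p in model.parameters():
    p.requires_grad_(False)

base = torch.rand(1, 3, 224, 224, device=device)      # stand-in for a rendered object
delta = torch.zeros_like(base, requires_grad=True)     # adversarial perturbation
target = torch.tensor([413], device=device)            # hypothetical target label
optimizer = torch.optim.Adam([delta], lr=0.01)

def random_view(x):
    """Apply a random rotation and scale via a differentiable affine warp."""
    angle = (torch.rand(1).item() - 0.5) * math.pi / 3  # roughly +/- 30 degrees
    scale = 0.8 + 0.4 * torch.rand(1).item()
    cos, sin = math.cos(angle) * scale, math.sin(angle) * scale
    theta = torch.tensor([[[cos, -sin, 0.0], [sin, cos, 0.0]]], device=device)
    grid = F.affine_grid(theta, x.shape, align_corners=False)
    return F.grid_sample(x, grid, align_corners=False)

for step in range(200):
    # Average the attack loss over several sampled "viewpoints" per step.
    views = torch.cat([random_view((base + delta).clamp(0, 1)) for _ in range(4)])
    loss = F.cross_entropy(model(views), target.expand(views.shape[0]))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    delta.data.clamp_(-0.1, 0.1)                        # keep the perturbation small
```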

Environmental Manipulation

Modifying lighting, shadows, or backgrounds to fool perception systems
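
Environmental attacks often work without any gradient access. As a loose illustration, the sketch below runs a black-box random search over the position and darkness of a simulated shadow band to drive down the classifier's confidence in the true class; the shadow model, search budget, and stand-in camera frame are assumptions:

```python
# Illustrative black-box sketch: search over the position and darkness of a simulated
# shadow band to degrade a classifier's confidence in the true class. The shadow model
# and search loop are assumptions; real environmental attacks tune physical lighting.
import torch
import torchvision.models as models

device = "cuda" if torch.cuda.is_available() else "cpu"
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).to(device).eval()

image = torch.rand(1, 3, 224, 224, device=device)   # stand-in for a camera frame
with torch.no_grad():
    true_class = int(model(image).argmax())

def cast_shadow(img, x0, width, darkness):
    """Darken a vertical band of the image, a crude stand-in for a real shadow."""
    shadowed = img.clone()
    shadowed[:, :, :, x0:x0 + width] *= (1.0 - darkness)
    return shadowed

best_conf, best_params = 1.0, None
with torch.no_grad():
    for _ in range(500):   # simple random search, no gradient access needed
        x0 = int(torch.randint(0, 200, (1,)))
        width = int(torch.randint(10, 25, (1,)))
        darkness = float(torch.rand(1)) * 0.7
        probs = model(cast_shadow(image, x0, width, darkness)).softmax(dim=1)
        conf = float(probs[0, true_class])
        if conf < best_conf:
            best_conf, best_params = conf, (x0, width, darkness)

print(f"true-class confidence dropped to {best_conf:.3f} with shadow {best_params}")
```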

Target Systems
  • Autonomous Vehicles: Stop sign misclassification, lane detection evasion
  • Surveillance Systems: Face recognition bypass, person detection evasion
  • Access Control: Biometric authentication spoofing
  • Robotics: Object recognition manipulation, navigation disruption

Notable Examples

Stop Sign Attack

Researchers demonstrated that a few small stickers placed on a stop sign could cause the vision models used in autonomous driving to misclassify it as a speed limit sign

Turtle-Rifle Misclassification

A 3D-printed turtle with a texture optimized so that image classifiers label it a rifle from nearly every viewing angle

Adversarial Glasses

Specially designed eyeglass frames that can fool facial recognition systems or enable impersonation

Defense Strategies

Detection Methods

  • Multi-sensor fusion for verification
  • Temporal consistency checking (see the sketch after this list)
  • Anomaly detection in predictions
  • Context-aware validation
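
As one example of the temporal consistency idea above, the following hypothetical monitor flags a video stream whose per-frame predictions flip between classes or whose confidence swings sharply, which is typical when a physical patch is only effective from some angles; the window size and thresholds are illustrative:

```python
# Hypothetical temporal-consistency check: a physical patch usually affects only some
# frames/angles, so a prediction that flips or collapses between consecutive frames is
# treated as suspicious. The deque window size and thresholds are illustrative choices.
from collections import deque, Counter

class TemporalConsistencyMonitor:
    def __init__(self, window: int = 10, min_agreement: float = 0.7):
        self.history = deque(maxlen=window)
        self.min_agreement = min_agreement

    def update(self, predicted_class: int, confidence: float) -> bool:
        """Record the latest frame-level prediction; return True if it looks suspicious."""
        self.history.append((predicted_class, confidence))
        if len(self.history) < self.history.maxlen:
            return False                      # not enough evidence yet
        classes = [c for c, _ in self.history]
        top_class, count = Counter(classes).most_common(1)[0]
        agreement = count / len(classes)
        # Flag when the dominant class keeps changing or confidence swings sharply.
        confidences = [conf for _, conf in self.history]
        unstable_conf = max(confidences) - min(confidences) > 0.5
        return agreement < self.min_agreement or unstable_conf

# Usage: feed per-frame classifier output, e.g. from a traffic-sign recognizer.
monitor = TemporalConsistencyMonitor()
for cls, conf in [(13, 0.95), (13, 0.94), (41, 0.40), (13, 0.92)] * 3:
    if monitor.update(cls, conf):
        print("inconsistent predictions -- possible physical attack")
```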

Robustness Techniques

  • Adversarial training with physical examples
  • Input preprocessing and filtering
  • Ensemble models with diverse architectures (see the combined sketch after this list)
  • Certified defenses for physical perturbations
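
A combined sketch of two of the techniques above: mild input preprocessing plus an ensemble of diverse architectures that must agree before a prediction is accepted. The specific models, resize range, and voting rule are assumptions rather than a canonical defense:

```python
# Illustrative defense sketch: light input preprocessing (random resize and rescale)
# plus a small ensemble of diverse architectures that must agree before a prediction
# is trusted. Model choices and the agreement rule are assumptions.
import torch
import torch.nn.functional as F
import torchvision.models as models

device = "cuda" if torch.cuda.is_available() else "cpu"
ensemble = [
    models.resnet18(weights=models.ResNet18_Weights.DEFAULT),
    models.mobilenet_v3_small(weights=models.MobileNet_V3_Small_Weights.DEFAULT),
    models.efficientnet_b0(weights=models.EfficientNet_B0_Weights.DEFAULT),
]
for m in ensemble:
    m.to(device).eval()

def preprocess(x):
    """Random resize then rescale back; mild transforms can disrupt pixel-exact patches."""
    size = int(torch.randint(200, 225, (1,)))
    x = F.interpolate(x, size=(size, size), mode="bilinear", align_corners=False)
    return F.interpolate(x, size=(224, 224), mode="bilinear", align_corners=False)

@torch.no_grad()
def robust_predict(image, min_votes: int = 2):
    votes = [int(m(preprocess(image)).argmax(dim=1)) for m in ensemble]
    top = max(set(votes), key=votes.count)
    if votes.count(top) >= min_votes:
        return top            # models agree: accept the prediction
    return None               # disagreement: defer to a fallback (human, other sensor)

frame = torch.rand(1, 3, 224, 224, device=device)     # stand-in camera frame
print(robust_predict(frame))
```

Deferring on disagreement trades availability for safety, which is usually acceptable when a fallback sensor or human review is available.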