Recent Posts
Archives

Posts Tagged ‘ThreatModeling’

PostHeaderIcon [DefCon32] Threat Modeling in the Age of AI

As artificial intelligence (AI) reshapes technology, Adam Shostack, a renowned threat modeling expert, explores its implications for security. Speaking at the AppSec Village, Adam examines how traditional threat modeling adapts to large language models (LLMs), addressing real-world risks like biased hiring algorithms and deepfake misuse. His practical approach demystifies AI security, offering actionable strategies for researchers and developers to mitigate vulnerabilities in an AI-driven world.

Foundations of Threat Modeling

Adam introduces threat modeling’s four-question framework: what are we working on, what can go wrong, what are we going to do about it, and did we do a good job? This structured approach, applicable to any system, helps identify vulnerabilities in LLMs. By creating simplified system models, researchers can map AI components, such as training data and inference pipelines, to pinpoint potential failure points, ensuring a proactive stance against emerging threats.

AI-Specific Security Challenges

Delving into LLMs, Adam highlights unique risks stemming from their design, particularly the mingling of code and data. This architecture complicates secure deployment, as malicious inputs can exploit model behavior. Real-world issues, such as AI-driven resume screening biases or facial recognition errors leading to wrongful arrests, underscore the urgency of robust threat modeling. Adam notes that while LLMs excel at specific mitigation tasks, broad security questions yield poor results, necessitating precise queries.

Leveraging AI for Security Solutions

Adam explores how LLMs can enhance security practices. By generating mitigation code or test cases for specific vulnerabilities, AI can assist developers in fortifying systems. However, he cautions against over-reliance, as generic queries produce unreliable outcomes. His approach involves using AI to streamline threat identification while maintaining human oversight, ensuring that mitigations address tangible risks like data leaks or model poisoning.

Future Directions and Real-World Impact

Concluding, Adam dismisses apocalyptic AI fears but stresses immediate concerns, such as deepfake proliferation and biased decision-making. He advocates integrating threat modeling into AI development to address these issues early. By fostering a collaborative community effort, Adam encourages researchers to refine AI security practices, ensuring that LLMs serve as tools for progress rather than vectors for harm.

Links: