Posts Tagged ‘ThreatModeling’
[DefCon32] Threat Modeling in the Age of AI
As artificial intelligence (AI) reshapes technology, Adam Shostack, a renowned threat modeling expert, explores its implications for security. Speaking at the AppSec Village, Adam examines how traditional threat modeling adapts to large language models (LLMs), addressing real-world risks like biased hiring algorithms and deepfake misuse. His practical approach demystifies AI security, offering actionable strategies for researchers and developers to mitigate vulnerabilities in an AI-driven world.
Foundations of Threat Modeling
Adam introduces threat modeling’s four-question framework: what are we working on, what can go wrong, what are we going to do about it, and did we do a good job? This structured approach, applicable to any system, helps identify vulnerabilities in LLMs. By creating simplified system models, researchers can map AI components, such as training data and inference pipelines, to pinpoint potential failure points, ensuring a proactive stance against emerging threats.
AI-Specific Security Challenges
Delving into LLMs, Adam highlights unique risks stemming from their design, particularly the mingling of code and data. This architecture complicates secure deployment, as malicious inputs can exploit model behavior. Real-world issues, such as AI-driven resume screening biases or facial recognition errors leading to wrongful arrests, underscore the urgency of robust threat modeling. Adam notes that while LLMs excel at specific mitigation tasks, broad security questions yield poor results, necessitating precise queries.
Leveraging AI for Security Solutions
Adam explores how LLMs can enhance security practices. By generating mitigation code or test cases for specific vulnerabilities, AI can assist developers in fortifying systems. However, he cautions against over-reliance, as generic queries produce unreliable outcomes. His approach involves using AI to streamline threat identification while maintaining human oversight, ensuring that mitigations address tangible risks like data leaks or model poisoning.
Future Directions and Real-World Impact
Concluding, Adam dismisses apocalyptic AI fears but stresses immediate concerns, such as deepfake proliferation and biased decision-making. He advocates integrating threat modeling into AI development to address these issues early. By fostering a collaborative community effort, Adam encourages researchers to refine AI security practices, ensuring that LLMs serve as tools for progress rather than vectors for harm.
Links:
[DotSecurity2017] Secure Software Development Lifecycle
In the forge of functional fortification, where code coalesces into capabilities, embedding security sans sacrificing swiftness stands as the alchemist’s art. Jim Manico, founder of Manicode Security and erstwhile OWASP steward, alchemized this axiom at dotSecurity 2017, furnishing frameworks for fortifying the software development lifecycle (SDLC) from inception to iteration. A Hawaiian hui of secure coding savant, Jim’s odyssey—from Siena’s scrolls to Edgescan’s enterprise—equips his edicts with empirical edge, transforming tedious tenets into tactical triumphs that temper expense through early engagement.
Jim’s jaunt journeyed SDLC’s stations: analysis’s augury (requirements’ rigor, threats’ taxonomy), design’s delineation (architectural audits, data flow diagrams), coding’s crucible (checklists’ chisel, libraries’ ledger), testing’s tribunal (static sentinels, dynamic drills), operations’ observatory (monitoring’s mantle, incident’s inquest). Agile’s alacrity or waterfall’s wash notwithstanding, phases persist—analysis’s abstraction a month or minute, testing’s tenacity from triage to telemetry. Jim jabbed at jargon: process’s pallor palls without practicality—checklists conquer compendiums, triage trumps torrent.
Requirements’ realm reigns: OWASP’s taxonomy as talisman—access’s armature, injection’s inveiglement—blueprints birthing bug bounties. Design’s domain: threat modeling’s mosaic (STRIDE’s strata: spoofing’s specter to tampering’s thorn), data’s diagram (flows fortified, endpoints etched). Coding’s canon: Manicode’s missives—input’s inquisition (sanitization’s sieve), output’s oracle (encoding’s aegis)—libraries’ litany (npm’s audit, Snyk’s scrutiny). Testing’s tier: static’s scalpel (SonarQube’s scan, Coverity’s critique—rules’ rationing for relevance), dynamic’s delve (DAST’s dart, IAST’s insight). Operations’ oversight: logging’s ledger (anomalies’ alert), patching’s patrol (vulnerabilities’ vigil).
Jim’s jeremiad: late lamentations lavish lucre—early excision economizes, triage tempers toil. Static’s sacrament: compilers’ cognizance, rules’ refinement—devops’ deployment, developers’ deliverance from deluge.
SDLC’s Stations and Security’s Scaffold
Jim mapped milestones: analysis’s augury, design’s diagram—coding’s checklist, testing’s tier. Operations’ observatory: monitoring’s mantle, incident’s inquest.
Tenets’ Triumph and Tools’ Temperance
OWASP’s oracle, threat’s taxonomy—static’s scalpel, dynamic’s delve. Jim’s jewel: early’s economy, triage’s temperance—checklists conquer, compendiums crumble.