Posts Tagged ‘Cybersecurity’
[DefCon32] Taming the Beast: Inside Llama 3 Red Team Process
As large language models (LLMs) like Llama 3, trained on 15 trillion tokens, redefine AI capabilities, their risks demand rigorous scrutiny. Alessandro Grattafiori, Ivan Evtimov, and Royi Bitton from Meta’s AI Red Team unveil their methodology for stress-testing Llama 3. Their process, blending human expertise and automation, uncovers emergent risks in complex AI systems, offering insights for securing future models.
Alessandro, Ivan, and Royi explore red teaming’s evolution, adapting traditional security principles to AI. They detail techniques for discovering vulnerabilities, from prompt injections to multi-turn adversarial attacks, and assess Llama 3’s resilience against cyber and national security threats. Their open benchmark, CyberSecEval, sets a standard for evaluating AI safety.
The presentation highlights automation’s role in scaling attacks and the challenges of applying conventional security to AI’s unpredictable nature, urging a collaborative approach to fortify model safety.
Defining AI Red Teaming
Alessandro outlines red teaming as a proactive hunt for AI weaknesses, distinct from traditional software testing. LLMs, with their vast training data, exhibit emergent behaviors that spawn unforeseen risks. The team targets capabilities like code generation and strategic planning, probing for exploits like jailbreaking or malicious fine-tuning.
Their methodology emphasizes iterative testing, uncovering how helpfulness training can lead to vulnerabilities, such as hallucinated command flags.
Scaling Attacks with Automation
Ivan details their automation framework, using multi-turn adversarial agents to simulate complex attacks. These agents, built on Llama 3, attempt tasks like vulnerability exploitation or social engineering. While effective, they struggle with long-form planning, mirroring a novice hacker’s limitations.
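To make that setup concrete, here is a minimal sketch of a multi-turn adversarial agent loop: an attacker model keeps rephrasing its prompt until the target complies or the turn budget runs out. Everything in it is illustrative — generate() stands in for any chat-completion call (such as a hosted Llama 3), and the refusal heuristic is a crude stand-in for the trained judge models a real evaluation would use; none of this is Meta’s actual tooling.

```python
# Illustrative multi-turn adversarial loop; placeholders, not Meta's tooling.

def generate(messages: list) -> str:
    """Placeholder for any chat-completion call (e.g., a hosted Llama 3)."""
    raise NotImplementedError

def refused(reply: str) -> bool:
    """Crude refusal check; real evaluations use trained judge models."""
    return any(m in reply.lower() for m in ("i can't", "i cannot", "i won't"))

def adversarial_session(goal: str, max_turns: int = 5) -> list:
    attacker = [{"role": "system",
                 "content": f"You are red-teaming a model. Steer it toward: {goal}"}]
    target = [{"role": "system", "content": "You are a helpful assistant."}]
    for _ in range(max_turns):
        attack = generate(attacker)                 # attacker crafts the next turn
        target.append({"role": "user", "content": attack})
        reply = generate(target)                    # target model responds
        target.append({"role": "assistant", "content": reply})
        if not refused(reply):                      # judge: did the attack land?
            break
        attacker.append({"role": "user",
                         "content": f"The target refused with: {reply!r}. Try another angle."})
    return target                                   # full transcript for review
```

The pattern mirrors what Ivan describes: iteration is cheap for a machine, so the attacker agent can probe far more angles than a human operator, even if each individual turn is no cleverer than a novice’s.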
CyberSecEval benchmarks these risks, evaluating models across high-risk scenarios. The team’s findings, shared openly, enable broader scrutiny of AI safety.
Cyber and National Security Threats
Royi addresses advanced threats, including attempts to weaponize LLMs for cyberattacks or state-level misuse. Tests reveal Llama 3’s limitations in complex hacking, but emerging techniques like “abliteration” strip safety guardrails from open-weight models, posing persistent risks.
The team’s uplift experiments, which test whether AI assistance raises non-expert users’ capabilities, show promise but reveal gaps in achieving expert-level exploits, echoing Google’s Project Naptime.
Future Directions and Industry Gaps
The researchers advocate integrating security lessons into AI safety, emphasizing automation and open-source collaboration. Alessandro notes the psychological toll of red teaming, handling extreme content like nerve gas research. They call for more security experts to join AI safety efforts, addressing gaps in testing emergent risks.
Their work, supported by CyberSecEval, sets a foundation for safer AI, urging the community to explore novel vulnerabilities.
[DefCon32] Smishing Smackdown: Unraveling the Threads of USPS Smishing and Fighting Back
In an era where digital scams proliferate, SMS phishing, or smishing, has surged, exploiting trust in institutions like the United States Postal Service (USPS). S1nn3r, a red team operator and founder of Phantom Security Group, recounts her journey tackling the “Smishing Triad,” a sophisticated operation distributing scam kits. Motivated by personal encounters with these fraudulent texts, S1nn3r’s investigation uncovers vulnerabilities in the kits, enabling access to their admin panels and exposing over 390,000 stolen credit card details across 900 domains.
S1nn3r’s expertise in web application testing, honed through bug bounties, drives her to reverse-engineer these kits. Collaborating with peers, she identifies two critical flaws, granting entry to administrative interfaces. This access reveals not only victim data but also scammer details like login IPs and passwords. Her findings, shared with banks and the USPS Inspector’s Office, aid in protecting nearly 880,000 victims, highlighting the power of proactive cybersecurity.
The talk illuminates the technical ingenuity behind smishing campaigns and offers strategies to combat them, emphasizing client-side filtering to thwart future attacks.
Anatomy of the Smishing Triad
S1nn3r begins by dissecting the USPS smishing campaign, which spiked during the holiday season. These messages, mimicking USPS alerts, lure users to fraudulent sites via links. The Smishing Triad’s kit, a scalable tool sold to scammers, automates these attacks, capturing credentials and financial data.
Through meticulous analysis, S1nn3r uncovers the kit’s structure, leveraging web vulnerabilities to infiltrate admin panels. This access exposes databases containing victim information, revealing the campaign’s vast reach.
Exploiting Kit Vulnerabilities
The investigation reveals two pivotal weaknesses: insecure authentication and misconfigured APIs. By exploiting these, S1nn3r gains administrative control, extracting data from over 40 panels. This includes scammer metadata, such as IPs and cracked passwords, offering insights into their operations.
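As an illustration of the first flaw class, the sketch below shows the kind of probe that exposes insecure authentication: requesting an admin route with no session and checking whether the kit serves it anyway. The endpoint path is hypothetical — the talk does not publish the kits’ actual routes.

```python
# Generic probe for the "insecure authentication" flaw class.
# Requires the `requests` package; the endpoint path is hypothetical.
import requests

def admin_panel_unauthenticated(base_url: str) -> bool:
    resp = requests.get(f"{base_url}/admin/dashboard",
                        allow_redirects=False, timeout=10)
    # A sound panel answers 302-to-login, 401, or 403 here; a 200 with
    # panel markup means the auth check is client-side or absent entirely.
    return resp.status_code == 200 and "login" not in resp.text.lower()
```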
Her collaboration with a Wired journalist and law enforcement underscores the real-world impact, linking stolen credit cards to specific scams. This evidence strengthens investigations, despite challenges in victim identification.
Countermeasures and Future Defenses
S1nn3r advocates enhanced client-side filtering, suggesting AI-driven solutions to detect suspicious texts. Third-party integrations, like Truecaller, offer practical defenses by flagging non-official USPS links. She cautions against man-in-the-middle attacks on SMS, emphasizing scalable, user-friendly protections.
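A minimal sketch of the client-side filtering S1nn3r advocates, assuming one simple heuristic: flag any USPS-themed text whose links resolve outside official USPS domains. The regex and domain list are illustrative; a production filter would combine many more signals.

```python
# Minimal client-side smishing heuristic: flag SMS messages that mention
# USPS but link somewhere other than the official domains. Illustrative only.
import re
from urllib.parse import urlparse

OFFICIAL_DOMAINS = {"usps.com", "www.usps.com", "tools.usps.com"}
URL_RE = re.compile(r"https?://\S+", re.IGNORECASE)

def looks_like_usps_smish(sms_text: str) -> bool:
    mentions_usps = "usps" in sms_text.lower()
    for url in URL_RE.findall(sms_text):
        host = urlparse(url).hostname or ""
        if mentions_usps and host not in OFFICIAL_DOMAINS:
            return True  # USPS-themed text pointing off-domain: suspicious
    return False

print(looks_like_usps_smish(
    "USPS: your package is on hold, confirm at https://usps-redelivery.example.top"))
# -> True
```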
Her work, shared via open-source tools, invites further research to dismantle smishing ecosystems, urging collective action against evolving scams.
[DefCon32] OH MY DC: Abusing OIDC All the Way to Your Cloud
As organizations migrate from static credentials to dynamic authentication protocols, overlooked intricacies in implementations create fertile ground for exploitation. Aviad Hahami, a security researcher at Palo Alto Networks, demystifies OpenID Connect (OIDC) in the context of continuous integration and deployment (CI/CD) workflows. His examination reveals vulnerabilities stemming from under-configurations and misconfigurations, enabling unauthorized access to cloud environments. By alternating perspectives among users, identity providers, and CI vendors, Aviad illustrates attack vectors that compromise sensitive resources.
Aviad begins with foundational concepts, clarifying OIDC’s role in secure, short-lived token exchanges. In CI/CD scenarios, tools like GitHub Actions request tokens from identity providers (IdPs) such as GitHub’s OIDC provider. These tokens, containing claims like repository names and commit SHAs, are validated by workload identity federations (WIFs) in clouds like AWS or Azure. Proper configuration ensures tokens originate from trusted sources, but lapses invite abuse.
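For concreteness, here is an abridged set of claims from a GitHub Actions OIDC token, as GitHub documents them, alongside a naive sketch of the claim-matching step a cloud-side federation performs. Signature verification against the IdP’s JWKS is deliberately omitted; only the matching logic is shown.

```python
# Representative (abridged) GitHub Actions OIDC token claims, plus a naive
# version of the claim check a workload identity federation performs.
claims = {
    "iss": "https://token.actions.githubusercontent.com",
    "sub": "repo:octo-org/octo-repo:ref:refs/heads/main",
    "repository": "octo-org/octo-repo",
    "ref": "refs/heads/main",
    "aud": "sts.amazonaws.com",
}

def wif_allows(claims: dict, expected_sub: str, expected_aud: str) -> bool:
    # JWKS signature verification omitted; this is only the claim-matching step.
    return (claims["iss"] == "https://token.actions.githubusercontent.com"
            and claims["aud"] == expected_aud
            and claims["sub"] == expected_sub)

print(wif_allows(claims,
                 expected_sub="repo:octo-org/octo-repo:ref:refs/heads/main",
                 expected_aud="sts.amazonaws.com"))  # -> True
```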
Common pitfalls include wildcard allowances in policies, permitting access from unintended repositories. Aviad demonstrates how fork pull requests (PRs) exploit these, granting cloud roles without maintainer approval. Such “no configs” scenarios, where minimal effort yields high rewards, underscore the need for precise claim validations.
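The sketch below shows why a wildcard subject condition is so dangerous, using Python’s fnmatch purely as a stand-in for the cloud side’s pattern matching: an organization-wide wildcard admits a token minted from any repository context, including one an attacker controls, while a pinned subject does not. The patterns are illustrative.

```python
# Wildcard vs. pinned subject conditions; fnmatch stands in for the
# cloud's pattern matching. Patterns are illustrative.
from fnmatch import fnmatchcase

risky_pattern = "repo:octo-org/*"                             # org-wide wildcard
safe_pattern = "repo:octo-org/octo-repo:ref:refs/heads/main"  # pinned claim

# Token minted from a pull-request context in a sibling repository:
attacker_sub = "repo:octo-org/abandoned-repo:pull_request"

print(fnmatchcase(attacker_sub, risky_pattern))  # -> True: cloud role granted
print(fnmatchcase(attacker_sub, safe_pattern))   # -> False: pinned subject blocks it
```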
Advanced Configurations and Misconfigurations
Delving deeper, Aviad explores “advanced configs” that inadvertently become misconfigurations. Features like GitHub’s ID token requests for forks introduce risks if not explicitly enabled. He recounts discovering a vulnerability in CircleCI, where reusable configurations allowed token issuance to forks, bypassing protections.
Shifting to the IdP viewpoint, Aviad discloses a real-world flaw in a popular CI vendor, permitting token claims from any repository within an organization. This enabled cross-project escalations, compromising clouds via simple PRs. Reported responsibly, the issue prompted fixes, emphasizing the cascading effects of IdP errors.
He references Tinder’s research on similar WIF misconfigurations, reinforcing that even sophisticated setups falter without rigorous claim scrutiny.
Exploitation Through CI Vendors
Aviad pivots to CI vendor responsibilities, highlighting how their token issuance logic influences downstream security. In CircleCI’s case, a bug allowed organization-wide token claims, exposing multiple projects. By requesting tokens in fork contexts, attackers could satisfy broad WIF conditions, accessing clouds undetected.
Remediation involved opt-in mechanisms for fork tokens, mirroring GitHub’s approach. Aviad stresses learning claim origins per IdP, avoiding wildcards, and hardening pipelines to prevent trivial breaches.
His tool for auditing Azure CLI configurations exemplifies proactive defense, aiding in identifying exposed resources.
Broader Implications for Secure Authentication
Aviad’s insights extend beyond CI/CD, advocating holistic OIDC understanding to thwart supply chain attacks. By dissecting entity interactions—users, IdPs, and clouds—he equips practitioners to craft resilient policies.
Encouraging bounty hunters to probe these vectors, he underscores OIDC’s maturity yet persistent gaps. Ultimately, robust configurations transform OIDC from vulnerability to asset, safeguarding digital infrastructures.
[DefCon32] Where’s the Money? Defeating ATM Disk Encryption
In an era where automated teller machines safeguard substantial sums, vulnerabilities in their protective mechanisms pose significant threats. Matt Burch, an independent security researcher specializing in IoT and hardware, unveils critical flaws in Diebold Nixdorf’s Vynamic Security Suite (VSS), the dominant solution in the sector. His investigation, conducted alongside a colleague, exposes six zero-day issues enabling full system compromise within minutes, highlighting systemic risks across financial, gaming, and retail domains.
Matt’s journey stems from a fascination with the surge in ATM crimes, up over 600% recently. Targeting enterprise-grade units holding up to $400,000, he dissects VSS’s full disk encryption, revealing offline code injection and decryption paths. Diebold Nixdorf, one of three primary North American manufacturers with global reach, deploys VSS widely, including in Las Vegas casinos.
ATM architecture divides into a fortified vault for currency and a less secure “top hat” housing computing elements. The vault features robust steel, multi-factor locks, and tamper sensors, while the upper section uses thin metal and vulnerable locks, facilitating entry via simple tools.
VSS integrates endpoint protection, whitelisting, and encryption, yet Matt identifies gaps in its pre-boot authentication (PBA). This layered integrity check, which spans multiple boot phases, fails to prevent unauthorized access.
Pre-Boot Authentication Vulnerabilities
VSS’s PBA employs a custom Linux-based “Super Sheep” OS for initial validation. Phase one mounts the Windows partition read-only, computing SHA-256 sums for critical files. Successful checks lead to phase two, decrypting and booting Windows.
Matt exploits unencrypted Linux elements, mounting drives offline to inject code. CVE-2023-33204 allows execution via root’s .profile, bypassing sums by targeting non-checked files. Demonstrations show callback shells post-reboot.
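A toy reconstruction of the phase-one check clarifies the bypass: integrity is only as broad as the manifest, so a file outside it, like root’s .profile, can carry a payload without disturbing any verified hash. Paths and digests here are invented for illustration.

```python
# Toy phase-one check: hash only the files named in a fixed manifest and
# compare against known-good sums. Anything outside the manifest is never
# inspected -- the gap the offline injection exploits. Paths/digests invented.
import hashlib

MANIFEST = {
    "/mnt/windows/Windows/System32/winlogon.exe": "9f2c...",  # truncated
    "/mnt/windows/vss/integrity_agent.dll": "41ab...",        # truncated
}

def sha256_of(path: str) -> str:
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def phase_one_ok() -> bool:
    return all(sha256_of(p) == digest for p, digest in MANIFEST.items())

# An attacker who mounts the unencrypted partition offline appends a
# payload to /root/.profile; phase_one_ok() still reports success.
```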
Service releases introduce patches, yet recursive flaws emerge. CVE-2023-33205 leverages credential stuffing on weak defaults, enabling admin escalation and command injection.
Recursive Flaws and Persistence
Patches inadvertently create new vectors. Service Release 15 removes directories but overlooks symlinks, leading to CVE-2023-33206. Attackers create traversal links, injecting payloads into root’s directory for execution.
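The symlink gap is easy to reproduce in miniature: a naive prefix check on a path says nothing about where the path actually resolves once a link is planted. Resolving with os.path.realpath before validating, as in this sketch with invented paths, is the standard fix.

```python
# Why overlooked symlinks reopen a patched hole: a path that appears to
# live under a safe staging area may resolve into /root once a link is
# planted. Validate the *resolved* path. Paths are illustrative.
import os

def is_under(child: str, parent: str) -> bool:
    child = os.path.realpath(child)      # follows symlinks
    parent = os.path.realpath(parent)
    return os.path.commonpath([child, parent]) == parent

# Attacker plants: ln -s /root /staging/updates
payload_path = "/staging/updates/.profile"

print(payload_path.startswith("/staging/"))   # -> True: naive check is fooled
# print(is_under(payload_path, "/staging"))   # -> False once the link exists
```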
Further updates validate symlinks, yet CVE-2023-40261 strips execute permissions from integrity modules, disabling VSS entirely. This permits arbitrary unsigned binaries, affecting all versions.
Impacts span VSS iterations, with vulnerabilities identified from April to July 2023. Diebold’s responses include end-of-life for older variants and redesigns.
Mitigation and Broader Implications
Defending requires patching to latest releases, enabling security checks to withhold TPM keys, and monitoring physical access. Disabling unused ports and securing drives reduce risks. Alternatives like BitLocker offer robust encryption.
Matt’s findings influence Diebold’s Version 4.4, introducing full Super Sheep encryption. Yet, persistent unencrypted elements suggest ongoing challenges, with likely undiscovered exploits.
This research underscores the need for comprehensive security in high-value systems, where physical and digital safeguards must align.
[DefCon32] Welcome to DEF CON 32
Amid the vibrant energy of a gathering that has evolved over decades, Jeff Moss, founder of DEF CON, extends a heartfelt invitation to participants, emphasizing the essence of community and shared discovery. Drawing from his experiences since initiating the event 32 years ago, Jeff reflects on its growth from a modest assembly to a sprawling nexus of innovation. His remarks serve as an orientation, guiding attendees through the philosophy that underpins the conference, while encouraging them to forge their own paths in a landscape brimming with possibilities.
Jeff underscores the principle that the event’s value lies in individual contributions, acknowledging the impossibility of experiencing every facet. Early iterations allowed him to witness all activities, yet as attendance swelled, he embraced the reality of missing moments, transforming it into motivation for expanding offerings. This mindset fosters an environment where participants can prioritize personal interests, whether technical pursuits or interpersonal connections.
The structure facilitates meaningful interactions by segmenting the crowd into affinity clusters, such as those focused on automotive exploits or physical barriers. Such divisions enable deeper engagements, turning vast numbers into intimate collaborations. Jeff highlights the encouragement of inquiry, recognizing the specialization driven by technological complexity, which renders no single expert all-knowing.
Origins and Inclusivity
Tracing the roots, Jeff recounts how exclusion from invite-only gatherings inspired an open-door policy, rejecting seasonal naming to avoid constraints. This decision marked a pivotal divergence, prioritizing accessibility over restriction. Growth necessitated strategies to manage scale without diluting intimacy, leading to diverse tracks and villages that cater to niche passions.
The ethos promotes authenticity, allowing attendees to express themselves freely while respecting boundaries. Jeff shares anecdotes illustrating the blend of serendipity and intent that defines encounters, urging newcomers to engage without hesitation.
Global Perspectives and Accountability
Jeff broadens the view to international contexts, noting how varying educational systems influence entry into the field. In some regions, extended periods of exploration nurture creativity, contrasting with structured paths elsewhere. He celebrates the cultural embrace of setbacks as stepping stones, aligning with narratives of resilience.
To ensure trust, a code of conduct governs interactions, applicable universally. Enforcement through transparency reports holds organizers accountable, publicly detailing infractions to validate community standards. This mechanism reinforces integrity, even when confronting uncomfortable truths.
Jeff transitions to highlighting speakers like General Nakasone, whose insights demystify complex topics. Originating from efforts to verify online claims, these sessions connect attendees with authoritative voices, bridging gaps in understanding.
In closing, Jeff invites immersion, promising encounters that enrich beyond expectations.
[DefCon32] Counter Deception: Defending Yourself in a World Full of Lies
The digital age promised universal access to knowledge, yet it has evolved into a vast apparatus for misinformation. Tom Cross and Greg Conti examine this paradox, tracing deception’s roots from ancient stratagems to modern cyber threats. Drawing on military doctrines and infosec experiences, they articulate principles for crafting illusions and, crucially, for dismantling them. Their discourse empowers individuals to navigate an ecosystem where truth is obscured, fostering tools and mindsets to reclaim clarity.
Deception, at its essence, conceals reality to gain advantage, influencing decisions or inaction. Historical precedents abound: the Trojan Horse’s cunning infiltration, Civil War Quaker guns mimicking artillery, or the Persian Gulf War’s feigned amphibious assault diverting attention from a land offensive. In contemporary conflicts, like Russia’s Ukraine invasion, fabricated narratives such as the “Ghost of Kyiv” bolster morale while masking intentions. These tactics transcend eras, targeting not only laypersons but experts, code, and emerging AI systems.
In cybersecurity, falsehoods manifest at every layer: spoofed signals in the electromagnetic spectrum, false flags in malware attribution, or fabricated personas for network access and influence propagation. Humans fall prey through phishing, typo-squatting, or mimicry, while specialists encounter deceptive metadata or rotating infrastructures. Malware detection evades scrutiny via polymorphism or fileless techniques, and AI succumbs to data poisoning or jailbreaks. Strategically, deception scales from tactical engagements to national objectives, concealing capabilities or projecting alternatives.
Maxims of Effective Deception
Military thinkers have distilled deception into enduring guidelines. Sun Tzu advocated knowing adversaries intimately while veiling one’s own plans, emphasizing preparation and adaptability. Von Clausewitz viewed war—and by extension, conflict—as enveloped in uncertainty, where illusions amplify fog. Modern doctrines, like those from the U.S. Joint Chiefs, outline six tenets: focus on key decision-makers, integration with operations, centralized control for consistency, timeliness to exploit windows, security to prevent leaks, and adaptability to evolving conditions.
These principles manifest in cyber realms. Attackers exploit cognitive biases—confirmation, anchoring, availability—embedding falsehoods in blind spots. Narratives craft compelling stories, leveraging emotions like fear or outrage to propagate. Coordination ensures unified messaging across channels, while adaptability counters defenses. In practice, state actors deploy bot networks for amplification, or cybercriminals use deepfakes for social engineering. Understanding these offensive strategies illuminates defensive countermeasures.
Inverting Principles for Countermeasures
Flipping offensive maxims yields defensive strategies. To counter focus, broaden information sources, triangulating across diverse perspectives to mitigate echo chambers. Against integration, scrutinize contexts: does a claim align with broader evidence? For centralized control, identify coordination patterns—sudden surges in similar messaging signal orchestration.
Timeliness demands vigilance during critical periods, like elections, where rushed judgments invite errors. Security’s inverse promotes transparency, fostering open verification. Adaptability encourages continuous learning, refining discernment amid shifting tactics.
Practically, countering biases involves self-awareness: question assumptions, seek disconfirming evidence. Triangulation cross-references claims against reliable outlets, fact-checkers, or archives. Detecting narratives entails pattern recognition—recurring themes, emotional triggers, or inconsistencies. Tools like reverse image searches or metadata analyzers expose fabrications.
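As one concrete example of such tooling, the snippet below dumps an image’s EXIF metadata with Pillow so provenance claims (camera model, capture time, editing software) can be checked against the story attached to the picture. The filename is illustrative.

```python
# Metadata analyzer in miniature: dump EXIF tags from an image to check
# provenance claims. Requires Pillow (`pip install Pillow`).
from PIL import Image
from PIL.ExifTags import TAGS

def dump_exif(path: str) -> dict:
    exif = Image.open(path).getexif()
    return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

for tag, value in dump_exif("viral_photo.jpg").items():
    print(f"{tag}: {value}")   # e.g. Model, DateTime, Software
```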
Applying Counter Deception in Digital Ecosystems
The internet’s structure amplifies deceit, yet hackers’ ingenuity can reclaim agency. Social media, often ego-centric, distorts realities through algorithmic funhouse mirrors. Curating expert networks—via follows, endorsements—filters noise, prioritizing credible voices. Protocols for machine-readable endorsements, akin to LinkedIn but open, enable querying endorsed specialists on topics, surfacing informed commentary.
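A toy model of that endorsement idea, sketched under the assumption of a simple “A endorses B on topic T” triple store, shows how a client could surface specialists vouched for by one’s own trusted follows. The data model is speculative; it illustrates the imagined open protocol, not any existing API.

```python
# Toy endorsement store: directed "A endorses B on topic T" triples,
# queried for specialists one's trusted follows vouch for. Speculative.
from collections import defaultdict

endorsements = defaultdict(set)          # (endorser, topic) -> {endorsed}

def endorse(endorser: str, endorsed: str, topic: str) -> None:
    endorsements[(endorser, topic)].add(endorsed)

def endorsed_experts(my_follows: set, topic: str) -> set:
    experts = set()
    for follow in my_follows:            # one hop through the trust graph
        experts |= endorsements[(follow, topic)]
    return experts

endorse("alice", "bob", "malware-analysis")
endorse("carol", "bob", "malware-analysis")
endorse("alice", "dave", "cryptography")
print(endorsed_experts({"alice", "carol"}, "malware-analysis"))  # {'bob'}
```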
Innovative protocols like backlinks—envisioned by pioneers such as Vannevar Bush, Douglas Engelbart, and Ted Nelson—remain underexplored. These allow viewing inbound references, revealing critiques or extensions. Projects like Xanadu or Hyperscope hint at potentials: annotating documents with trusted overlays, highlighting recent edits for scrutiny. Content moderation challenges stymied widespread adoption, but coupling with decentralized systems like Mastodon offers paths forward.
Large language models (LLMs) present dual edges: prone to hallucinations, yet adept at structuring unstructured data. Dispassionate analysis could unearth omitted facts from narratives, or map expertise by parsing academic sites to link profiles. Defensive tools might flag biases or inconsistencies, augmenting human judgment per Engelbart’s augmentation ethos.
Scaling countermeasures involves education: embedding media literacy in curricula, emphasizing critical inquiry. Resources like Media Literacy Now provide K-12 frameworks, while guides like “48 Critical Thinking Questions” prompt probing—who benefits, where’s the origin? Hackers, adept at discerning falsehoods, can prototype tools—feed analyzers, narrative detectors—leveraging open protocols for innovation.
Ultimately, countering deception demands vigilance and creativity. By inverting offensive doctrines, individuals fortify perceptions, transforming the internet from a misinformation conduit into a truth-seeking engine.
[DevoxxUK2024] Devoxx UK Introduces: Aspiring Speakers 2024, Short Talks
The Aspiring Speakers 2024 session at DevoxxUK2024, organized in collaboration with the London Java Community, showcased five emerging talents sharing fresh perspectives on technology and leadership. Rajani Rao explores serverless architectures, Yemurai Rabvukwa bridges chemistry and cybersecurity, Farhath Razzaque delves into AI-driven productivity, Manogna Machiraju tackles imposter syndrome in leadership, and Leena Mooneeram offers strategies for platform team synergy. Each 10-minute talk delivers actionable insights, reflecting the diversity and innovation within the tech community. This session highlights the power of new voices in shaping the future of software development.
Serverless Revolution with Rajani Rao
Rajani Rao, a principal technologist at Aviva and founder of the Women Coding Community, presents a compelling case for serverless computing. Using a restaurant analogy—contrasting home cooking (traditional computing) with dining out (serverless)—Rajani illustrates how serverless eliminates infrastructure management, enhances scalability, and optimizes costs. She shares a real-world example of porting a REST API from Windows EC2 instances to AWS Lambda, handling 6 billion monthly requests. This shift, completed in a day, resolved issues like CPU overload and patching failures, freeing the team from maintenance burdens. The result was not only operational efficiency but also a monetized service, boosting revenue and team morale. Rajani advocates starting small with serverless to unlock creativity and improve developer well-being.
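For readers unfamiliar with the target of such a port, a minimal Lambda handler looks like the sketch below: a single function invoked per request, with no server to patch between invocations. The route and payload assume API Gateway’s proxy integration and are illustrative, not Rajani’s actual service.

```python
# Minimal AWS Lambda handler behind API Gateway's proxy integration:
# Lambda replaces the always-on EC2 web server, so there is no host to
# patch and scaling is per-request. Route and payload are illustrative.
import json

def lambda_handler(event, context):
    item_id = (event.get("pathParameters") or {}).get("id", "unknown")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"id": item_id, "status": "ok"}),
    }
```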
Chemistry Meets Cybersecurity with Yemurai Rabvukwa
Yemurai Rabvukwa, a cybersecurity engineer and TikTok content creator under STEM Bab, draws parallels between chemistry and cybersecurity. Her squiggly career path—from studying chemistry in China to pivoting to tech during a COVID-disrupted study abroad—highlights transferable skills like analytical thinking and problem-solving. Yemurai identifies three intersections: pharmaceuticals, healthcare, and energy. In pharmaceuticals, both fields use a prevent-detect-respond framework to safeguard systems and ensure quality. The 2017 WannaCry attack on the NHS underscores the need for a multidisciplinary approach in healthcare, one that brings stakeholders together to restore services. In energy, geopolitical risks and ransomware target renewable sectors, emphasizing cybersecurity’s critical role. Yemurai’s journey inspires leveraging diverse backgrounds to tackle complex tech challenges.
AI-Powered Productivity with Farhath Razzaque
Farhath Razzaque, a freelance full-stack engineer and AI enthusiast, explores how generative AI can transform developer productivity. Quoting DeepMind’s Demis Hassabis, Farhath emphasizes AI’s potential to accelerate innovation. He outlines five levels of AI adoption: zero-shot prompting for quick error resolution, AI apps like Cursor IDE for streamlined coding, prompt engineering for precise outputs, agentic workflows for collaborative AI agents, and custom solutions using frameworks like LangChain. Farhath highlights open-source tools like NoAI Browser and MakeReal, which rival commercial offerings at lower costs. By automating repetitive tasks and leveraging domain expertise, developers can achieve 10x productivity gains, preparing for an AI-driven future.
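The first rung of that ladder, zero-shot prompting, fits in a few lines. This sketch assumes the OpenAI Python SDK purely as one concrete client; any chat-completion API, including locally hosted models, follows the same shape, and the model name is illustrative.

```python
# Zero-shot prompting for quick error resolution: paste the stack trace,
# ask for a diagnosis. OpenAI SDK shown as one concrete client (assumed).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

stack_trace = "TypeError: 'NoneType' object is not subscriptable at app.py:42"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{"role": "user",
               "content": f"Explain this error and suggest a fix:\n{stack_trace}"}],
)
print(response.choices[0].message.content)
```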
Overcoming Imposter Syndrome with Manogna Machiraju
Manogna Machiraju, head of engineering at Domestic & General, shares a candid exploration of imposter syndrome in leadership roles. Drawing from her 2017 promotion to engineering manager, Manogna recounts overworking to prove her worth, only to face project failure and team burnout. This prompted reflection on her role’s expectations, realizing she wasn’t meant to code but to enable her team. She advocates building clarity before acting, appreciating team efforts, and embracing tolerable imperfection. Manogna also addresses the challenge of not being the expert in senior roles, encouraging curiosity and authenticity over faking expertise. Her principle—leaning into discomfort with determination—offers a roadmap for navigating leadership doubts.
Platform Happiness with Leena Mooneeram
Leena Mooneeram, a platform engineer at Chainalysis, presents a developer’s guide to platform happiness, emphasizing mutual engagement between engineers and platform teams. Viewing platforms as products, Leena suggests three actions: be an early adopter to shape tools and build relationships, contribute by fixing documentation or small bugs, and question considerately with context and urgency details. These steps enhance platform robustness and reduce friction. For instance, early adopters provide critical feedback, while contributions like PRs for typos streamline workflows. Leena’s mutual engagement model fosters collaboration, ensuring platforms empower engineers to build software joyfully and efficiently.