Recent Posts
Archives

PostHeaderIcon [DefCon32] Windows Downdate: Downgrade Attacks Using Windows Updates

The notion of a “fully patched” system crumbles under the weight of downgrade attacks, as revealed by Alon Leviev, a self-taught security researcher at SafeBreach. His exploration of Windows Updates uncovers a flaw allowing attackers to revert critical components—DLLs, drivers, kernels, and virtualization stacks—to vulnerable versions, bypassing verification and exposing privilege escalations. Alon’s tool, Windows Downdate, renders the term “updated” obsolete, compromising systems worldwide.

Alon, a former Brazilian Jiu-Jitsu champion, leverages his expertise in OS internals and reverse engineering to dissect Windows Update mechanisms. Inspired by the BlackLotus UEFI bootkit, which bypassed Secure Boot via downgrades, he investigates whether similar vulnerabilities plague other components. His findings reveal a systemic design flaw, enabling unprivileged attackers to manipulate updates and disable protections like Virtualization-Based Security (VBS).

The implications are profound: downgraded systems report as fully updated, evade recovery tools, and block future patches, leaving them exposed to thousands of known vulnerabilities.

BlackLotus and the Downgrade Threat

Alon traces the research to BlackLotus, which exploited a patched Secure Boot flaw by reverting components. Secure Boot verifies boot chain signatures, but BlackLotus’s downgrade bypassed this, prompting Alon to probe Windows Updates for similar weaknesses.

He discovers that update packages, lacking robust validation, allow crafted downgrades. By manipulating update manifests, attackers revert critical files, exploiting old vulnerabilities without triggering alerts.

Compromising the Virtualization Stack

Targeting Hyper-V, Secure Kernel, and Credential Guard, Alon achieves downgrades that expose privilege escalations. VBS, designed to isolate sensitive operations, relies on UEFI locks, yet his methods disable these protections, a first in known research.

The attack exploits design flaws allowing less privileged rings to update higher ones, a remnant since VBS’s 2015 debut. Demonstrations show downgraded hypervisors, undermining Windows’ security architecture.

Restoration Vulnerabilities

A secondary flaw in update restoration scenarios amplifies the threat. Unprivileged users can trigger rollbacks, embedding malicious updates that persist across reboots. Recovery tools fail to detect these, as the system registers as compliant.

Alon’s Windows Downdate tool automates this, crafting updates that downgrade entire systems, from drivers to kernels, without administrative rights.

Industry Implications and Mitigations

The research exposes a gap in downgrade attack awareness. Alon urges thorough design reviews, emphasizing that unexamined surfaces, like update mechanisms, harbor risks. Linux and macOS may face similar threats, necessitating preemptive scrutiny.

Mitigations include enhanced validation, privilege restrictions, and monitoring for anomalous updates. His findings, shared responsibly with Microsoft, highlight the need for systemic changes to restore trust in patching.

Links:

PostHeaderIcon [DevoxxBE2024] Java Language Futures by Gavin Bierman

Gavin Bierman, from Oracle’s Java Platform Group, captivated attendees at Devoxx Belgium 2024 with a forward-looking talk on Java’s evolution under Project Amber. Focusing on productivity-oriented language features, Gavin outlined recent additions like records, sealed classes, and pattern matching, while previewing upcoming enhancements like simplified main methods and flexible constructor bodies. His session illuminated Java’s design philosophy—prioritizing readability, explicit programmer intent, and compatibility—while showcasing how these features enable modern, data-oriented programming paradigms suited for today’s microservices architectures.

Project Amber’s Mission: Productivity and Intent

Gavin introduced Project Amber as a vehicle for delivering smaller, productivity-focused Java features, leveraging the six-month JDK release cadence to preview and finalize enhancements. Unlike superficial syntax changes, Amber emphasizes exposing programmer intent to improve code readability and reduce bugs. Compatibility is paramount, with breaking changes minimized, as Java evolves to address modern challenges distinct from its 1995 origins. Gavin highlighted how features like records and sealed classes make intent explicit, enabling the compiler to enforce constraints and provide better error checking, aligning with the needs of contemporary applications.

Records: Simplifying Data Carriers

Records, introduced to streamline data carrier classes, were a key focus. Gavin demonstrated how a Point class with two integers requires verbose boilerplate (constructors, getters, equals, hashCode) that obscures intent. Records (record Point(int x, int y)) eliminate this by auto-generating a canonical constructor, accessor methods, and value-based equality, ensuring immutability and transparency. This explicitness allows the compiler to enforce a contract: constructing a record from its components yields an equal instance. Records also support deserialization via the canonical constructor, ensuring domain-specific constraints, making them safer than traditional classes.

Sealed Classes and Pattern Matching

Sealed classes, shipped in JDK 17, allow developers to restrict class hierarchies explicitly. Gavin showed a Shape interface sealed to permit only Circle and Rectangle implementations, preventing unintended subclasses at compile or runtime. This clarity enhances library design by defining precise interfaces. Pattern matching, enhanced in JDK 21, further refines this by enabling type patterns and record patterns in instanceof and switch statements. For example, a switch over a sealed Shape interface requires exhaustive cases, eliminating default clauses and reducing errors. Nested record patterns allow sophisticated data queries, handling nulls safely without exceptions.

Data-Oriented Programming with Amber Features

Gavin illustrated how records, sealed classes, and pattern matching combine to support data-oriented programming, ideal for microservices exchanging pure data. He reimagined the Future class’s get method, traditionally complex due to multiple control paths (success, failure, timeout, interruption). By modeling the return type as a sealed AsyncReturn interface with four record implementations (Success, Failure, Timeout, Interrupted), and using pattern matching in a switch, developers handle all cases uniformly. This approach simplifies control flow, ensures exhaustiveness, and leverages Java’s type safety, contrasting with error-prone exception handling in traditional designs.

Future Features: Simplifying Java for All

Looking ahead, Gavin previewed features in JDK 23 and beyond. Simplified main methods allow beginners to write void main() without boilerplate, reducing cognitive load while maintaining full Java compatibility. The with expression for records enables concise updates (e.g., doubling a component) without redundant constructor calls, preserving domain constraints. Flexible constructor bodies (JEP 482) relax top-down initialization, allowing pre-super call logic to validate inputs, addressing issues like premature field access in subclass constructors. Upcoming enhancements include patterns for arbitrary classes, safe template programming, and array pattern matching, promising further productivity gains.

Links:

PostHeaderIcon [DefCon32] Your AI Assistant Has a Big Mouth: A New Side-Channel Attack

As AI assistants like ChatGPT reshape human-technology interactions, their security gaps pose alarming risks. Yisroel Mirsky, a Zuckerman Faculty Scholar at Ben-Gurion University, alongside graduate students Daniel Eisenstein and Roy Weiss, unveils a novel side-channel attack exploiting token length in encrypted AI responses. Their research exposes vulnerabilities in major platforms, including OpenAI, Microsoft, and Cloudflare, threatening the confidentiality of personal and sensitive communications.

Yisroel’s Offensive AI Research Lab focuses on adversarial techniques, and this discovery highlights how subtle data leaks can undermine encryption. By analyzing network traffic, they intercept encrypted responses, reconstructing conversations from medical queries to document edits. Their findings, disclosed responsibly, prompted swift vendor patches, underscoring the urgency of securing AI integrations.

The attack leverages predictable token lengths in JSON responses, allowing adversaries to infer content despite encryption. Demonstrations reveal real-world impacts, from exposing personal advice to compromising corporate data, urging a reevaluation of AI security practices.

Understanding the Side-Channel Vulnerability

Yisroel explains the attack’s mechanics: AI assistants transmit responses as JSON objects, with token lengths correlating to content size. By sniffing HTTPS traffic, attackers deduce these lengths, mapping them to probable outputs. For instance, a query about a medical rash yields distinct packet sizes, enabling reconstruction.

Vulnerable vendors, unaware of this flaw until February 2025, included OpenAI and Quora. The team’s tool, GPTQ Logger, automates traffic analysis, highlighting the ease of exploitation in unpatched systems.

Vendor Responses and Mitigations

Post-disclosure, vendors acted decisively. OpenAI implemented padding to the nearest 32-byte value, obscuring token lengths. Cloudflare adopted random padding, further disrupting patterns. By March 2025, patches neutralized the threat, with five vendors offering bug bounties.

Yisroel emphasizes simple defenses: random padding, fixed-size packets, or increased buffering. These measures, easily implemented, prevent length-based inference, safeguarding user privacy.

Implications for AI Security

The discovery underscores a broader issue: AI services, despite their sophistication, inherit historical encryption pitfalls. Yisroel draws parallels to past side-channel attacks, where minor details like timing betrayed secrets. AI’s integration into sensitive domains demands rigorous security, akin to traditional software.

The work encourages offensive research to uncover similar weaknesses, advocating AI’s dual role in identifying and mitigating vulnerabilities. As new services emerge, proactive design is critical to prevent data exposure.

Broader Call to Action

Yisroel’s team urges the community to explore additional side channels, from compression ratios to processing delays. Their open-source tools invite further scrutiny, fostering a collaborative defense against evolving threats.

This research redefines AI assistant security, emphasizing meticulous data handling to protect user trust.

Links:

PostHeaderIcon [DevoxxBE2024] Performance-Oriented Spring Data JPA & Hibernate by Maciej Walkowiak

At Devoxx Belgium 2024, Maciej Walkowiak delivered a compelling session on optimizing Spring Data JPA and Hibernate for performance, a critical topic given Hibernate’s ubiquity and polarizing reputation in Java development. With a focus on practical solutions, Maciej shared insights from his extensive consulting experience, addressing common performance pitfalls such as poor connection management, excessive queries, and the notorious N+1 problem. Through live demos and code puzzles, he demonstrated how to configure Hibernate and Spring Data JPA effectively, ensuring applications remain responsive and scalable. His talk emphasized proactive performance tuning during development to avoid production bottlenecks.

Why Applications Slow Down

Maciej opened by debunking myths about why applications lag, dismissing outdated notions that Java or databases are inherently slow. Instead, he pinpointed the root cause: misuse of technologies like Hibernate. Common issues include poor database connection management, which can halt applications, and issuing excessive or slow queries due to improper JPA mappings or over-fetching data. Maciej stressed the importance of monitoring tools like DataDog APM, which revealed thousands of queries in a single HTTP request in one of his projects, taking over 7 seconds. He urged developers to avoid guessing and use tracing tools or SQL logging to identify issues early, ideally during testing with tools like Digma’s IntelliJ plugin.

Optimizing Database Connection Management

Effective connection management is crucial for performance. Maciej explained that establishing database connections is costly due to network latency and authentication overhead, especially in PostgreSQL, where each connection spawns a new OS process. Connection pools, standardized in Spring Boot, mitigate this by creating a fixed number of connections (default: 10) at startup. However, developers must ensure connections are released promptly to avoid exhaustion. Using FlexyPool and Spring Boot Data Source Decorator, Maciej demonstrated logging connection acquisition and release times. In one demo, a transactional method unnecessarily held a connection for 273 milliseconds due to an external HTTP call within the transaction. Disabling spring.jpa.open-in-view reduced this to 61 milliseconds, freeing the connection after the transaction completed.

Transaction Management for Efficiency

Maciej highlighted the pitfalls of default transaction settings and nested transactions. By default, Spring Boot’s auto-commit mode triggers commits after each database interaction, but disabling it (spring.datasource.auto-commit=false) delays connection acquisition until the first database interaction, reducing connection hold times. For complex workflows, he showcased the TransactionTemplate for programmatic transaction management, allowing developers to define transaction boundaries within a method without creating artificial service layers. This approach avoids issues with @Transactional(propagation = Propagation.REQUIRES_NEW), which can occupy multiple connections unnecessarily, as seen in a demo where nested transactions doubled connection usage, risking pool exhaustion.

Solving the N+1 Problem and Over-Fetching

The N+1 query problem, a common Hibernate performance killer, occurs when lazy-loaded relationships trigger additional queries per entity. In a banking application demo, Maciej showed a use case where fetching bank transfers by sender ID resulted in multiple queries due to eager fetching of related accounts. By switching @ManyToOne mappings to FetchType.LAZY and using explicit JOIN FETCH in custom JPQL queries, he reduced queries to a single, efficient one. Additionally, he addressed over-fetching by using getReferenceById() instead of findById(), avoiding unnecessary queries when only entity references are needed, and introduced the @DynamicUpdate annotation to update only changed fields, optimizing updates for large tables.

Projections and Tools for Long-Term Performance

For read-heavy operations, Maciej advocated using projections to fetch only necessary data, avoiding the overhead of full entity loading. Spring Data JPA supports projections via records or interfaces, automatically generating queries based on method names or custom JPQL. Dynamic projections further simplify repositories by allowing runtime specification of return types. To maintain performance, he recommended tools like Hypersistence Optimizer (a commercial tool by Vlad Mihalcea) and QuickPerf (an open-source library, though unmaintained) to enforce query expectations in tests. These tools help prevent regressions, ensuring optimizations persist despite team changes or project evolution.

Links:

PostHeaderIcon [Scala IO Paris 2024] Calculating Is Funnier Than Guessing

In the ScalaIO Paris 2024 session “Calculating is funnier than guessing”, Regis Kuckaertz, a French developer living in an English-speaking country captivated the audience with a methodical approach to writing compilers for domain-specific languages (DSLs) in Scala. The talk debunked the mystique of compiler construction, emphasizing a principled, calculation-based process over ad-hoc guesswork. Using equational reasoning and structural induction, the speaker derived a compiler and stack machine for a simple boolean expression language, Expr, and extended the approach to the more complex ZPure datatype from the ZIO Prelude library. The result was a correct-by-construction compiler, offering performance gains over interpreters while remaining accessible to functional programmers.

Laying the Foundation with Equational Reasoning

The talk began by highlighting the limitations of interpreters for DSLs, which, while easy to write via structural induction, incur runtime overhead. The speaker argued that functional programming’s strength lies in embedding DSLs, citing examples like Cats Effect, ZIO, and Kulo for metrics. To achieve “abstraction without remorse,” DSLs must be compiled into efficient machine code. The proposed method, inspired by historical work on calculating compilers, avoids pre-made recipes, instead using a single-step derivation process combining evaluation, continuation-passing style (CPS), and defunctionalization.

For the Expr language, comprising boolean constants, negation, and conjunction, the speaker defined a denotational semantics with an evaluator function. This function maps expressions to boolean values, e.g., evaluating And(Not(B(true)), B(false)) to a boolean result. The evaluator was refined to make implicit behaviors explicit, such as Scala’s left-to-right evaluation of &&, ensuring the specification aligns with developer expectations. This step underscored the importance of intimate familiarity with execution details, uncovered through the derivation process.

Deriving a Compiler for Expr

The core of the talk was deriving a compiler and stack machine for Expr using equational reasoning. The correctness specification required that compiling an expression and executing it on a stack yields the same result as evaluating the expression and pushing it onto the stack. The compiler was defined with a helper function using symbolic CPS, taking a continuation to guide code generation. For each constructor—B (boolean), Not, and And—the speaker applied the specification, reducing expressions step-by-step.

For B, a Push instruction was introduced to place a boolean on the stack. For Not, a Neg instruction negated the top stack value, with the subexpression compiled inductively. For And, the derivation distributed stack operations over conditional branches, introducing an If instruction to select continuations based on a boolean. The final Compile function used a Halt continuation to stop execution. The resulting machine language and stack machine, implemented as an imperative tail-recursive loop, fit on a single slide, achieving orders-of-magnitude performance improvements over the interpreter.

Tackling Complexity with ZPure

To demonstrate real-world applicability, the speaker applied the technique to ZPure, a datatype from ZIO Prelude for pure computations with state, logging, and error handling. The language includes constructors for pure values, failures, error handling, state management, logging, and flat mapping. The evaluator threads state and logs, succeeding or failing based on the computation. The compiler derivation followed the same process, introducing instructions like PushThrowLoadStoreLogMarkUnmark, and Call to handle values, errors, state, and continuations.

The derivation for ZPure required careful handling of failures, where a Throw instruction invokes a failure routine that unwinds the stack until it finds a handler or crashes. For Catch and FlatMap, the speaker applied the induction hypothesis, introducing stack markers to manage handlers and continuations. Despite Scala functions in ZPure requiring runtime compilation, the speaker proposed defunctionalization—using data types like Flow or lambda calculus encodings—to eliminate this, though this was left as future work. The resulting compiler and machine, again fitting on a slide, were correct by construction, with unreachable cases confidently excluded.

Reflections and Future Directions

The talk emphasized that calculating compilers is a mechanical, repeatable process, not a mysterious art. By deriving machine instructions through equational reasoning, developers ensure correctness without extensive unit testing. The speaker noted a limitation in ZPure: its evaluator and compiler allow non-terminating expressions, which a partial monad could address. Future work includes defunctionalizing ZPure to avoid runtime compilation and optimizing machine code into directed acyclic graphs to reduce duplication.

The speaker recommended resources like Philip Wadler’s papers on calculating compilers, encouraging functional programmers to explore this approachable technique. The talk, blending humor with rigor, demonstrated that compiling DSLs is not only feasible but also “funnier” than guessing, offering a path to efficient, correct code.

Hashtags: #Scala #CompilerDesign #EquationalReasoning #ZPure #ScalaIOParis2024 #FunctionalProgramming

PostHeaderIcon [DefCon32] 1 for All, All for WHAD: Wireless Shenanigans Made Easy

In the ever-evolving landscape of wireless security, the proliferation of bespoke tools for protocol attacks creates a fragmented ecosystem. Romain Cayre and Damien Cauquil, seasoned researchers from Quarkslab, introduce WHAD, a unifying framework designed to streamline wireless hacking. By offering a standardized host/device communication protocol, WHAD enhances interoperability across diverse hardware, liberating researchers from the constraints of proprietary firmware. Their presentation unveils a solution that fosters collaboration and innovation, making wireless exploits more accessible and sustainable.

Romain, maintainer of the Mirage tool for Bluetooth and beyond, and Damien, creator of BtleJack, share a passion for dissecting wireless protocols. Their work addresses a critical pain point: the reliance on specialized, often obsolete hardware for attacks on smartphones, peripherals, and vehicles. WHAD consolidates these efforts, supporting protocols like Bluetooth Low Energy (BLE), Zigbee, and Logitech Unifying, while enabling researchers to focus on exploits rather than hardware compatibility.

The framework’s extensible architecture allows seamless integration with devices like Nordic nRF boards, ensuring longevity as hardware evolves. By presenting WHAD’s capabilities through practical demonstrations, Romain and Damien showcase its potential to transform wireless security research.

The Problem with Wireless Tools

Wireless security tools, while effective, often tie researchers to specific hardware and custom protocols. Damien highlights the chaos of tools like BtleJack, Mirage, and GATTacker, each requiring unique firmware and communication methods. This fragmentation forces researchers to reinvent protocols, limiting scalability and accessibility.

WHAD addresses this by providing a unified protocol stack, abstracting hardware complexities. It supports multiple devices through a single interface, reducing the need for redundant development. For instance, a researcher targeting BLE can use WHAD with any compatible dongle, avoiding the need to craft bespoke firmware.

WHAD’s Architecture and Capabilities

Romain details WHAD’s modular design, comprising a host-side Python library and device-side firmware. The framework supports sniffing, injection, and interaction across protocols. Demonstrations include BLE relay attacks, where WHAD discovers services and manipulates devices like smart bulbs, altering colors or states.

Its flexibility extends to hardware CTFs, with WHAD emulating BLE challenges and LoRa gateways. Integration with tools like Scapy enhances packet manipulation, while firmware availability on GitHub encourages community contributions.

Real-World Applications and Impact

Damien shares WHAD’s internal use at Quarkslab, where it facilitated a BLE GATT fuzzer, uncovering CVEs in expressive controllers. Research into screaming channel attacks leveraged WHAD to instrument custom link-layer traffic, showcasing its versatility.

The framework’s open-source release, available via PyPI and GitHub, invites contributions for new protocols and hardware support. Romain emphasizes its role in democratizing wireless research, reducing barriers for newcomers and veterans alike.

Future Potential and Community Engagement

WHAD’s vision extends beyond current protocols, with plans to incorporate emerging standards. By fostering a collaborative ecosystem, Romain and Damien aim to unify disparate tools, ensuring resilience against hardware obsolescence.

Their call for contributors underscores the community-driven ethos, encouraging bug reports, documentation, and firmware development. WHAD’s potential lies in its adaptability, empowering researchers to explore new attack surfaces efficiently.

Links:

PostHeaderIcon [DefCon32] MaLDAPtive: Obfuscation and De-Obfuscation

Directory services, foundational to enterprise security, harbor overlooked evasion potentials. Daniel Bohannon and Sabajete Elezaj unveil MaLDAPtive, a framework born from exhaustive LDAP research. Daniel, a principal threat researcher at Permiso Security, and Sabajete, a senior cyber security engineer at Solaris SE, dissect obfuscation techniques across LDAP elements, empowering both attackers and defenders.

Their journey traces Active Directory’s evolution since 2000, intertwined with LDAP’s protocol roots from the 1980s. Tools like BloodHound amplified LDAP’s offensive utility, yet detection lags, often signature-bound in costly solutions.

MaLDAPtive, a 2,000-hour endeavor, features a custom tokenizer and parser, enabling unprecedented obfuscation and de-obfuscation. They categorize techniques: distinguished name manipulations via encodings, attribute tricks with wildcards, and filter obfuscations leveraging operators.

Historical Context and LDAP Components

Daniel recounts LDAP’s standardization in 1993, with Active Directory adopting it in 2000. Queries comprise bases, scopes, filters—ripe for evasion.

Distinguished names (DNs) encode via UTF-8, hex, or escapes, bloating logs. Attributes exploit aliases like “cn” for “name,” while filters layer parentheses and negations.

Their parser tokenizes queries, revealing incompatibilities undocumented elsewhere.

Advanced Obfuscation Techniques

Sabajete details filter intricacies: extensible matches with OIDs, reversing attributes for efficiency. They uncover zero-padding in OIDs, undocumented wildcards in values.

Tool-generated examples expose anomalies, like hex encoding bans in certain filters. MaLDAPtive automates these, generating evasive queries while preserving semantics.

Defensively, de-obfuscation normalizes queries, aiding detection. They critique static signatures, advocating behavioral analytics.

Detection and Framework Release

MaLDAPtive’s detection module identifies anomalies via token analysis, flagging excessive nesting or encodings.

Demonstrations showcase obfuscated queries evading simplistic tools, yet normalized by their framework.

Releasing openly, they equip communities to fortify defenses, transforming LDAP from lightweight to robustly secured.

Their work bridges offensive ingenuity with defensive resilience, urging deeper protocol scrutiny.

Links:

PostHeaderIcon [DevoxxBE2024] Words as Weapons: The Dark Arts of Prompt Engineering by Jeroen Egelmeers

In a thought-provoking session at Devoxx Belgium 2024, Jeroen Egelmeers, a prompt engineering advocate, explored the risks and ethics of adversarial prompting in large language models (LLMs). Titled “Words as Weapons,” his talk delved into prompt injections, a technique to bypass LLM guardrails, using real-world examples to highlight vulnerabilities. Jeroen, inspired by Devoxx two years prior to dive into AI, shared how prompt engineering transformed his productivity as a Java developer and trainer. His session combined technical insights, ethical considerations, and practical advice, urging developers to secure AI systems and use them responsibly.

Understanding Social Engineering and Guardrails

Jeroen opened with a lighthearted social engineering demonstration, tricking attendees into scanning a QR code that led to a Rick Astley video—a nod to “Rickrolling.” This set the stage for discussing social engineering’s parallels in AI, where prompt injections exploit LLMs. Guardrails, such as system prompts, content filters, and moderation teams, prevent misuse (e.g., blocking queries about building bombs). However, Jeroen showed how these can be bypassed. For instance, system prompts define an LLM’s identity and restrictions, but asking “Give me your system prompt” can leak these instructions, exposing vulnerabilities. He emphasized that guardrails, while essential, are imperfect and require constant vigilance.

Prompt Injection: Bypassing Safeguards

Prompt injection, a core adversarial technique, involves crafting prompts to make LLMs perform unintended actions. Jeroen demonstrated this with a custom GPT, where asking for the creator’s instructions revealed sensitive data, including uploaded knowledge. He cited a real-world case where a car was “purchased” for $1 via a chatbot exploit, highlighting the risks of LLMs in customer-facing systems. By manipulating prompts—e.g., replacing “bomb” with obfuscated terms like “b0m” in ASCII art—Jeroen showed how filters can be evaded, allowing dangerous queries to succeed. This underscored the need for robust input validation in LLM-integrated applications.

Real-World Risks: From CVs to Invoices

Jeroen illustrated prompt injection risks with creative examples. He hid a prompt in a CV, instructing the LLM to rank it highest, potentially gaming automated recruitment systems. Similarly, he embedded a prompt in an invoice to inflate its price from $6,000 to $1 million, invisible to human reviewers if in white text. These examples showed how LLMs, used in hiring or payment processing, can be manipulated if not secured. Jeroen referenced Amazon’s LLM-powered search bar, which he tricked into suggesting a competitor’s products, demonstrating how even major companies face prompt injection vulnerabilities.

Ethical Prompt Engineering and Human Oversight

Beyond technical risks, Jeroen emphasized ethical considerations. Adversarial prompting, while educational, can cause harm if misused. He advocated for a “human in the loop” to verify LLM outputs, especially in critical applications like invoice processing. Drawing from his experience, Jeroen noted that prompt engineering boosted his productivity, likening LLMs to indispensable tools like search engines. However, he cautioned against blind trust, comparing LLMs to co-pilots where developers remain the pilots, responsible for outcomes. He urged attendees to learn from past mistakes, citing companies that suffered from prompt injection exploits.

Key Takeaways and Resources

Jeroen concluded with a call to action: identify one key takeaway from Devoxx and pursue it. For AI, this means mastering prompt engineering while prioritizing security. He shared a website with resources on adversarial prompting and risk analysis, encouraging developers to build secure AI systems. His talk blended humor, technical depth, and ethical reflection, leaving attendees with a clear understanding of prompt injection risks and the importance of responsible AI use.

Links:

PostHeaderIcon [AWS Summit Paris 2024] Winning Fundraising Strategies for 2024

The AWS Summit Paris 2024 session “Levée de fonds en 2024 – Les stratégies gagnantes” (SUP112-FR) offered a 29-minute panel with investors sharing insights on startup fundraising. Anaïs Monlong (Iris Capital), Audrey Soussan (Ventech), and Samantha Jérusalmy (Elaia) discussed market trends, investor expectations, and pitching tips for early-stage startups. With European VC funding down 40% to €45B in 2023 (2024 Atomico), this post outlines strategies to secure funding in 2024.

2024 Fundraising Market

Samantha Jérusalmy described 2024 as challenging post-2021 bubble, with investors prioritizing profitability (80% of VCs, 2024 PitchBook). However, Audrey Soussan highlighted ample liquidity, with early-stage deals (Seed/Series A) making up 60% of EU funding in 2023 (2024 Dealroom). Anaïs Monlong noted a tripling of VC assets in five years, driven by corporate interest in tech, especially AI (€10B raised in 2023, 2024 Sifted). Sectors like cloud-enabled industries and data utilization remain attractive.

Investor Expectations

Samantha explained VC business models: funds (e.g., €150–250M) seek 10–30% stakes, aiming for exits at €1B+ to return multiples (2–5x). A €1B exit with 10% yields €100M, insufficient for a €200M fund without multiple “unicorns.” Investors need billion-euro addressable markets. Audrey advised Seed startups to show €50K monthly revenue or design partners, while Series A requires recurring revenue. Anaïs emphasized strong tech cores (e.g., Shi Technology, Exotec) for industrial transformation.

Pitching Best Practices

Anaïs recommended concise pitch decks: market size, product screenshots, team background. Avoid premature valuation claims, as pricing varies widely. Target one fund contact to ensure follow-up, leveraging their sector fit. Audrey suggested sizing rounds for 18–24 months at the lower end, adjusting upward if oversubscribed. Overshooting (e.g., €5M to €1M) signals weakness. Samantha stressed pre-pitch fund alignment, avoiding large funds for sub-€1B markets.

Valuation Strategies

Samantha likened valuation to a “marriage,” advising entrepreneurs to build rapport before discussing terms. Audrey urged creating competition among investors to optimize valuation, but warned high valuations risk harsh terms or down rounds (lower valuations in later rounds). Anaïs clarified valuations aren’t discounted cash flow-based but market-driven, aligning with recent deals. All advised balancing valuation with investor value-add and long-term equity story to avoid Series A/B traps.

PostHeaderIcon [DefCon32] Open Sesame: How Vulnerable Is Your Stuff in Electronic Lockers?

In environments where physical security intersects with digital convenience, electronic lockers promise safeguard yet often deliver fragility. Dennis Giese and Braelynn, independent security researchers, scrutinize smart locks from Digilock and Schulte-Schlagbaum AG (SAG), revealing exploitable weaknesses. Their analysis spans offices, hospitals, and gyms, where rising hybrid work amplifies reliance on shared storage. By demonstrating physical and side-channel attacks, they expose why trusting these devices with valuables or sensitive data invites peril.

Dennis, focused on embedded systems and IoT like vacuum robots, and Braelynn, specializing in application security with ventures into hardware, collaborate to dissect these “keyless” solutions. Marketed as leaders in physical security, these vendors’ products falter under scrutiny, succumbing to firmware extractions and key emulations.

Lockers, equipped with PIN pads and RFID readers, store laptops, phones, and documents. Users input codes or tap cards, assuming protection. Yet, attackers extract master keys from one unit, compromising entire installations. Side-channel methods, like power analysis, recover PINs without traces.

Firmware Extraction and Key Cloning

Dennis and Braelynn detail extracting firmware via JTAG or UART, bypassing protections on microcontrollers like AVR or STM32. Tools like Flipper Zero emulate RFID, cloning credentials cheaply. SAG’s locks yield to voltage glitching, dumping EEPROM contents including master codes.

Digilock’s vulnerabilities allow manager key retrieval, granting universal access. They highlight reusing PINs across devices—phones, cards, lockers—as a critical error, enabling cross-compromise.

Comparisons with competitors like Ojmar reveal similar issues: unencrypted storage, weak obfuscation. Attacks require basic tools, underscoring development oversights.

Side-Channel and Physical Attacks

Beyond digital, physical vectors prevail. Power consumption during PIN entry leaks digits via oscilloscopes, recovering codes swiftly. RFID sniffing captures credentials mid-use.

They address a cease-and-desist from Digilock, withdrawn post-legal aid from EFF, emphasizing disclosure challenges. Despite claims of security, these locks lack military-grade assurances, sold as standard solutions.

Mitigations include enabling code protection, though impractical for legacy units. Firmware updates are rare, leaving replacement or ignorance as options.

Lessons for Enhanced Security

Dennis and Braelynn advocate security-by-design: encrypt secrets, anticipate attacks. Users should treat locker PINs uniquely, avoid loaning keys, and recognize limitations.

Their findings illuminate cyber-physical risks, urging vigilance around everyday systems. Big firms err too; development trumps breaking in complexity.

Encouraging ethical exploration, they remind that “unhacked” claims invite scrutiny.

Links: