[AWS Summit Paris 2024] Winning Fundraising Strategies for 2024
The AWS Summit Paris 2024 session “Levée de fonds en 2024 – Les stratégies gagnantes” (SUP112-FR) offered a 29-minute panel with investors sharing insights on startup fundraising. Anaïs Monlong (Iris Capital), Audrey Soussan (Ventech), and Samantha Jérusalmy (Elaia) discussed market trends, investor expectations, and pitching tips for early-stage startups. With European VC funding down 40% to €45B in 2023 (Atomico, 2024), this post outlines strategies to secure funding in 2024.
2024 Fundraising Market
Samantha Jérusalmy described 2024 as a challenging market in the wake of the 2021 bubble, with investors prioritizing profitability (80% of VCs, per PitchBook, 2024). However, Audrey Soussan highlighted ample liquidity, with early-stage deals (Seed/Series A) making up 60% of EU funding in 2023 (Dealroom, 2024). Anaïs Monlong noted a tripling of VC assets in five years, driven by corporate interest in tech, especially AI (€10B raised in 2023, per Sifted, 2024). Sectors like cloud-enabled industries and data utilization remain attractive.
Investor Expectations
Samantha explained VC business models: funds (e.g., €150–250M) seek 10–30% stakes, aiming for exits at €1B+ to return multiples (2–5x). A €1B exit with 10% yields €100M, insufficient for a €200M fund without multiple “unicorns.” Investors need billion-euro addressable markets. Audrey advised Seed startups to show €50K monthly revenue or design partners, while Series A requires recurring revenue. Anaïs emphasized strong tech cores (e.g., Shi Technology, Exotec) for industrial transformation.
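The fund economics Samantha describes can be checked with back-of-the-envelope arithmetic (figures taken from the paragraph above):

```python
# Why a single €1B exit is not enough for a €200M fund: the fund's
# 10% stake returns €100M, only half the fund, so a 2x return
# requires several comparable exits.
fund_size = 200_000_000          # €200M fund
stake = 0.10                     # 10% ownership at exit
exit_value = 1_000_000_000       # €1B exit

proceeds = stake * exit_value               # €100M back to the fund
fund_multiple = proceeds / fund_size        # 0.5x: one "unicorn" is not enough
exits_needed_for_2x = (2 * fund_size) / proceeds   # 4 such exits for a 2x fund
```

This is why the panelists insist on billion-euro addressable markets: smaller outcomes cannot move the needle for a fund of this size.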
Pitching Best Practices
Anaïs recommended concise pitch decks: market size, product screenshots, team background. Avoid premature valuation claims, as pricing varies widely. Target one fund contact to ensure follow-up, leveraging their sector fit. Audrey suggested sizing rounds for 18–24 months at the lower end, adjusting upward if oversubscribed. Overshooting (e.g., €5M to €1M) signals weakness. Samantha stressed pre-pitch fund alignment, avoiding large funds for sub-€1B markets.
Valuation Strategies
Samantha likened valuation to a “marriage,” advising entrepreneurs to build rapport before discussing terms. Audrey urged creating competition among investors to optimize valuation, but warned high valuations risk harsh terms or down rounds (lower valuations in later rounds). Anaïs clarified valuations aren’t discounted cash flow-based but market-driven, aligning with recent deals. All advised balancing valuation with investor value-add and long-term equity story to avoid Series A/B traps.
[DefCon32] Open Sesame: How Vulnerable Is Your Stuff in Electronic Lockers?
In environments where physical security intersects with digital convenience, electronic lockers promise safety yet often deliver fragility. Dennis Giese and Braelynn, independent security researchers, scrutinize smart locks from Digilock and Schulte-Schlagbaum AG (SAG), revealing exploitable weaknesses. Their analysis spans offices, hospitals, and gyms, where rising hybrid work amplifies reliance on shared storage. By demonstrating physical and side-channel attacks, they expose why trusting these devices with valuables or sensitive data invites peril.
Dennis, focused on embedded systems and IoT like vacuum robots, and Braelynn, specializing in application security with ventures into hardware, collaborate to dissect these “keyless” solutions. Marketed as leaders in physical security, these vendors’ products falter under scrutiny, succumbing to firmware extractions and key emulations.
Lockers, equipped with PIN pads and RFID readers, store laptops, phones, and documents. Users input codes or tap cards, assuming protection. Yet, attackers extract master keys from one unit, compromising entire installations. Side-channel methods, like power analysis, recover PINs without traces.
Firmware Extraction and Key Cloning
Dennis and Braelynn detail extracting firmware via JTAG or UART, bypassing protections on microcontrollers like AVR or STM32. Tools like Flipper Zero emulate RFID, cloning credentials cheaply. SAG’s locks yield to voltage glitching, dumping EEPROM contents including master codes.
Digilock’s vulnerabilities allow manager key retrieval, granting universal access. They highlight reusing PINs across devices—phones, cards, lockers—as a critical error, enabling cross-compromise.
Comparisons with competitors like Ojmar reveal similar issues: unencrypted storage, weak obfuscation. Attacks require basic tools, underscoring development oversights.
Side-Channel and Physical Attacks
Beyond digital, physical vectors prevail. Power consumption during PIN entry leaks digits via oscilloscopes, recovering codes swiftly. RFID sniffing captures credentials mid-use.
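To make the side-channel idea concrete, here is a toy simulation of the principle, not an attack on any real lock: assume the keypad's power draw scales with the digit pressed, so an attacker who records one trace sample per keypress can invert the leakage model. The leakage model and noise level are invented for illustration.

```python
# Toy power-analysis simulation: recover a PIN from simulated
# per-keypress power measurements. Purely illustrative.
import random
random.seed(42)

SECRET_PIN = [3, 1, 4, 1]

def measure(digit):
    # Assumed leakage model: 0.1 power units per digit value,
    # plus small measurement noise.
    return 0.1 * digit + random.gauss(0, 0.005)

# The attacker never sees SECRET_PIN, only the traces.
traces = [measure(d) for d in SECRET_PIN]
recovered = [round(t / 0.1) for t in traces]
```

Real attacks are harder (leakage is noisier and less linear), but the structure is the same: a physical observable correlated with a secret lets the secret be estimated statistically.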
They address a cease-and-desist from Digilock, withdrawn after legal aid from the EFF, emphasizing the challenges of responsible disclosure. Despite marketing claims, these locks carry no military-grade assurances; they are sold as standard commercial solutions.
Mitigations include enabling code protection, though impractical for legacy units. Firmware updates are rare, leaving replacement or ignorance as options.
Lessons for Enhanced Security
Dennis and Braelynn advocate security-by-design: encrypt secrets, anticipate attacks. Users should treat locker PINs uniquely, avoid loaning keys, and recognize limitations.
Their findings illuminate cyber-physical risks, urging vigilance around everyday systems. Even large firms err; building secure systems is far harder than breaking them.
Encouraging ethical exploration, they remind that “unhacked” claims invite scrutiny.
[DevoxxUK2024] How We Decide by Andrew Harmel-Law
Andrew Harmel-Law, a Tech Principal at Thoughtworks, delivered a profound session at DevoxxUK2024, dissecting the art and science of decision-making in software development. Drawing from his experience as a consultant and his work on a forthcoming book about software architecture, Andrew argues that decisions, both conscious and unconscious, form the backbone of software systems. His talk explores various decision-making approaches, their implications for modern, decentralized teams, and introduces the advice process as a novel framework for balancing speed, decentralization, and accountability.
The Anatomy of Decision-Making
Andrew begins by framing software architecture as the cumulative result of myriad decisions, from coding minutiae to strategic architectural choices. He introduces a refined model of decision-making comprising three stages: option making, decision taking, and decision sharing. Option making involves generating possible solutions, drawing on patterns, stakeholder needs, and past experiences. Decision taking, often the most scrutinized phase, requires selecting one option, inherently rejecting others, which Andrew describes as a “wicked problem” due to its complexity and lack of a perfect solution. Decision sharing ensures effective communication to implementers, a step frequently fumbled when architects and developers are disconnected.
Centralized Decision-Making Approaches
Andrew outlines three centralized decision-making models: autocratic, delegated, and consultative. In the autocratic approach, a single individual—often a chief architect—handles all stages, enabling rapid decisions but risking bottlenecks and poor sharing. Delegation involves the autocrat assigning decision-making to others, potentially improving outcomes by leveraging specialized expertise, though it remains centralized. The consultative approach sees the decision-maker seeking input from others but retaining ultimate authority, which can enhance decision quality but slows the process. Andrew emphasizes that while these methods can be swift, they concentrate power, limiting scalability in large organizations.
Decentralized Decision-Making Models
Transitioning to decentralized approaches, Andrew discusses consent, democratic, and consensus models. The consent model allows a single decision-maker to propose options, subject to veto by affected parties, shifting some power outward but risking gridlock. The democratic model, akin to Athenian direct democracy, involves voting on options, reducing the veto power of individuals but potentially marginalizing minority concerns. Consensus seeks universal agreement, maximizing inclusion but often stalling due to the pursuit of perfection. Andrew notes that decentralized models distribute power more widely, enhancing collaboration but sacrificing speed, particularly in consensus-driven processes.
The Advice Process: A Balanced Approach
To address the trade-offs between speed and decentralization, Andrew introduces the advice process, a framework where anyone can initiate and make decisions, provided they seek advice from affected parties and experts. Unlike permission, advice is non-binding, preserving the decision-maker’s autonomy while fostering trust and collaboration. This approach aligns with modern autonomous teams, allowing decisions to emerge organically without relying on a fixed authority. Andrew cites the Open Agile Architecture Framework, which supports this model by emphasizing documented accountability, such as through Architecture Decision Records (ADRs). The advice process minimizes unnecessary sharing, ensuring efficiency while empowering teams.
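Architecture Decision Records of the kind Andrew references are typically short markdown files kept alongside the code. A hypothetical sketch (the title, number, and teams are invented for illustration):

```markdown
# ADR-0007: Use the advice process for service-boundary decisions

## Status
Accepted

## Context
Teams were blocked waiting for a central architect to approve
every service split, slowing delivery.

## Decision
The owning team decides, after seeking (non-binding) advice from
the platform team and from affected consumers.

## Consequences
Decisions are faster; the advice received and the rationale are
recorded here, preserving accountability without central sign-off.
```

The key property, in Andrew's framing, is that the record captures who was consulted and why the option was chosen, so autonomy does not come at the cost of traceability.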
Navigating Power and Accountability
A recurring theme in Andrew’s talk is the distribution of power and accountability. He challenges the assumption that a single individual must always be accountable, advocating for a culture where teams can initiate decisions relevant to their context. By involving the right people at the right time, the advice process mitigates risks associated with uninformed decisions while avoiding the bottlenecks of centralized models. Andrew’s narrative underscores the need for explicit decision-making processes, encouraging organizations to cultivate trust and transparency to navigate the complexities of modern software development.
[PHPForumParis2023] Closing Keynote – AFUP Team
The AFUP Team closed Forum PHP 2023 with a heartfelt keynote, celebrating the event’s success and the PHP community’s vibrant spirit. Held at the Hotel New York – The Art of Marvel in Disneyland Paris, the keynote acknowledged the contributions of sponsors, speakers, and organizers. Highlighting upcoming initiatives like AFUP Day 2024, the team invited developers to engage with local meetups and contribute to the community, reinforcing the collaborative ethos that defines PHP events.
Celebrating Community and Collaboration
The AFUP Team expressed gratitude to the 29 sponsors, including major supporters like Yousign and Clever Cloud, whose backing made the event possible. They also celebrated the speakers who shared their expertise, fostering knowledge exchange across skill levels. The keynote emphasized the community’s welcoming atmosphere, with attendees praising the event’s diversity of topics, from beginner-friendly talks to advanced technical sessions. This inclusivity, the team noted, strengthens the PHP ecosystem.
Looking Ahead to AFUP Day 2024
Looking forward, the AFUP Team announced the sixth edition of AFUP Day, set for May 24, 2024, in Lille, Lyon, Nancy, and Poitiers. With tickets already available and a call for papers open until November 13, they encouraged both seasoned and new speakers to participate. The team highlighted their mentorship programs to support first-time presenters, ensuring diverse voices are heard. This vision for community-driven events underscored their commitment to accessibility and growth.
[SpringIO2024] Mind the Gap: Connecting High-Performance Systems at a Leading Crypto Exchange @ Spring I/O 2024
At Spring I/O 2024, Marcos Maia and Lars Werkman from Bitvavo, Europe’s leading cryptocurrency exchange, unveiled the architectural intricacies of their high-performance trading platform. Based in the Netherlands, Bitvavo processes thousands of transactions per second with sub-millisecond latency. Marcos and Lars detailed how they integrate ultra-low-latency systems with Spring Boot applications, offering a deep dive into their strategies for scalability and performance. Their talk, rich with technical insights, challenged conventional software practices, urging developers to rethink performance optimization.
Architecting for Ultra-Low Latency
Marcos opened by highlighting Bitvavo’s mission to enable seamless crypto trading for nearly two million customers. The exchange’s hot path, where orders are processed, demands microsecond response times. To achieve this, Bitvavo employs the Aeron framework, an open-source tool designed for high-performance messaging. By using memory-mapped files, UDP-based communication, and lock-free algorithms, the platform minimizes latency. Marcos explained how they bypass traditional databases, opting for in-memory processing with eventual disk synchronization, ensuring deterministic outcomes critical for trading fairness.
Optimizing the Hot Path
The hot path’s design is uncompromising, as Marcos elaborated. Bitvavo avoids garbage collection by preallocating and reusing objects, ensuring predictable memory usage. Single-threaded processing, counterintuitive to many, leverages CPU caches for nanosecond-level performance. The platform uses distributed state machines, guaranteeing consistent outputs across executions. Lars complemented this by discussing inter-process communication via shared memory and DPDK for kernel-bypassing network operations. These techniques, rooted in decades of trading system expertise, enable Bitvavo to handle peak loads of 30,000 transactions per second.
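The preallocate-and-reuse pattern Marcos describes can be sketched in a few lines. Bitvavo does this in Java on the JVM; the Python below is only an illustration of the idea that the hot path should never allocate (and so never trigger the garbage collector):

```python
# Object pool sketch: allocate a fixed set of order objects up
# front and recycle them, so steady-state processing allocates
# nothing. Field names are invented for illustration.
class OrderPool:
    def __init__(self, size: int):
        self._free = [{"id": None, "price": 0.0, "qty": 0} for _ in range(size)]

    def acquire(self) -> dict:
        return self._free.pop()        # no allocation on the hot path

    def release(self, order: dict) -> None:
        order["id"] = None             # scrub state before reuse
        self._free.append(order)

pool = OrderPool(1024)
order = pool.acquire()
order["id"], order["price"], order["qty"] = "o-1", 101.5, 3
pool.release(order)
```

Combined with single-threaded processing, this keeps memory usage and latency predictable, which is exactly the determinism the hot path requires.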
Bridging with Spring Boot
Integrating high-performance systems with the broader organization poses significant challenges. Marcos detailed the “cold sink,” a Spring Boot application that consumes data from the hot path’s Aeron archive, feeding it into Kafka and MySQL for downstream processing. By batching requests and using object pools, the cold sink minimizes garbage collection, maintaining performance under heavy loads. Fine-tuning batch sizes and applying backpressure ensure the system keeps pace with the hot path’s output, preventing data lags in Bitvavo’s 24/7 operations.
Enhancing JWT Signing Performance
Lars concluded with a case study on optimizing JWT token signing, a “warm path” process targeting sub-millisecond latency. Initially, their RSA-based signing took 8.8 milliseconds, far from the goal. By switching to symmetric HMAC signing and adopting Azul Prime’s JVM, they achieved a 30x performance boost, reaching 260-280 microsecond response times. Lars emphasized the importance of benchmarking with JMH and leveraging Azul’s features like Falcon JIT compiler for stable throughput. This optimization underscores Bitvavo’s commitment to performance across all system layers.
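The asymmetric-to-symmetric switch Lars describes is easy to reproduce in miniature. A minimal HS256 (HMAC-SHA256) JWT signer using only the standard library, as a sketch of the mechanism rather than Bitvavo's production code:

```python
# Minimal HS256 JWT signing: HMAC is orders of magnitude cheaper
# than RSA because it is a single keyed hash, not modular
# exponentiation with a private key.
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    # JWT uses unpadded URL-safe base64.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_hs256(payload: dict, key: bytes) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    signing_input = f"{header}.{body}".encode()
    sig = b64url(hmac.new(key, signing_input, hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"
```

The trade-off is that HMAC uses a shared secret, so every verifier can also sign; that is acceptable inside a trusted backend, whereas RSA's public-key verification is needed when verifiers are untrusted.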
[DefCon32] OH MY DC: Abusing OIDC All the Way to Your Cloud
As organizations migrate from static credentials to dynamic authentication protocols, overlooked intricacies in implementations create fertile ground for exploitation. Aviad Hahami, a security researcher at Palo Alto Networks, demystifies OpenID Connect (OIDC) in the context of continuous integration and deployment (CI/CD) workflows. His examination reveals vulnerabilities stemming from under-configurations and misconfigurations, enabling unauthorized access to cloud environments. By alternating perspectives among users, identity providers, and CI vendors, Aviad illustrates attack vectors that compromise sensitive resources.
Aviad begins with foundational concepts, clarifying OIDC’s role in secure, short-lived token exchanges. In CI/CD scenarios, tools like GitHub Actions request tokens from identity providers (IdPs) such as GitHub’s OIDC provider. These tokens, containing claims like repository names and commit SHAs, are validated by workload identity federations (WIFs) in clouds like AWS or Azure. Proper configuration ensures tokens originate from trusted sources, but lapses invite abuse.
Common pitfalls include wildcard allowances in policies, permitting access from unintended repositories. Aviad demonstrates how fork pull requests (PRs) exploit these, granting cloud roles without maintainer approval. Such “no configs” scenarios, where minimal effort yields high rewards, underscore the need for precise claim validations.
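The wildcard pitfall can be shown with GitHub-style OIDC subject claims and ordinary glob matching. The claim values below are invented for illustration; the point is only how much a trailing `*` matches:

```python
# A cloud-side trust condition typically matches the token's "sub"
# claim against a pattern. A broad wildcard also matches subjects
# minted for fork pull requests.
from fnmatch import fnmatch

strict = "repo:myorg/deploy:ref:refs/heads/main"   # one repo, one branch
lax = "repo:myorg/*"                                # any repo, any ref, any event

pr_token_sub = "repo:myorg/website:pull_request"    # token from a fork PR

assert not fnmatch(pr_token_sub, strict)   # strict policy rejects it
assert fnmatch(pr_token_sub, lax)          # wildcard policy lets it in
```

Because `*` matches across `:` and `/` separators, a pattern meant to say "my organization's repos" also says "any ref and any trigger event", which is precisely the under-configuration Aviad exploits.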
Advanced Configurations and Misconfigurations
Delving deeper, Aviad explores “advanced configs” that inadvertently become misconfigurations. Features like GitHub’s ID token requests for forks introduce risks if not explicitly enabled. He recounts discovering a vulnerability in CircleCI, where reusable configurations allowed token issuance to forks, bypassing protections.
Shifting to the IdP viewpoint, Aviad discloses a real-world flaw in a popular CI vendor, permitting token claims from any repository within an organization. This enabled cross-project escalations, compromising clouds via simple PRs. Reported responsibly, the issue prompted fixes, emphasizing the cascading effects of IdP errors.
He references Tinder’s research on similar WIF misconfigurations, reinforcing that even sophisticated setups falter without rigorous claim scrutiny.
Exploitation Through CI Vendors
Aviad pivots to CI vendor responsibilities, highlighting how their token issuance logic influences downstream security. In CircleCI’s case, a bug allowed organization-wide token claims, exposing multiple projects. By requesting tokens in fork contexts, attackers could satisfy broad WIF conditions, accessing clouds undetected.
Remediation involved opt-in mechanisms for fork tokens, mirroring GitHub’s approach. Aviad stresses learning claim origins per IdP, avoiding wildcards, and hardening pipelines to prevent trivial breaches.
His tool for auditing Azure CLI configurations exemplifies proactive defense, aiding in identifying exposed resources.
Broader Implications for Secure Authentication
Aviad’s insights extend beyond CI/CD, advocating holistic OIDC understanding to thwart supply chain attacks. By dissecting entity interactions—users, IdPs, and clouds—he equips practitioners to craft resilient policies.
Encouraging bounty hunters to probe these vectors, he underscores OIDC’s maturity yet persistent gaps. Ultimately, robust configurations transform OIDC from vulnerability to asset, safeguarding digital infrastructures.
[DevoxxGR2024] Meet Your New AI Best Friend: LangChain at Devoxx Greece 2024 by Henry Lagarde
At Devoxx Greece 2024, Henry Lagarde, a senior software engineer at Criteo, introduced audiences to LangChain, a versatile framework for building AI-powered applications. With infectious enthusiasm and live demonstrations, Henry showcased how LangChain simplifies interactions with large language models (LLMs), enabling developers to create context-aware, reasoning-driven tools. His talk, rooted in his experience at Criteo, a leader in retargeting and retail media, highlighted LangChain’s composability and community-driven evolution, offering a practical guide for AI integration.
LangChain’s Ecosystem and Composability
Henry began by defining LangChain as a framework for building context-aware reasoning applications. Unlike traditional LLM integrations, LangChain provides modular components—prompt templates, LLM abstractions, vector stores, text splitters, and document loaders—that integrate with external services rather than hosting them. This composability allows developers to switch LLMs seamlessly, adapting to changes in cost or performance without rewriting code. Henry emphasized LangChain’s open-source roots, launched in late 2022, and its rapid growth, with versions in Python, TypeScript, Java, and more, earning it the 2023 New Tool of the Year award.
The ecosystem extends beyond core modules to include LangServe for REST API deployment, LangSmith for monitoring, and a community hub for sharing prompts and agents. This holistic approach supports developers from prototyping to production, making LangChain a cornerstone for AI engineering.
Building a Chat Application
In a live demo, Henry showcased LangChain’s simplicity by recreating a ChatGPT-like application in under 10 lines of Python code. He instantiated an OpenAI client using GPT-3.5 Turbo, implemented chat history for context awareness, and used prompt templates to define system and human messages. By combining these components, he enabled streaming responses, mimicking ChatGPT’s real-time output without the $20 monthly subscription. This demonstration highlighted LangChain’s ability to handle memory, input/output formatting, and LLM interactions with minimal effort, empowering developers to build cost-effective alternatives.
Henry noted that LangChain’s abstractions, such as strong typing and output parsing, eliminate manual prompt engineering, ensuring robust integrations even when APIs change. The demo underscored the framework’s accessibility, inviting developers to experiment with its capabilities.
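The two ideas Henry's demo leans on, prompt templates and composable chains, can be sketched in pure Python. This is deliberately not LangChain's real API (which evolves quickly); it only shows why the pattern keeps components swappable:

```python
# Pure-Python sketch of the template + chain pattern. Any step with
# an .invoke() method can be swapped out, which is the point of
# LangChain's composability.
class PromptTemplate:
    def __init__(self, template: str):
        self.template = template

    def invoke(self, variables: dict) -> str:
        return self.template.format(**variables)

class FakeLLM:
    # Stand-in for a real model client; invented for illustration.
    def invoke(self, prompt: str) -> str:
        return f"[model answer to: {prompt}]"

def chain(*steps):
    def run(value):
        for step in steps:
            value = step.invoke(value)
        return value
    return run

pipeline = chain(PromptTemplate("You are helpful. Q: {question}"), FakeLLM())
```

Swapping `FakeLLM` for another client changes the model without touching the template or the chain, which is the property that lets LangChain users switch LLMs as cost or performance dictates.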
Creating an AI Agent for PowerPoint Generation
Henry’s second demo illustrated LangChain’s advanced features by building an AI agent to generate PowerPoint presentations. Using TypeScript, he configured a system prompt from LangSmith’s community hub, defining the agent’s tasks: researching a topic via the Serper API and generating a structured PowerPoint. He defined tools with Zod for runtime type checking, ensuring consistent outputs, and integrated callbacks for UI tracing and monitoring.
The agent, powered by Anthropic’s Claude model, performed internet research on Google Cloud, compiled findings, and generated a presentation with sourced information. Despite minor delays, the demo showcased LangChain’s ability to orchestrate complex workflows, combining research, data processing, and content creation. Henry’s use of LangSmith for prompt optimization and monitoring highlighted the framework’s production-ready capabilities.
Community and Cautions
Henry emphasized LangChain’s vibrant community, which drives its multi-language support and rapid evolution. He encouraged attendees to contribute, noting the framework’s open-source ethos and resources like GitHub for further exploration. However, he cautioned against over-reliance on LLMs, citing their occasional laziness or errors, as seen in ChatGPT’s simplistic responses. LangChain, he argued, augments developer workflows but requires careful integration to ensure reliability in production environments.
His vision for LangChain is one of empowerment, enabling developers to enhance applications incrementally while maintaining control over AI-driven processes. By sharing his demo code on GitHub, Henry invited attendees to experiment and contribute to LangChain’s growth.
Conclusion
Henry’s presentation at Devoxx Greece 2024 was a compelling introduction to LangChain’s potential. Through practical demos and insightful commentary, he demonstrated how the framework simplifies AI development, from basic chat applications to sophisticated agents. His emphasis on composability, community, and cautious integration resonated with developers eager to explore AI. As LangChain continues to evolve, Henry’s talk serves as a blueprint for harnessing its capabilities in real-world applications.
[KCD UK 2024] Deep Dive into Kubernetes Runtime Security
Saeid Bostandoust, founder of CubeDemi.io, delivered an in-depth presentation at KCDUK2024 on Kubernetes runtime security, focusing on tools and techniques to secure containers during execution. As a CNCF project contributor, Saeid explored Linux security features like Linux Capabilities, SELinux, AppArmor, Seccomp-BPF, and KubeArmor, providing a comprehensive overview of how these can be applied in Kubernetes to mitigate zero-day attacks and enforce policies. His talk emphasized practical implementation, observability, and policy enforcement, aligning with KCDUK2024’s focus on securing cloud-native environments.
Understanding Runtime Security
Saeid defined runtime security as protecting applications during execution, contrasting it with pre-runtime measures like static code analysis. Runtime security focuses on mitigating zero-day attacks and malicious behavior through real-time intrusion detection, process isolation, policy enforcement, and monitoring. Linux offers over 30 security mechanisms, including SELinux, AppArmor, Linux Capabilities, Seccomp-BPF, and namespaces, alongside kernel drivers to counter threats like Meltdown and Spectre. Saeid focused on well-known features, explaining their roles and Kubernetes integration.
Linux Capabilities: Historically, processes were either privileged (with full root permissions) or unprivileged, leading to vulnerabilities like privilege escalation via commands like ping. Linux Capabilities, introduced to granularly assign permissions, allow processes to perform specific actions (e.g., opening raw sockets for ping) without full root privileges. In Kubernetes, capabilities can be configured in pod manifests to drop unnecessary permissions, enhancing security even for root-run containers.
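The capability-dropping pattern above translates into a pod manifest like the following sketch (image and names are placeholders): drop everything, then add back only what the workload needs, here NET_RAW for raw sockets as in the ping example.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: minimal-caps-demo
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      securityContext:
        capabilities:
          drop: ["ALL"]
          add: ["NET_RAW"]
```

Starting from `drop: ["ALL"]` and adding back selectively is safer than dropping a known-bad list, since new capabilities default to denied.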
Seccomp-BPF: Seccomp (Secure Computing) restricts system calls a process can make. Originally limited to basic calls (read, write, exit), Seccomp-BPF extends this with customizable profiles. In Kubernetes, a Seccomp-BPF profile can be defined in a JSON file and applied via a pod’s security context, terminating processes that attempt unauthorized system calls. Saeid demonstrated a restrictive profile that limits a container to basic operations, preventing it from running if additional system calls are needed.
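A restrictive profile of the kind Saeid demonstrated looks roughly like this (the exact syscall list depends on the workload; this one is illustrative):

```json
{
  "defaultAction": "SCMP_ACT_ERRNO",
  "syscalls": [
    {
      "names": ["read", "write", "close", "exit", "exit_group", "futex"],
      "action": "SCMP_ACT_ALLOW"
    }
  ]
}
```

Placed on the node (e.g., under the kubelet's seccomp directory), it is referenced from the pod's security context with `seccompProfile: {type: Localhost, localhostProfile: <path>}`; any syscall outside the allow list then fails with an error, and a process that genuinely needs more calls simply will not run under this profile.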
LSM Modules (AppArmor, SELinux, BPF-LSM): Linux Security Modules (LSM) provide hooks to intercept operations, such as file access or network communication. AppArmor uses path-based profiles, while SELinux employs label-based policies. BPF-LSM, a newer option, allows dynamic policy injection via eBPF code, offering flexibility without requiring application restarts. Saeid noted that BPF-LSM, available in kernel versions 5.7 and later, supports stacking with other LSMs, enhancing Kubernetes security.
KubeArmor: Simplifying Policy Enforcement
KubeArmor, a CNCF project, simplifies runtime security by allowing users to define policies via Kubernetes Custom Resource Definitions (CRDs). These policies are translated into AppArmor, SELinux, or BPF-LSM profiles, depending on the worker node’s LSM. KubeArmor addresses the challenge of syncing profiles across cluster nodes, automating deployment and updates. It uses eBPF for observability, monitoring system calls and generating telemetry for tools like Prometheus and Elasticsearch. Saeid showcased KubeArmor’s architecture, including a daemon set with an init container for compiling eBPF code and a relay server for aggregating logs and alerts.
An example policy demonstrated KubeArmor denying access to sensitive files (e.g., /etc/passwd, /etc/shadow) and commands (e.g., apt, apt-get), with logs showing enforcement details. Unlike manual AppArmor or SELinux profiles, which are complex and hard to scale, KubeArmor’s declarative approach and default deny policies simplify securing containers, preventing access to dangerous assets like /proc mounts.
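A policy along the lines of the talk's example can be sketched as the following CRD; field names follow KubeArmor's published schema, but verify against the version you deploy:

```yaml
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: block-sensitive-access
  namespace: default
spec:
  selector:
    matchLabels:
      app: demo        # placeholder label for illustration
  file:
    matchPaths:
      - path: /etc/passwd
      - path: /etc/shadow
  process:
    matchPaths:
      - path: /usr/bin/apt
      - path: /usr/bin/apt-get
  action: Block
```

KubeArmor compiles this single declarative resource into AppArmor, SELinux, or BPF-LSM enforcement on each node, which is exactly the profile-syncing burden it removes.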
Practical Implementation and Community Engagement
Saeid provided practical examples, such as configuring a pod to drop all capabilities except those needed, applying a Seccomp-BPF profile to restrict system calls, and using KubeArmor to enforce file access policies. He highlighted KubeArmor’s integration with tools like OPA Gatekeeper to block unauthorized commands (e.g., kubectl exec) when BPF-LSM is unavailable. For further learning, Saeid offered a 50% discount on CubeDemi.io’s container security workshop, encouraging KCDUK2024 attendees to deepen their Kubernetes security expertise.
Hashtags: #Kubernetes #RuntimeSecurity #KubeArmor #CNCF #LinuxSecurity #SaeidBostandoust #KCDUK2024
[DefCon32] On Your Ocean’s 11 Team, I’m the AI Guy (Technically Girl)
Blending the allure of high-stakes gambles with cutting-edge threats, Harriet Farlow, an AI security specialist, navigates the intersection of adversarial machine learning and casino operations. Targeting Canberra Casino, she exposes frailties in emerging AI integrations for surveillance and monitoring. Her exploits disrupt facial recognition, evade detection, and manipulate gameplay, illustrating broader perils in sectors reliant on such systems.
Harriet’s background spans physics, data science, and government intelligence, culminating in founding ML Security Labs. Her focus: deceiving AI to reveal weaknesses, akin to cyber intrusions but tailored to models’ statistical natures.
Casinos, epitomizing surveillance-heavy environments, increasingly adopt AI for identifying threats and optimizing play. Canberra, though modest, mirrors global trends where a few providers dominate, ripe for widespread impacts.
Adversarial attacks perturb inputs subtly, fooling models without human notice. Harriet employs techniques like fast gradient sign methods, crafting perturbations that reduce classification confidence.
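The fast gradient sign method is easiest to see on a linear model, where the gradient of the score with respect to the input is just the weight vector. A toy illustration (hand-built classifier and values, not Harriet's actual models):

```python
# FGSM on a toy linear classifier: each input coordinate moves by
# at most eps, yet the classification flips.
def sign(v):
    return (v > 0) - (v < 0)

w, b = [1.0, -2.0, 0.5], 0.1     # classifier: w.x + b > 0 => "match"
x = [0.9, 0.2, 0.4]              # an input embedding (made up)

score = sum(wi * xi for wi, xi in zip(w, x)) + b   # positive: matched

# For a linear model the input gradient is w, so nudging each
# coordinate by eps against sign(w) maximally lowers the score.
eps = 0.3
x_adv = [xi - eps * sign(wi) for xi, wi in zip(x, w)]
adv_score = sum(wi * xi for wi, xi in zip(w, x_adv)) + b   # now negative
```

Deep networks are not linear, but FGSM applies the same one-step logic using the network's actual gradient, which is why tiny, human-imperceptible perturbations can collapse a model's confidence.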
Targeting Facial Recognition
Facial systems, crucial for barring excluded patrons, succumb to perturbations. Harriet generates adversarial examples via libraries like Foolbox, adding noise that misclassifies faces.
Tests show a 40.4% success rate in evading matches, and practical adaptations push bypasses toward consistency. The effect is the AI analogue of a denial of service, undermining the system’s reliability.
Broader implications span medical diagnostics to autonomous navigation, where minor alterations yield catastrophic errors.
Evading Surveillance and Gameplay Monitoring
Surveillance AI detects anomalies; Harriet’s perturbations obscure actions from detection, mirroring exploits seen in the wild.
Gameplay AI monitors for advantages; adversarial inputs confuse chip recognition or behavior analysis, enabling undetected strategies.
Interviews with casino personnel reveal heavy reliance on human oversight, despite AI promises. Only 8% of surveyed organizations secure AI effectively, versus 94% using it.
Lessons from the Inflection Point
Casinos transition to AI amid regulatory voids, amplifying risks. Harriet advocates integrating cyber lessons: robust testing beyond accuracy, incorporating security metrics.
Her findings stress governance: people and processes remain vital, yet overlooked. As societies embrace AI surveillance, vulnerabilities threaten equity and safety.
Harriet’s work urges cross-disciplinary approaches, blending cyber expertise with AI defenses to mitigate emerging dangers.
[KotlinConf2024] Hacking Sony Cameras with Kotlin
At KotlinConf2024, Rahul Ravikumar, a Google software engineer, shared his adventure reverse-engineering Sony’s Bluetooth Low Energy (BLE) protocol to build a Kotlin Multiplatform (KMP) remote camera app. Frustrated by Sony’s bloated apps—Imaging Edge Mobile (1.9 stars) and Creators’ App (3.3 stars)—Rahul crafted a lean solution for his Sony Alpha a7r Mark 5, focusing on remote control. Using Compose Multiplatform for desktop and mobile, and Sony’s C SDK via cinterop, he demonstrated how KMP enables cross-platform innovation. His live demo, taking a photo with a single button press, thrilled the audience and showcased BLE’s potential for fun and profit.
Reverse-Engineering Sony’s BLE Protocol
Rahul’s journey began with Sony’s underwhelming app ecosystem, prompting him to reverse-engineer the camera’s undocumented BLE protocol. BLE’s Generic Access Profile (GAP) handles device discovery, with the camera (peripheral) advertising its presence and the phone (central) connecting. The Generic Attribute Profile (GATT) manages commands, using 16-bit UUIDs for services like Sony’s remote control (FF01 for commands, FF02 for notifications). Unable to use Android’s HCI Snoop logs due to Sony’s Wi-Fi Direct reliance, Rahul employed a USB BLE sniffer and Wireshark to capture GATT traffic. He identified Sony’s company ID (0x02D01) and camera marker (0x03000) in advertising packets. Key operations—reset (0x0106), focus (0x0107), and capture (0x0109)—form a state machine, with notifications (e.g., 0x023F) confirming actions. This meticulous process, decoding hexadecimal payloads, enabled Rahul to control the camera programmatically.
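The opcodes and state machine above can be sketched as follows. The app itself is Kotlin; this Python sketch only illustrates the protocol logic, and the two-byte big-endian framing is an assumption for illustration:

```python
# Sony BLE remote-control sketch: 16-bit opcodes written to the
# FF01 characteristic; the focus confirmation arrives as a
# notification on FF02 (values taken from the talk).
RESET, FOCUS, CAPTURE = 0x0106, 0x0107, 0x0109
FOCUS_ACQUIRED = 0x023F

def frame(opcode: int) -> bytes:
    # Assumed framing: opcode as two big-endian bytes.
    return opcode.to_bytes(2, "big")

def capture_photo(write, wait_notification) -> bool:
    """Drive the state machine: reset -> focus -> (ack) -> capture -> reset."""
    write(frame(RESET))
    write(frame(FOCUS))
    if wait_notification() == FOCUS_ACQUIRED:
        write(frame(CAPTURE))
        write(frame(RESET))
        return True
    return False
```

In the real app, `write` is a GATT write to FF01 and `wait_notification` suspends on a Kotlin Flow of FF02 notifications; the sequencing is the part Rahul had to recover from Wireshark captures.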
Building a KMP Remote Camera App
With the protocol cracked, Rahul built a KMP app using Compose Multiplatform, targeting Android and desktop. The app’s BLE scanner filters for Sony’s manufacturer data (0x03000), ignoring irrelevant metadata like model codes. Connection logic uses Kotlin Flows to monitor peripheral state, ensuring seamless reconnections. Capturing a photo involves sending reset and focus commands to FF01, awaiting focus confirmation on FF02, then triggering capture and shutter reset. For advanced features, Rahul integrated Sony’s C SDK via cinterop, navigating its complexities to access functions like interval shooting. His live demo, despite an initially powered-off camera, succeeded when the camera advertised, and a button click took a photo, earning audience cheers. The app’s simplicity contrasts Sony’s feature-heavy apps, proving KMP’s power for cross-platform development. Rahul’s GitHub repository offers the code, inviting developers to explore BLE and KMP for their own projects.
Hashtags: #KotlinMultiplatform #BluetoothLE