[DevoxxBE2025] Robotics and GraalVM Native Libraries

Lecturer

Florian Enner is a co-founder and chief software engineer at HEBI Robotics, a company focused on modular robotic systems for research and industrial applications. Holding a Master’s degree in Robotics from Carnegie Mellon University, he has contributed to advancements in real-time control software and hardware integration, with publications in venues like the IEEE International Conference on Robotics and Automation.

Abstract

This article explores the application of Java in robotics development, with particular emphasis on real-time control and the emerging role of GraalVM’s native shared libraries as a potential substitute for portions of C++ codebases. It covers core concepts in modular robotic hardware and software design, framed by HEBI Robotics’ efforts to create adaptable platforms for autonomous and inspection tasks. By examining demonstrations of robotic assemblies and the compilation workflow, it highlights approaches to platform independence, execution speed, and safety protocols. The discussion assesses the constraints of embedded computing, the consequences for workflow efficiency and system scalability, and the prospects for migrating established code toward greater development agility.

Innovations in Modular Robotic Components

HEBI Robotics develops interchangeable components that serve as sophisticated building blocks for assembling tailored robotic configurations, comparable to an advanced construction set. These include actuators, cameras, mobile bases, and power supplies, engineered to support rapid prototyping across sectors such as industrial inspection and autonomous navigation. The breakthrough lies in the actuators’ unified architecture, which merges the motor, position sensors, and control electronics into compact modules that can be daisy-chained, minimizing cabling and reducing complexity in articulated assemblies.

Situated within the broader robotics landscape, this approach counters the fragmentation of a market where off-the-shelf options frequently fall short of bespoke requirements. Through standardized yet modifiable parts, HEBI supports innovation in academic and commercial settings, allowing practitioners to focus on high-level algorithms rather than low-level assembly. For example, the actuators support real-time control at 1 kHz, with compensation for voltage fluctuations in battery-powered scenarios and command timeouts that prevent runaway motion.

On the software side, the stack spans multiple programming environments, with APIs in Java, C++, Python, and MATLAB to broaden accessibility. Demonstrations show mechanisms such as multi-legged walkers and wheeled platforms driven over wired or wireless connections, underscoring the architecture’s robustness in practical deployments. The result is a lower barrier to entry for research groups, faster refinement cycles, and safer deployments, especially for newcomers and teaching settings.

Java for Real-Time Robotic Control

Deploying Java in robotics challenges traditional assumptions about its fitness for time-critical tasks, which have historically been the domain of lower-level languages. At HEBI, Java drives control loops on embedded Linux platforms, capitalizing on its comprehensive ecosystem for productivity while attaining consistent timing. Central to this is managing garbage-collection pauses through careful allocation strategies and thread-local variables for per-thread state.
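The allocation discipline described above can be sketched with a small, purely illustrative example (not HEBI’s actual code): a control step that allocates no objects per cycle, so the garbage collector stays quiet, and keeps per-thread scratch space in a ThreadLocal.

```java
// Illustrative sketch (not HEBI's actual code): one control cycle that
// performs zero per-cycle allocation, keeping GC pauses out of the loop.
public class ControlLoop {
    // Preallocated, reused buffers: no new objects inside the loop.
    final double[] feedback = new double[3];
    final double[] command = new double[3];

    // Thread-local scratch array: per-thread state without locking or sharing.
    private static final ThreadLocal<double[]> SCRATCH =
            ThreadLocal.withInitial(() -> new double[3]);

    /** One cycle of a simple proportional controller: command = gain * feedback. */
    public void step(double gain) {
        double[] scratch = SCRATCH.get();
        for (int i = 0; i < feedback.length; i++) {
            scratch[i] = gain * feedback[i];
            command[i] = scratch[i];
        }
    }
}
```

Run at 1 kHz, a loop body like this gives the collector essentially nothing to reclaim, which is what makes consistent timing achievable on a managed runtime.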

The API hides hardware details, enabling Java applications to dispatch commands and obtain feedback promptly. A minimal Java program can coordinate a robotic arm’s motion:

import com.hebi.robotics.*; // illustrative package name; consult HEBI's docs for the actual API

public class LimbRegulation {
    public static void main(String[] args) {
        // Discover the arm's modules on the network and form a control group
        ModuleSet components = ModuleSet.fromDetection("limb");
        Group assembly = components.formAssembly();
        // Build a command with target joint positions (in radians) and send it
        Directive dir = Directive.generate();
        dir.assignPlacement(new double[]{0.0, Math.PI / 2, 0.0});
        assembly.transmitDirective(dir);
    }
}

This program discovers the modules, organizes them into a control group, and issues position commands. Safety is incorporated through command lifetimes: if no renewal arrives within a designated interval (e.g., 100 ms), the module shuts down, safeguarding against runaway motion.
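The command-lifetime mechanism can be illustrated with a small, hypothetical watchdog. The class and timings here are assumptions for illustration, not part of the HEBI API:

```java
// Hypothetical sketch of a command-lifetime watchdog: if no renewal arrives
// within the timeout window, the module should stop, preventing runaway motion.
public class CommandWatchdog {
    private final long timeoutMs;
    private long lastRenewalMs;

    public CommandWatchdog(long timeoutMs, long nowMs) {
        this.timeoutMs = timeoutMs;
        this.lastRenewalMs = nowMs;
    }

    /** Call whenever a fresh command is received. */
    public void renew(long nowMs) {
        lastRenewalMs = nowMs;
    }

    /** True when the safety shutdown should trigger (no renewal within timeout). */
    public boolean shouldStop(long nowMs) {
        return nowMs - lastRenewalMs > timeoutMs;
    }
}
```

With a 100 ms timeout, a control process that hangs or loses connectivity stops commanding the hardware within a tenth of a second.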

Placed in perspective, this strategy diverges from C++’s prevalence on constrained devices, trading it for Java’s strengths in clarity and rapid iteration. Measurements indicate Java matching C++ latencies in control loops, with the slight overheads mitigated by techniques such as ahead-of-time compilation. The ramifications extend to team formation: Java’s accessibility draws varied expertise, hastening project timelines while upholding dependability.

GraalVM Native Image for Shared Libraries

GraalVM’s Native Image compiles Java code into standalone executables or shared libraries, offering a pathway to supplant performance-critical C++ segments. At HEBI, this is being investigated for shared libraries, compiling Java logic into .so files invokable from C++.

The procedure entails configuring GraalVM for reflection and resources, then compiling:

native-image --shared -jar mymodule.jar -H:Name=mymodule

This yields a shared library with exported entry points alongside a generated C header. A basic illustration compiles a Java class with a method exposed for C++ callers:

import org.graalvm.nativeimage.IsolateThread;
import org.graalvm.nativeimage.c.function.CEntryPoint;

public class Conference {
    // @CEntryPoint exposes the method as a C-callable symbol in the shared library
    @CEntryPoint(name = "sum")
    public static int sum(IsolateThread thread, int first, int second) {
        return first + second;
    }
}

Compiled into libconference.so, it is callable from C++ through the generated header. Demonstrations confirmed successful runs, with a “Greetings Conference” message printed from Java-derived logic.

Situated within robotics’ demand for low-latency components, this bridges the two languages, permitting Java for the logic and C++ at the boundaries. Performance assessments show near-native speeds, with startup advantages over a conventional JVM. The ramifications include streamlined upkeep: Java’s memory safety reduces defects in control code, while native compilation guarantees compatibility with existing C++ frameworks.

Performance Analysis and Practical Demonstrations

Performance benchmarks compare GraalVM-built libraries to their C++ counterparts: in control loops, latencies are equivalent, with Java’s garbage collection tuned for predictability. Practical demonstrations include snake-like inspection robots traversing pipes, with trajectory planning handled in Java.
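Such loop-latency comparisons can be made with a simple measurement harness. The following sketch is illustrative (not from the talk): it runs a fixed-period busy-wait loop and reports the worst deadline overshoot.

```java
// Illustrative measurement harness: drives a fixed-period loop and records
// the worst overshoot past each period deadline, in nanoseconds.
public class LoopJitter {

    /** Worst observed overshoot past each period deadline, in nanoseconds. */
    public static long worstOvershootNanos(int cycles, long periodNanos) {
        long worst = 0;
        long next = System.nanoTime() + periodNanos;
        for (int i = 0; i < cycles; i++) {
            while (System.nanoTime() < next) {
                // busy-wait until the deadline (a real loop would do work here)
            }
            worst = Math.max(worst, System.nanoTime() - next);
            next += periodNanos;
        }
        return worst;
    }

    public static void main(String[] args) {
        // ~1 second of a simulated 1 kHz loop (1 ms period, 1000 cycles).
        System.out.println("worst overshoot: "
                + worstOvershootNanos(1000, 1_000_000L) + " ns");
    }
}
```

Running the same harness under a JVM, under a GraalVM native image, and against an equivalent C++ loop makes the jitter comparison concrete.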

The analysis reveals GraalVM’s promise in constrained contexts, where rapid builds (under five minutes for small libraries) permit swift iteration. Safety features, such as speed limits, integrate seamlessly.

The upshot: blended codebases capitalize on the advantages of both languages, boosting scalability for intricate mechanisms such as balancing platforms that interact with users.

Future Directions in Robotic Software Stacks

GraalVM promises further integrations, such as polyglot support for fluid multi-language calls. HEBI foresees fully Java-based control stacks, lessening the reliance on C++ without sacrificing performance.

The chief obstacle is guaranteeing real-time behavior in natively compiled code. The prospect: wider adoption across robotic frameworks.

In summary, GraalVM empowers Java in robotics, fusing performance with developer-oriented tooling for novel architectures.

Links:

  • Lecture video: https://www.youtube.com/watch?v=md2JFgegN7U
  • Florian Enner on LinkedIn: https://www.linkedin.com/in/florian-enner-59b81466/
  • Florian Enner on GitHub: https://github.com/ennerf
  • HEBI Robotics website: https://www.hebirobotics.com/

[MunchenJUG] Reliability in Enterprise Software: A Critical Analysis of Automated Testing in Spring Boot Ecosystems (27/Oct/2025)

Lecturer

Philip Riecks is an independent software consultant and educator specializing in Java, Spring Boot, and cloud-native architectures. With over seven years of professional experience in the software industry, Philip has established himself as a prominent voice in the Java ecosystem through his platform, Testing Java Applications Made Simple. He is a co-author of the influential technical book Stratospheric: From Zero to Production with Spring Boot and AWS, which bridges the gap between local development and production-ready cloud deployments. In addition to his consulting work, he produces extensive educational content via his blog and YouTube channel, focusing on demystifying complex testing patterns for enterprise developers.

Abstract

In the contemporary landscape of rapid software delivery, automated testing serves as the primary safeguard for application reliability and maintainability. This article explores the methodologies for demystifying testing within the Spring Boot framework, moving beyond superficial unit tests toward a comprehensive strategy that encompasses integration and slice testing. By analyzing the “Developer’s Dilemma”—the friction between speed of delivery and the confidence provided by a robust test suite—this analysis identifies key innovations such as the “Testing Pyramid” and specialized Spring Boot test slices. The discussion further examines the technical implications of external dependency management through tools like Testcontainers and WireMock, advocating for a holistic approach that treats test code with the same rigor as production logic.

The Paradigm Shift in Testing Methodology

Traditional software development often relegated testing to a secondary phase, frequently outsourced to separate quality assurance departments. However, the rise of DevOps and continuous integration has necessitated a shift toward “test-driven” or “test-enabled” development. Philip Riecks identifies that the primary challenge for developers is not the lack of tools, but the lack of a clear strategy. Testing is often perceived as a bottleneck rather than an accelerator.

The methodology proposed focuses on the Testing Pyramid, which prioritizes a high volume of fast, isolated unit tests at the base, followed by a smaller number of integration tests, and a minimal set of end-to-end (E2E) tests at the apex. The innovation in Spring Boot testing lies in its ability to provide “Slice Testing,” allowing developers to load only specific parts of the application context (e.g., the web layer or the data access layer) rather than the entire infrastructure. This approach significantly reduces test execution time while maintaining high fidelity.

Architectural Slicing and Context Management

One of the most powerful features of the Spring Boot ecosystem is its refined support for slice testing via annotations. This allows for an analytical approach to testing where the scope of the test is strictly defined by the architectural layer under scrutiny.

  1. Web Layer Testing: Using @WebMvcTest, developers can test REST controllers without launching a full HTTP server. This slice provides a mocked environment where the web infrastructure is active, but business services are replaced by mocks (e.g., using @MockBean).
  2. Data Access Testing: The @DataJpaTest annotation provides a specialized environment for testing JPA repositories. It typically uses an in-memory database by default, ensuring that database interactions are verified without the overhead of a production-grade database.
  3. JSON Serialization: @JsonTest isolates the serialization and deserialization logic, ensuring that data structures correctly map to their JSON representations.

This granular control prevents “Context Bloat,” where tests become slow and brittle due to the unnecessary loading of the entire application environment.

Code Sample: A Specialized Controller Test Slice

@WebMvcTest(UserRegistrationController.class)
class UserRegistrationControllerTest {

    @Autowired
    private MockMvc mockMvc;

    @MockBean
    private UserRegistrationService registrationService;

    @Test
    void shouldRegisterUserSuccessfully() throws Exception {
        mockMvc.perform(post("/api/users")
                .contentType(MediaType.APPLICATION_JSON)
                .content("{\"username\": \"priecks\", \"email\": \"philip@example.com\"}"))
                .andExpect(status().isCreated());
    }
}

Managing External Dependencies: Testcontainers and WireMock

A significant hurdle in integration testing is the reliance on external systems such as databases, message brokers, or third-party APIs. Philip emphasizes the move away from “In-Memory” databases (like H2) for testing production-grade applications, citing the risk of “Environment Parity” issues where H2 behaves differently than a production PostgreSQL instance.

The integration of Testcontainers allows developers to spin up actual Docker instances of their production infrastructure during the test lifecycle. This ensures that the code is tested against the exact same database engine used in production. Similarly, WireMock is utilized to simulate external HTTP APIs, allowing for the verification of fault-tolerance mechanisms like retries and circuit breakers without depending on the availability of the actual external service.
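The Testcontainers pattern can be sketched as follows. This is a wiring sketch rather than code from the talk: the entity and repository names are assumptions, and it presumes Spring Boot 3.1+ (for @ServiceConnection) plus a local Docker daemon.

```java
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.boot.testcontainers.service.connection.ServiceConnection;
import org.testcontainers.containers.PostgreSQLContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;

import static org.assertj.core.api.Assertions.assertThat;

// Sketch only: User and UserRepository are assumed names, not from the talk.
@Testcontainers
@SpringBootTest
class UserRepositoryIntegrationTest {

    // A real PostgreSQL container replaces the in-memory H2 database,
    // so the test exercises the same engine as production.
    @Container
    @ServiceConnection // Spring Boot 3.1+: wires the DataSource to the container
    static PostgreSQLContainer<?> postgres = new PostgreSQLContainer<>("postgres:16");

    @Autowired
    private UserRepository userRepository;

    @Test
    void persistsAndReadsBackUser() {
        User saved = userRepository.save(new User("priecks", "philip@example.com"));
        assertThat(userRepository.findById(saved.getId())).isPresent();
    }
}
```

The @ServiceConnection annotation removes the boilerplate of manually mapping the container’s JDBC URL and credentials into Spring properties, which is exactly the operational friction that kept teams on H2.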

Consequences of Testing on Long-term Maintainability

The implications of a robust testing strategy extend far beyond immediate bug detection. A well-tested codebase enables fearless refactoring. When developers have a “safety net” of automated tests, they can update dependencies, optimize algorithms, or redesign components with the confidence that existing functionality remains intact.

Furthermore, Philip argues that the responsibility for quality must lie with the engineer who writes the code. In an “On-Call” culture, the developer who builds the system also runs it. This ownership model, supported by automated testing, transforms software engineering from a process of “handing over” code to one of “carefully crafting” resilient systems.

Conclusion

Demystifying Spring Boot testing requires a transition from viewing tests as a chore to seeing them as a fundamental engineering discipline. By leveraging architectural slices, managing dependencies with Testcontainers, and adhering to the Testing Pyramid, developers can build applications that are not only functional but also sustainable. The ultimate goal is to reach a state where testing provides joy through the confidence it instills, ensuring that the software remains a robust asset for the enterprise rather than a source of technical debt.

Links:

[DotJs2025] Recreating Windows Media Player Art with Web MIDI API

Windows Media Player’s pulsating visualizations are a touchstone of early-2000s nostalgia, and the Web MIDI API makes it possible to recreate them in the browser, connecting musical hardware directly to the canvas. Vadim Smirnov, developer advocate at CKEditor, revived this aesthetic at dotJS 2025, scripting synthesizers to spawn spectra from scales. A developer with a taste for the unusual, Vadim showcased the breadth of lesser-known browser APIs, from Gamepad to MIDI.

Vadim’s journey began with a tour of MDN’s API catalog, from Ambient Light to Credential Management. Web MIDI’s appeal is the hardware handshake: after requesting access with navigator.requestMIDIAccess(), incoming note messages drive the visuals, with note velocity mapped to color intensity and pitch mapped to particle behavior.

The canvas choreography layers particle systems and shifting palettes, weaving in waveforms from the Web Audio API while MIDI messages modulate the parameters. For attendees without a synthesizer, Vadim offered computer-keyboard fallbacks and published his repositories for replication, with an exhortation to explore and embrace the weird.

The broader point of this resurrection: browser APIs offer a surprisingly physical portal, letting code respond directly to keys.

APIs’ Abundant Arsenal

Vadim surveyed MDN’s catalog from the Gamepad API to Web MIDI. The Web MIDI flow is simple: request access, listen to the inputs, and react to note messages, with velocity and pitch as the expressive parameters.

Synths’ Spectra and Sketches

On canvas, particles pulse and hues shift as Web Audio amplifies MIDI’s motifs. Vadim’s parting advice: keyboards work as stand-ins for synthesizers, his repositories are open, and weird experiments are worth making.

Links:

[RivieraDev2025] Moustapha Agack – One Pixel at a Time: Running DOOM on an E-Reader

Moustapha Agack regaled the Riviera DEV 2025 crowd with a tale of audacious tinkering in his session, chronicling his quest to resurrect the iconic DOOM on a humble Kindle e-reader. Lacking embedded systems expertise, Moustapha embarked on this odyssey driven by whimsy and challenge, transforming a 25-euro thrift find into a retro gaming relic. His narrative wove through hardware idiosyncrasies, software sorcery, and triumphant playback, celebrating the open-source ethos that fuels such feats.

The Allure of DOOM: A Porting Phenomenon

Moustapha kicked off by immersing attendees in DOOM’s lore, the 1993 id Software opus that pioneered first-person shooters with its labyrinthine levels and demonic foes. Its source code, liberated in 1997 under GPL, has spawned thousands of ports—from pregnancy tests to analytics dashboards—cementing its status as internet folklore. Moustapha quipped about the “Run DOOM on Reddit” subreddit, where biweekly posts chronicle absurd adaptations, like voice-powered variants or alien hardware hypotheticals.

The game’s appeal lies in its modular C codebase: clean patterns, hardware abstraction layers, and raycasting renderer make it portable gold. Moustapha praised its elegance—efficient collision detection, binary space partitioning—contrasting his novice TypeScript background with the raw C grit. This disparity fueled his motivation: prove that curiosity trumps credentials in maker pursuits.

Decoding the Kindle: E-Ink Enigmas

Shifting to hardware, Moustapha dissected the Kindle 4 (2010 model), a $25 Boncoin bargain boasting 500,000 pixels of e-ink wizardry. Unlike LCDs, e-ink mimics paper via electrophoretic microspheres—black-and-white beads in oil, manipulated by electric fields for grayscale shades. He likened pixels to microscopic disco balls: charged fields flip beads, yielding 16-level grays but demanding full refreshes to banish “ghosting” artifacts.

The ARM9 processor (532 MHz), 256MB RAM, and Linux kernel (2.6.31) promise viability, yet jailbreaking—via USB exploits—unlocks framebuffer access for custom rendering. Moustapha detailed framebuffer mechanics: direct memory writes trigger screen updates, but e-ink’s sluggish 500ms latency and power draw necessitate optimizations like partial refreshes. His setup bypassed Amazon’s sandbox, installing a minimal environment sans GUI, priming the device for DOOM’s pixel-pushing demands.

Cross-Compilation Conundrums and Code Conjuring

The crux lay in bridging architectures: compiling DOOM’s x86-centric code for ARM. Moustapha chronicled toolchain tribulations—Dockerized GCC cross-compilers, dependency hunts yielding bloated binaries. He opted for Chocolate Doom, a faithful source port, stripping extraneous features for e-ink austerity: monochrome palettes, scaled resolutions (400×600 to 320×240), and throttled framerates (1-2 FPS) to sync with refresh cycles.

Input mapping proved fiendish: no joystick meant keyboard emulation via five tactile buttons, scripted in Lua for directional strafing. Rendering tweaks—dithered grayscale conversion, waveform controls for ghost mitigation—ensured legibility. Moustapha shared war stories: endless iterations debugging endianness mismatches, memory overflows, and linker woes, underscoring embedded development’s unforgiving precision.

Triumph and Takeaways: Pixels in Motion

Victory arrived with a live demo: DOOM’s corridors flickering on e-ink, demons dispatched amid deliberate blips. Moustapha beamed at this personal milestone, a 2000s internet kid etching his own port into the legend. He open-sourced everything: binaries, scripts, and slides built with Slidev (a Markdown/JS hybrid for interactive decks), inviting Kindle owners to replicate.

Reflections abounded: e-ink’s constraints honed creativity, cross-compilation demystified low-level ops, and DOOM’s legacy affirmed open-source’s democratizing force. Moustapha urged aspiring hackers: embrace imperfection, iterate relentlessly, and revel in absurdity. His odyssey reminds that innovation blooms in unlikely crucibles—one pixel, one port at a time.

Links:

[GoogleIO2024] Tune and Deploy Gemini with Vertex AI and Ground with Cloud Databases: Building AI Applications

Vertex AI offers a comprehensive lifecycle for Gemini models, enabling customization and deployment. Ivan Nardini and Bala Narasimhan demonstrated fine-tuning, evaluation, and grounding techniques, using a media company scenario to illustrate practical applications.

Addressing Business Challenges with AI Solutions

Ivan framed the discussion around Symol Media’s issues: rising churn rates, declining engagement, and dropping satisfaction scores. Analysis revealed users spending under a minute on articles, signaling navigation and content quality problems.

The proposed AI-driven revamp personalizes the website, recommending articles based on preferences. This leverages Gemini Pro on Vertex AI, fine-tuned with company data for tailored summaries and suggestions.

Bala explained the architecture, integrating Cloud SQL for PostgreSQL with vector embeddings for semantic search, ensuring relevant content delivery.

Fine-Tuning and Deployment on Vertex AI

Ivan detailed supervised fine-tuning (SFT) on Vertex AI, using datasets of article summaries to adapt Gemini. This process, accessible via console or APIs, involves parameter-efficient tuning for cost-effectiveness.

Deployment creates scalable endpoints, with monitoring ensuring performance. Evaluation compares models using metrics like ROUGE, validating improvements.

These steps, available since 2024, enable production-ready AI with minimal infrastructure management.

Grounding with Cloud Databases for Accuracy

Bala focused on retrieval-augmented generation (RAG) using Cloud SQL’s vector capabilities. Embeddings from articles are stored and queried semantically, grounding responses in factual data to reduce hallucinations.

The jumpstart solution deploys this stack easily, with observability tools monitoring query performance and cache usage.

Launched in 2024, this integration supports production gen AI apps with robust data handling.

Observability and Future Enhancements

The demo showcased insights for query optimization, including execution plans and user metrics. Future plans include expanded vector support across Google Cloud databases.

This holistic approach empowers developers to build trustworthy AI solutions.

Links:

[NDCMelbourne2025] TDD & DDD from the Ground Up – Chris Simon

Chris Simon, a seasoned developer and co-organizer of Domain-Driven Design Australia, presents a compelling live-coding session at NDC Melbourne 2025, demonstrating how Test-Driven Development (TDD) and Domain-Driven Design (DDD) can create maintainable, scalable software. Through a university enrollment system example, Chris illustrates how TDD’s iterative red-green-refactor cycle and DDD’s focus on ubiquitous language and domain modeling can evolve a simple CRUD application into a robust solution. His approach highlights the power of combining these methodologies to adapt to changing requirements without compromising code quality.

Starting with TDD: The Red-Green-Refactor Cycle

Chris kicks off by introducing TDD’s core phases: writing a failing test (red), making it pass with minimal code (green), and refactoring to improve structure. Using a .NET-based university enrollment system, he begins with a basic test to register a student, ensuring a created status response. Each step is deliberately small, balancing test and implementation to minimize risk. This disciplined approach, Chris explains, builds a safety net of tests, allowing confident code evolution as complexity increases.

Incorporating DDD: Ubiquitous Language and Domain Logic

As the system grows, Chris introduces DDD principles, particularly the concept of ubiquitous language. He renames methods to reflect business intent, such as “register” instead of “create” for students, and uses a static factory method to encapsulate logic. His IDE extension, Contextive, further supports this by providing domain term definitions across languages, ensuring consistency. By moving validation logic, like checking room availability, into domain models, Chris ensures business rules are encapsulated, reducing controller complexity and enhancing maintainability.

Handling Complexity: Refactoring for Scalability

As requirements evolve, such as preventing course over-enrollment, Chris encounters a race condition in the initial implementation. He demonstrates how TDD’s tests catch this issue, allowing safe refactoring. Through event storming, he rethinks the domain model, delaying room allocation until course popularity is known. This shift, informed by domain expert collaboration, optimizes resource utilization and eliminates unnecessary constraints, showcasing DDD’s ability to align code with business needs.

Balancing Testing Strategies

Chris explores the trade-offs between API-level and unit-level testing. While API tests protect the public contract, unit tests for complex scheduling algorithms allow faster, more efficient test setup. By testing a scheduler that matches courses to rooms based on enrollment counts, he ensures robust logic without overcomplicating API tests. This strategic balance, he argues, maintains refactorability while addressing intricate business rules, a key takeaway for developers navigating complex domains.

Adapting to Change with Confidence

The session culminates in a significant refactor, removing the over-enrollment check after realizing it’s applied at the wrong stage. Chris’s tests provide the confidence to make this change, ensuring no unintended regressions. By making domain model setters private, he confirms the system adheres to DDD principles, encapsulating business logic effectively. This adaptability, driven by TDD and DDD, underscores the value of iterative development and domain collaboration in building resilient software.

Links:

[AWSReInforce2025] AWS Network Firewall: Latest features and deployment options (NIS201-NEW)

Lecturer

Amish Shah serves as Product Manager for AWS Network Firewall, driving capabilities that simplify stateful inspection at scale. His team focuses on reducing operational complexity while maintaining granular control across VPC and Transit Gateway environments.

Abstract

The technical session introduces enhancements to AWS Network Firewall that address deployment complexity, visibility gaps, and threat defense sophistication. Through Transit Gateway integration, automated domain management, and active threat defense, it establishes patterns for consistent security policy enforcement across hybrid architectures.

Transit Gateway Integration Architecture

Native Transit Gateway attachment eliminates appliance sprawl:

VPC A → TGW → Network Firewall Endpoint → VPC B

Traffic flows symmetrically through firewall endpoints in each Availability Zone. Centralized route table management propagates 10.0.0.0/8 via firewall inspection while maintaining 172.16.0.0/12 for direct connectivity. This pattern supports:

  • 100 Gbps aggregate throughput
  • Automatic failover across AZs
  • Consistent policy application across spokes

Multiple VPC Endpoint Support

The new capability permits multiple firewall endpoints per VPC:

endpoints:
  - subnet: us-east-1a
    az: us-east-1a
  - subnet: us-east-1b
    az: us-east-1b
  - subnet: us-east-1c
    az: us-east-1c

Each endpoint maintains independent health status. Route tables direct traffic to healthy endpoints, achieving 99.999% availability. This eliminates single points of failure in multi-AZ architectures.

Automated Domain List Management

Dynamic domain lists update hourly from AWS threat intelligence:

{
  "source": "AWSManaged",
  "name": "PhishingDomains",
  "update_frequency": "3600",
  "action": "DROP"
}

Integration with Route 53 Resolver DNS Firewall enables layer 7 blocking before connection establishment. The console provides visibility into list versions, rule hits, and update timestamps.

Active Threat Defense with Managed Rules

The new managed rule group consumes real-time threat intelligence:

{
  "rule_group": "AttackInfrastructure",
  "action": "DROP",
  "threat_signatures": 1500000,
  "update_source": "AWS Threat Intel"
}

Rules target C2 infrastructure, exploit kits, and phishing domains. Capacity consumption appears in console metrics, enabling budget planning. Organizations can toggle to ALERT mode for forensic analysis before enforcement.

Operational Dashboard and Metrics

The enhanced dashboard displays:

  • Top talkers by bytes/packets
  • Rule group utilization
  • Threat signature matches
  • Endpoint health status

Top talkers can be surfaced from the firewall logs with a query such as:

SELECT source_ip, sum(bytes)
FROM firewall_logs
WHERE action = 'DROP'
GROUP BY source_ip
ORDER BY 2 DESC LIMIT 10

CloudWatch integration enables alerting on anomalous patterns.

Deployment Best Practices

Reference architectures include:

  1. Centralized Egress: Internet-bound traffic via TGW to shared firewall
  2. Distributed Ingress: Public ALB → firewall endpoint → application VPC
  3. Hybrid Connectivity: Site-to-Site VPN through firewall inspection

Terraform modules automate endpoint creation, policy attachment, and logging configuration.

Conclusion: Simplified Security at Scale

The enhancements transform Network Firewall from complex appliance management into a cloud-native security fabric. Transit Gateway integration eliminates topology constraints, automated domain lists reduce rule maintenance, and active threat defense blocks known bad actors at line rate. Organizations achieve consistent, scalable protection without sacrificing operational agility.

Links:

[NDCOslo2024] Choosing The Best AWS Service For Your Website + API – Brandon Minnick

Amid the sprawling spectrum of cloud solutions, where the sheer number of options perplexes even seasoned engineers, Brandon Minnick, an AWS developer advocate with a mobile background, maps out Amazon’s offerings. He dissects the dizzying array of AWS hosting services (Lambda, Elastic Beanstalk, Lightsail, Amplify, S3), distilling their distinct strengths to guide builders toward the right backend while balancing cost, speed, and scalability.

Brandon begins with a confession: arriving at AWS’s vast catalog from mobile development, he was overwhelmed by the abundance of acronyms and aliases. His mission is to map the maze, matching requirements to services so that websites and APIs launch quickly.

Decoding the Domain: AWS’s Hosting Horizons

AWS’s arsenal is abundant. S3 stores static assets in buckets; Amplify knits frontends to backend functions; Elastic Beanstalk automates infrastructure provisioning; Lightsail offers preconfigured servers; Lambda runs lean, serverless functions that scale automatically.

Each excels in its niche: S3’s simplicity suits static sites, Amplify’s tooling aids authenticated apps, and Lambda’s lightness fits small units of logic. Brandon’s benchmarks weigh cost (S3’s cents versus Lambda’s low per-invocation fees), speed (CloudFront’s edge delivery), and scale (Fargate’s container fluidity).

Cost and Celerity: Calculating the Calculus

Price predicates priority: S3’s storage starts at sub-dollar sums, Lambda’s invocations linger at $0.20 per million, Amplify’s adaptability aligns at $0.023 per GB. Brandon’s breakdown: static sites savor S3’s thrift, dynamic domains demand Amplify’s depth—authentication via Cognito, APIs via API Gateway.
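The per-unit figures above lend themselves to back-of-envelope arithmetic. A minimal sketch, using the prices quoted in the talk (Lambda at $0.20 per million invocations, S3 storage at $0.023 per GB-month); the traffic and storage volumes are hypothetical, and real bills add dimensions (Lambda duration, S3 requests, data transfer) omitted here:

```java
// Back-of-envelope monthly cost sketch using the per-unit figures quoted
// above. Workload numbers (10M requests, 5 GB stored) are hypothetical,
// and real bills include further dimensions (duration, transfer, requests).
public class AwsCostSketch {
    static final double LAMBDA_PER_MILLION_REQUESTS = 0.20; // USD
    static final double S3_PER_GB_MONTH = 0.023;            // USD

    static double lambdaRequestCost(double millionsOfRequests) {
        return millionsOfRequests * LAMBDA_PER_MILLION_REQUESTS;
    }

    static double s3StorageCost(double gbStored) {
        return gbStored * S3_PER_GB_MONTH;
    }

    public static void main(String[] args) {
        double lambda = lambdaRequestCost(10); // 10M invocations per month
        double s3 = s3StorageCost(5);          // 5 GB of static assets
        System.out.printf("Lambda: $%.2f, S3: $%.2f%n", lambda, s3);
    }
}
```

Even this crude model shows Brandon’s point: at modest traffic, static hosting on S3 costs cents while request-driven Lambda pricing stays in the low dollars.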

Performance pulses: CloudFront’s CDN cuts latency to 300ms, Lambda’s cold starts cede to containers’ constancy. Brandon advises: weigh user whims—300ms matters for markets, less for leisurely loads.

Scalability and Simplicity: Structuring for Surge

Scalability shapes success: Lambda’s limitless leaps, Fargate’s fleet-footed fleets, Beanstalk’s balanced ballast. Brandon illustrates: API Gateway guards gates, throttling torrents; Amplify’s auto-scaling absolves administrative aches.

Simplicity seals the deal: Lightsail’s one-click launches lure lone developers; Amplify’s abstractions attract architects. Brandon’s beacon: start small—S3 for static, scale to Amplify for ambition.

Strategic Selection: Synthesizing Solutions

Brandon’s synthesis: match mission to mechanism—S3 for static starters, Amplify for authenticated ascents, Lambda for lean logic. His counsel: consult AWS’s compendium—getting-started guides, web app wisdom—curated for clarity.

His clarion: choose consciously, calibrating cost, celerity, scalability—AWS’s arsenal awaits.

Links:

PostHeaderIcon Can GraalVM Outperform C++ for Real-Time Performance? A Deep Technical Analysis

(long answer to this comment on LinkedIn)

As GraalVM continues to mature, a recurring question surfaces among architects and performance engineers: can it realistically outperform traditional C++ in real-time systems?

The answer is nuanced. While GraalVM represents a major leap forward for managed runtimes, it does not fundamentally overturn the performance model that gives C++ its edge in deterministic environments. However, the gap is narrowing in ways that materially change architectural decisions.

Reframing the Question: What Do We Mean by “Real-Time”?

Before comparing technologies, it is critical to define “real-time.” In engineering practice, this term is frequently overloaded.

There are two distinct categories:

  • Hard real-time: strict guarantees on worst-case latency (e.g., missing a deadline is a system failure)
  • Soft real-time: latency matters, but occasional deviations are acceptable

Most backend systems fall into the second category, even when they are described as “low-latency.” This distinction is essential because it directly determines whether GraalVM is even a viable candidate.
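The distinction can be made concrete in code: a hard real-time system fails on a single missed deadline, whereas a soft real-time system is typically judged on tail percentiles. A minimal sketch computing a nearest-rank percentile over sampled latencies (the sample values are hypothetical):

```java
import java.util.Arrays;

// Soft real-time systems are evaluated on tail latency (e.g. p99);
// hard real-time systems fail on any single missed deadline.
// The latency samples below are hypothetical.
public class LatencyProfile {
    // Nearest-rank percentile: smallest sample such that at least
    // a fraction q of all samples are <= it.
    static long percentile(long[] samplesMicros, double q) {
        long[] sorted = samplesMicros.clone();
        Arrays.sort(sorted);
        int rank = (int) Math.ceil(q * sorted.length) - 1;
        return sorted[Math.max(rank, 0)];
    }

    public static void main(String[] args) {
        long[] samples = {120, 95, 110, 4000, 105, 98, 101, 99, 102, 97};
        long p99 = percentile(samples, 0.99);
        long worst = percentile(samples, 1.0);
        // A soft real-time SLO of "p99 under 5 ms" tolerates the 4 ms
        // outlier; a hard 1 ms deadline would count it as a failure.
        System.out.println("p99=" + p99 + "us worst=" + worst + "us");
    }
}
```

This is why the distinction matters for GraalVM: a GC pause or deoptimization that merely fattens the tail is survivable under a soft SLO but disqualifying under a hard deadline.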

Execution Models: Native vs Managed

C++: Deterministic by Design

C++ provides a minimal abstraction over hardware:

  • Ahead-of-time (AOT) compilation to native code
  • No implicit garbage collection
  • Full control over memory layout and allocation strategies
  • Predictable interaction with CPU caches and NUMA characteristics

This enables precise control over latency, which is why C++ dominates in domains such as embedded systems, game engines, and high-frequency trading infrastructure.

GraalVM: A Spectrum of Execution Modes

GraalVM is not a single execution model but a platform offering multiple strategies:

  • JIT mode (JVM-based): dynamic compilation with runtime profiling
  • Native Image (AOT): static compilation into a standalone binary
  • Polyglot execution: interoperability across languages

Each mode introduces different trade-offs in terms of startup time, peak performance, and latency stability.

JIT Compilation: Peak Performance vs Predictability

GraalVM’s JIT compiler is one of its strongest assets. It performs deep optimizations based on runtime profiling, including:

  • Inlining across abstraction boundaries
  • Escape analysis and allocation elimination
  • Speculative optimizations with deoptimization fallback

In long-running services, this can produce highly optimized machine code that rivals native implementations.

However, this optimization model introduces variability:

  • Warmup phase: performance improves over time
  • Deoptimization events: speculative assumptions can be invalidated
  • Compilation overhead: CPU cycles are consumed by the compiler itself

For systems requiring stable latency from the first request, this behavior is inherently problematic.
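The warmup phase can be observed directly by timing the same workload in successive batches: on a JIT-compiled JVM the early batches typically run slower while code executes in the interpreter and the compiler warms up. A deliberately non-rigorous sketch (a real measurement should use a harness such as JMH):

```java
// Non-rigorous warmup demonstration: time identical batches of work.
// On a JIT-ed JVM the first batches are usually slower while the
// compiler warms up. For defensible numbers, use a harness such as JMH.
public class WarmupSketch {
    static long work(int n) {
        long acc = 0;
        for (int i = 0; i < n; i++) {
            acc += (long) i * 31 + (acc >>> 7);
        }
        return acc;
    }

    static double[] timeBatches(int batches, int iterations) {
        double[] millis = new double[batches];
        for (int b = 0; b < batches; b++) {
            long start = System.nanoTime();
            long sink = 0;
            for (int i = 0; i < iterations; i++) {
                sink += work(10_000);
            }
            millis[b] = (System.nanoTime() - start) / 1e6;
            if (sink == 42) System.out.println(); // keep result alive
        }
        return millis;
    }

    public static void main(String[] args) {
        double[] m = timeBatches(5, 200);
        for (int b = 0; b < m.length; b++) {
            System.out.printf("batch %d: %.2f ms%n", b, m[b]);
        }
    }
}
```

The same experiment run under Native Image shows flat batch times from the start, at the cost of the peak the JIT eventually reaches.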

Native Image: Reducing the Gap

GraalVM Native Image shifts compilation to build time, eliminating JIT behavior at runtime. This results in:

  • Fast startup times
  • Lower memory footprint
  • Reduced latency variance

However, these benefits come with trade-offs:

  • Loss of dynamic optimizations available in JIT mode
  • Restrictions on reflection and dynamic class loading
  • Generally lower peak performance compared to JIT-optimized code

Even in this mode, C++ retains advantages in fine-grained memory control and instruction-level optimization.
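The reflection restriction is the easiest trade-off to see in code. The sketch below runs unmodified on a standard JVM; under Native Image, the reflective lookup must be declared in reachability metadata (for example a reflect-config.json supplied at build time), because the closed-world analysis otherwise strips classes and methods it cannot prove reachable:

```java
import java.lang.reflect.Method;

// Runs as-is on a JVM. Under GraalVM Native Image, reflective access
// like this must be registered in reachability metadata (e.g. a
// reflect-config.json) at build time, or the lookup fails at run time.
public class ReflectionSketch {
    public static int reflectiveLength(String s) throws Exception {
        Class<?> cls = Class.forName("java.lang.String");
        Method length = cls.getMethod("length");
        return (int) length.invoke(s);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(reflectiveLength("graal"));
    }
}
```

Frameworks that lean heavily on reflection and dynamic class loading therefore need explicit Native Image support; this is much of what projects like Quarkus and Spring’s AOT processing automate.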

Garbage Collection and Latency

Garbage collection is one of the most significant differentiators between GraalVM and C++.

Modern collectors (e.g., G1, ZGC, Shenandoah) have dramatically reduced pause times, but they do not eliminate them entirely. More importantly, they introduce uncertainty:

  • Pause times may vary depending on allocation patterns
  • Concurrent phases still compete for CPU resources
  • Memory pressure can trigger unexpected behavior

In contrast, C++ allows engineers to:

  • Use stack allocation or object pools
  • Avoid heap allocation in critical paths
  • Guarantee upper bounds on allocation latency

This difference is decisive in hard real-time systems.
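Allocation-avoidance techniques are available on the JVM too, within limits. A minimal object-pool sketch: by pre-allocating buffers and reusing them, the hot path performs no allocation, which reduces GC pressure, though unlike C++ it cannot bound what the collector does elsewhere in the process:

```java
import java.util.ArrayDeque;

// Pre-allocated buffer pool: the hot path reuses buffers rather than
// allocating, reducing GC pressure. Unlike C++ stack allocation or
// custom allocators, this lowers but cannot eliminate GC activity
// elsewhere in the process.
public class BufferPool {
    private final ArrayDeque<byte[]> free = new ArrayDeque<>();
    private final int bufferSize;

    public BufferPool(int count, int bufferSize) {
        this.bufferSize = bufferSize;
        for (int i = 0; i < count; i++) {
            free.push(new byte[bufferSize]); // allocate up front
        }
    }

    /** Borrow a buffer; allocates only if the pool is exhausted. */
    public byte[] acquire() {
        byte[] buf = free.poll();
        return buf != null ? buf : new byte[bufferSize];
    }

    /** Return a buffer for reuse. */
    public void release(byte[] buf) {
        free.push(buf);
    }

    public int available() {
        return free.size();
    }

    public static void main(String[] args) {
        BufferPool pool = new BufferPool(4, 1024);
        byte[] b = pool.acquire(); // no allocation on the hot path
        pool.release(b);
        System.out.println(pool.available());
    }
}
```

Note the sketch is single-threaded for clarity; a production pool would need a concurrent structure and a policy for exhaustion.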

Microarchitectural Considerations

At the highest level of performance engineering, factors such as cache locality, branch prediction, and instruction pipelines dominate.

C++ offers direct control over:

  • Data layout (AoS vs SoA)
  • Alignment and padding
  • SIMD/vectorization strategies

While GraalVM’s JIT can optimize some of these aspects automatically, it operates under constraints imposed by the language and runtime. As a result, it cannot consistently match the level of control available in C++.
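The data-layout point can be illustrated even in Java: an array of objects is an array of references (effectively AoS with pointer-chasing per element), while parallel primitive arrays give an SoA layout with each field contiguous in memory. A sketch summing one field both ways:

```java
// AoS vs SoA in Java: an array of objects stores references, so each
// field access chases a pointer; parallel primitive arrays keep each
// field contiguous, which is friendlier to caches and vectorization.
public class LayoutSketch {
    // Array-of-structures: every element is a separate heap object.
    static final class PointAoS {
        final double x, y;
        PointAoS(double x, double y) { this.x = x; this.y = y; }
    }

    static double sumXAoS(PointAoS[] points) {
        double sum = 0;
        for (PointAoS p : points) sum += p.x; // one dereference per element
        return sum;
    }

    // Structure-of-arrays: one contiguous array per field.
    static double sumXSoA(double[] xs) {
        double sum = 0;
        for (double x : xs) sum += x; // sequential, cache-friendly scan
        return sum;
    }

    public static void main(String[] args) {
        int n = 1000;
        PointAoS[] aos = new PointAoS[n];
        double[] xs = new double[n];
        for (int i = 0; i < n; i++) {
            aos[i] = new PointAoS(i, -i);
            xs[i] = i;
        }
        System.out.println(sumXAoS(aos) + " " + sumXSoA(xs));
    }
}
```

In C++ the AoS variant would still be a contiguous array of structs; in Java the programmer must fall back to the SoA idiom by hand, which is exactly the loss of layout control the section describes.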

Latency Profiles: A Practical Comparison

From a systems perspective, the difference can be summarized as follows:

Characteristic           C++         GraalVM (JIT)               GraalVM (Native Image)
Startup Time             Fast        Slow                        Very fast
Peak Throughput          Excellent   Excellent (after warmup)    Good
Latency Predictability   Excellent   Moderate                    Good
Memory Control           Full        Limited                     Limited

Where GraalVM Is a Strong Choice

Despite its limitations in strict real-time environments, GraalVM excels in several domains:

Low-Latency Microservices

Native Image significantly reduces cold start times and memory usage, making it ideal for containerized workloads and serverless environments.

High-Throughput Systems

In long-running services, JIT optimizations can deliver excellent throughput with acceptable latency characteristics.

Polyglot Architectures

GraalVM enables seamless interoperability across multiple languages, simplifying system design in heterogeneous environments.

Developer Productivity

Compared to C++, the Java ecosystem offers faster iteration, richer tooling, and lower cognitive overhead for most teams.

Where C++ Remains Unmatched

C++ continues to dominate in scenarios where performance constraints are absolute:

  • Hard real-time systems (avionics, medical devices, robotics)
  • High-frequency trading engines with microsecond budgets
  • Game engines and real-time rendering pipelines
  • High-performance computing (HPC)

In these domains, even minor unpredictability is unacceptable, and the control offered by C++ is indispensable.

Strategic Takeaway

The most important shift is not that GraalVM surpasses C++, but that it redefines the boundary where managed runtimes are viable.

Historically, many systems defaulted to C++ purely for performance reasons. Today, GraalVM enables teams to achieve sufficiently high performance with significantly better developer productivity and ecosystem support.

This changes the optimization calculus:

  • Use C++ when you need guarantees
  • Use GraalVM when you need performance and agility

Conclusion

GraalVM does not replace C++ in real-time systems—but it does erode its dominance in adjacent domains.

For hard real-time applications, C++ remains the gold standard due to its deterministic execution model and fine-grained control over system resources.

For everything else, the decision is no longer obvious. GraalVM offers a compelling middle ground, delivering strong performance while dramatically improving developer velocity.

In modern system design, that trade-off is often more valuable than raw speed alone.

PostHeaderIcon [GoogleIO2025] What’s new in Android

Keynote Speakers

John Zoeller operates as a Developer Relations Engineer at Google, advocating for Wear OS and high-quality Android experiences. Educated at the University of Washington, he shares insights on code documentation and platform integrations to foster developer communities.

Jingyu Shi functions as a Developer Relations Engineer at Google, specializing in AI Edge technologies for Android. With a background from Columbia University, she guides developers in deploying on-device models and enhancing intelligent app features.

Jolanda Verhoef serves as a Developer Relations Engineer at Google, specializing in Android development with a focus on Jetpack Compose and user interface tooling. Based in Utrecht, she advocates for modern UI practices, drawing from her education at the University of Utrecht to educate developers on building efficient, adaptive applications.

Abstract

This comprehensive inquiry examines forthcoming Android 16 capabilities and developmental trajectories, focusing on crafting superior applications across varied hardware, including wearables, televisions, and automotive systems. It dissects integrations of AI via Gemini models, productivity boosts through Jetpack Compose and Kotlin Multiplatform, and Gemini-assisted tooling in Android Studio. By analyzing methodologies for on-device intelligence, media handling, and cross-platform logic, the discussion appraises contexts of user delight and developer velocity, with ramifications for scalable, privacy-conscious software engineering.

Productivity Amplifications in Development Tooling

Jolanda Verhoef commences by chronicling Jetpack Compose’s ascent, now adopted by 60% of premier apps for its declarative prowess. She delineates enhancements accelerating workflows, such as autofill via semantics rewrites, autosizing text for adaptive displays, and animateBounds for seamless transitions.

Visibility APIs like onLayoutRectChanged enable efficient tracking, with alpha extensions for fractional visibility aiding media optimizations. Performance surges from compiler skips and UI refinements yield 20-30% gains, while stability purges 32% of experimental APIs.

Navigation 3 rethinks routing with Compose primitives, supporting adaptive architectures. Media3 and CameraX offer modular composables, as in Androidify’s video tutorials.

Jingyu Shi introduces Kotlin Multiplatform (KMP) for shared logic across Android and iOS, stabilizing in Kotlin 2.0. Methodologies involve common modules for business rules, with platform-specific UI, implying reduced duplication and unified testing.

Code sample for KMP setup:

// commonMain/kotlin
expect class Platform() {
    val name: String
}

// androidMain/kotlin
// An expected constructor must be actualized explicitly.
actual class Platform actual constructor() {
    actual val name: String = "Android"
}

// iosMain/kotlin
actual class Platform actual constructor() {
    actual val name: String = "iOS"
}

Implications encompass streamlined maintenance, though full parity still requires further ecosystem maturity.

AI Integrations for Intelligent Experiences

Shi emphasizes on-device AI via Gemini Nano and cloud access, liberating from server dependencies. GenAI APIs handle text/image tasks with minimal code, expanding to multimodal interactions.

Gemini Live API via Firebase enables bidirectional audio, fostering agentic apps. Home APIs incorporate Gemini for smart automations, accessing 750 million devices.

Methodologies prioritize privacy in on-device processing, with implications for real-time personalization sans latency. Contexts include solving tangible issues, like fitness tracking or content generation.

Media and Camera Advancements for Rich Interactions

Updates in Jetpack Media3 and CameraX facilitate effects sharing for grayscale filters across capture and editing. Low-light boosts via ML extend brightness adjustments to broader hardware.

PreloadManager optimizes short-form video feeds, reducing startup delays for swipeable interfaces. Native PCM offload in the NDK conserves battery during audio playback by delegating to DSPs.

Professional features in Android 16 enhance creator tools, implying elevated content quality across ecosystems.

Cross-Device Excellence and Future Paradigms

John Zoeller, whose remit spans Wear OS, and his fellow speakers advocate multi-form-factor designs, with Android 16’s live updates and Material 3 Expressive for engaging UIs.

Implications span unified experiences, with AI as the differentiator for “wow” moments, urging ethical, performant integrations.

Links: