Posts Tagged ‘GraalVM’
[SpringIO2024] Serverless Java with Spring by Maximilian Schellhorn & Dennis Kieselhorst @ Spring I/O 2024
Serverless computing has transformed application development by abstracting infrastructure management, offering fine-grained scaling, and billing only for execution time. At Spring I/O 2024 in Barcelona, Maximilian Schellhorn and Dennis Kieselhorst, AWS Solutions Architects, shared their expertise on building serverless Java applications with Spring. Their session explored running existing Spring Boot applications in serverless environments and developing event-driven applications using Spring Cloud Function, with a focus on performance optimizations and practical tooling.
The Serverless Paradigm
Maximilian began by contrasting traditional containerized applications with serverless architectures. Containers, while resource-efficient, require developers to manage orchestration, networking, and scaling. Serverless computing, exemplified by AWS Lambda, eliminates these responsibilities, allowing developers to focus on code. Maximilian highlighted four key promises: reduced operational overhead, automatic granular scaling, pay-per-use billing, and high availability. Unlike containers, which remain active and incur costs even when idle, serverless functions scale to zero, executing only in response to events like API requests or queue messages, optimizing cost and resource utilization.
Spring Cloud Function for Event-Driven Development
Spring Cloud Function simplifies serverless development by enabling developers to write event-driven applications as Java functions. Maximilian demonstrated how it leverages Spring Boot’s familiar features—autoconfiguration, dependency injection, and testing—while abstracting cloud-specific details. Functions receive event payloads (e.g., JSON from API Gateway or Kafka) and can convert them into POJOs, streamlining business logic implementation. The framework’s generic invoker supports function routing, allowing multiple functions within a single codebase, and enables local testing via HTTP endpoints. This portability ensures applications can target various serverless platforms without vendor lock-in, enhancing flexibility.
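The programming model described above boils down to plain `java.util.function` types. The sketch below uses only the JDK; in a real Spring Cloud Function application the function would be exposed as a `@Bean`, and the framework (not shown here) would handle payload conversion and routing:

```java
import java.util.function.Function;

public class UppercaseFunctionSketch {
    // In Spring Cloud Function this would be a @Bean of type Function<String, String>;
    // the framework converts the incoming event payload (e.g. JSON from API Gateway
    // or Kafka) into the input type and serializes the return value back.
    public static Function<String, String> uppercase() {
        return String::toUpperCase;
    }

    public static void main(String[] args) {
        System.out.println(uppercase().apply("hello spring"));
    }
}
```

Because the business logic is just a `Function`, it can be unit-tested without any serverless runtime, which is a large part of the portability argument.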
Adapting Existing Spring Applications
For teams with existing Spring Boot applications, Dennis introduced the AWS Serverless Java Container, an open-source library acting as an adapter to translate serverless events into Java Servlet requests. This allows REST controllers to function unchanged in a serverless environment. Version 2.0.2, released during the conference, supports Spring Boot 3 and integrates with Spring Cloud Function. Dennis emphasized its ease of use: add the library, configure a handler, and deploy. While this approach incurs some overhead compared to native functions, it enables rapid migration of legacy applications, preserving existing investments without requiring extensive rewrites.
Optimizing Performance with SnapStart and GraalVM
Performance, particularly cold start times, is a critical concern in serverless Java applications. Dennis addressed this by detailing AWS Lambda SnapStart, which snapshots the initialized JVM and micro-VM, reducing startup times by up to 80% without additional costs. SnapStart, integrated with Spring Boot 3.2’s CRaC (Coordinated Restore at Checkpoint) support, manages initialization hooks to handle resources like database connections. For further optimization, Maximilian discussed GraalVM native images, which compile Java code into binaries for faster startups and lower memory usage. However, GraalVM’s complexity and framework limitations make SnapStart the preferred starting point, with GraalVM reserved for extreme performance needs.
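The CRaC hook pattern behind SnapStart can be sketched in a few lines. The real hooks live in the `org.crac` package; the `Resource` interface below is a simplified stand-in (the actual API passes a `Context` argument) so the example stays self-contained:

```java
// Sketch of the CRaC (Coordinated Restore at Checkpoint) hook pattern:
// release external resources before the snapshot, reacquire them on restore.
public class CracHookSketch {

    interface Resource {
        void beforeCheckpoint() throws Exception;
        void afterRestore() throws Exception;
    }

    static class PooledDbConnection implements Resource {
        private boolean open;

        void connect() { open = true; }      // stand-in for opening a JDBC connection
        boolean isOpen() { return open; }

        @Override
        public void beforeCheckpoint() {
            open = false;                    // close sockets before the snapshot is taken
        }

        @Override
        public void afterRestore() {
            open = true;                     // re-establish connections on restore
        }
    }

    public static void main(String[] args) throws Exception {
        PooledDbConnection db = new PooledDbConnection();
        db.connect();
        db.beforeCheckpoint();               // SnapStart snapshots the micro-VM here
        db.afterRestore();                   // ...and restores from it on later cold starts
        System.out.println("connection open after restore: " + db.isOpen());
    }
}
```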
Practical Considerations and Tooling
Maximilian and Dennis stressed practical considerations, such as database connection management and observability. Serverless scaling can overwhelm traditional databases, necessitating connection pooling adjustments or proxies like AWS RDS Proxy. Observability in Lambda relies on a push model, integrating with tools like CloudWatch, X-Ray, or OpenTelemetry, though additional layers may impact performance. To aid adoption, they offered a Lambda Workshop and a Serverless Java Replatforming Guide, providing hands-on learning and written guidance. These resources, accessible via AWS accounts, empower developers to experiment and apply serverless principles effectively.
Links:
[DevoxxFR 2024] Going AOT: Mastering GraalVM for Java Applications
Alina Yurenko 🇺🇦, a developer advocate at Oracle Labs, captivated audiences at Devoxx France 2024 with her deep dive into GraalVM’s ahead-of-time (AOT) compilation for Java applications. With a passion for open-source and community engagement, Alina explored how GraalVM’s Native Image transforms Java applications into compact, high-performance native executables, ideal for cloud environments. Through demos and practical guidance, she addressed building, testing, and optimizing GraalVM applications, debunking myths and showcasing its potential. This post unpacks Alina’s insights, offering a roadmap for adopting GraalVM in production.
GraalVM and Native Image Fundamentals
Alina introduced GraalVM as both a high-performance JDK and a platform for AOT compilation via Native Image. Unlike traditional JVMs, GraalVM allows developers to run Java applications conventionally or compile them into standalone native executables that don’t require a JVM at runtime. This dual capability, built on over a decade of research at Oracle Labs, offers Java’s developer productivity alongside native performance benefits like faster startup and lower resource usage. Native Image, GA since 2019, analyzes an application’s bytecode at build time, identifying reachable code and dependencies to produce a compact executable, eliminating unused code and pre-populating the heap for instant startup.
The closed-world assumption underpins this process: all application behavior must be known at build time, unlike the JVM’s dynamic runtime optimizations. This enables aggressive optimizations but requires careful handling of dynamic features like reflection. Alina demonstrated this with a Spring Boot application, which started in 1.3 seconds on GraalVM’s JVM but just 47 milliseconds as a native executable, highlighting its suitability for serverless and microservices where startup speed is critical.
Benefits Beyond Startup Speed
While fast startup is a hallmark of Native Image, Alina emphasized its broader advantages, especially for long-running applications. By shifting compilation, class loading, and optimization to build time, Native Image reduces runtime CPU and memory usage, offering predictable performance without the JVM’s warm-up phase. A Spring Pet Clinic benchmark showed Native Image matching or slightly surpassing the JVM’s C2 compiler in peak throughput, a testament to two years of optimization efforts. For memory-constrained environments, Native Image excels, delivering up to 2–3x higher throughput per memory unit at heap sizes of 512MB to 1GB, as seen in throughput density charts.
Security is another benefit. By excluding unused code, Native Image reduces the attack surface, and dynamic features like reflection require explicit allow-lists, enhancing control. Alina also noted compatibility with modern Java frameworks like Spring Boot, Micronaut, and Quarkus, which integrate Native Image support, and a community-maintained list of compatible libraries on the GraalVM website, ensuring broad ecosystem support.
Building and Testing GraalVM Applications
Alina provided a practical guide for building and testing GraalVM applications. Using a Spring Boot demo, she showcased the Native Maven plugin, which streamlines compilation. The build process, while resource-intensive for large applications, typically stays within 2GB of memory for smaller apps, making it viable on CI/CD systems like GitHub Actions. She recommended developing and testing on the JVM, compiling to Native Image only when adding dependencies or in CI/CD pipelines, to balance efficiency and validation.
Dynamic features like reflection pose challenges, but Alina outlined solutions: predictable reflection works out-of-the-box, while complex cases may require JSON configuration files, often provided by frameworks or libraries like H2. A centralized GitHub repository hosts configs for popular libraries, and a tracing agent can generate configs automatically by running the app on the JVM. Testing support is robust, with JUnit and framework-specific tools like Micronaut’s test resources enabling integration tests in Native mode, often leveraging Testcontainers.
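To make the shape of those configuration files concrete, here is a minimal `reflect-config.json` entry registering a class for reflective access. The keys follow GraalVM's reachability-metadata format; the class name is purely illustrative. The tracing agent Alina mentioned can emit such files automatically when the application is run on the JVM with `-agentlib:native-image-agent=config-output-dir=<dir>`:

```json
[
  {
    "name": "com.example.SimpleHelloService",
    "allDeclaredConstructors": true,
    "allDeclaredMethods": true
  }
]
```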
Optimizing and Future Directions
To achieve peak performance, Alina recommended profile-guided optimizations (PGO), where an instrumented executable collects runtime profiles to inform a final build, combining AOT’s predictability with JVM-like insights. A built-in ML model predicts profiles for simpler scenarios, offering 6–8% performance gains. Other optimizations include using the G1 garbage collector, enabling machine-specific flags, or building static images for minimal container sizes with distroless images.
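The PGO workflow is a three-step build. The flags below match the GraalVM documentation, but the jar, class, and output names are illustrative, and PGO is an Oracle GraalVM feature rather than part of the Community Edition:

```shell
# 1. Build an instrumented executable
native-image --pgo-instrument -cp app.jar -o myapp-instrumented com.example.App

# 2. Exercise it with a representative workload; profiles land in default.iprof
./myapp-instrumented

# 3. Rebuild with the collected profiles applied
native-image --pgo=default.iprof -cp app.jar -o myapp com.example.App
```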
Looking ahead, Alina highlighted two ambitious GraalVM projects: Layered Native Images, which pre-compile base images (e.g., JDK or Spring) to reduce build times and resource usage, and GraalOS, a platform for deploying native images without containers, eliminating container overhead. Demos of a LangChain for Java app and a GitHub crawler using Java 22 features showcased GraalVM’s versatility, running seamlessly as native executables. Alina’s session underscored GraalVM’s transformative potential, urging developers to explore its capabilities for modern Java applications.
Links:
Hashtags: #GraalVM #NativeImage #Java #AOT #AlinaYurenko #DevoxxFR2024
[DevoxxBE 2023] The Great Divergence: Bridging the Gap Between Industry and University Java
At Devoxx Belgium 2023, Felipe Yanaga, a teaching assistant at the University of North Carolina at Chapel Hill and a Robertson Scholar, delivered a compelling presentation addressing the growing disconnect between the vibrant use of Java in industry and its outdated perception in academia. As a student with internships at Amazon and Google, and a fellow at UNC’s Computer Science Experience Lab, Felipe draws on his unique perspective to highlight how universities lag in teaching modern Java practices. His talk explores the reasons behind this divergence, the negative perceptions students hold about Java, and actionable steps to revitalize its presence in academic settings.
Java’s Strength in Industry
Felipe begins by emphasizing Java’s enduring relevance in the professional world. Far from the “Java is dead” narrative that periodically surfaces online, the language thrives in industry, powered by innovations like Quarkus, GraalVM, and a rapid six-month release cycle. Companies sponsoring Devoxx, such as Red Hat and Oracle, exemplify Java’s robust ecosystem, leveraging frameworks and tools that enhance developer productivity. For instance, Felipe references the keynote by Brian Goetz, which outlined Java’s roadmap, showcasing its adaptability to modern development needs by drawing inspiration from other languages. This continuous evolution ensures Java remains a cornerstone for enterprise applications, from microservices to large-scale systems.
However, Felipe points out a troubling trend: despite its industry strength, Java’s popularity is declining in metrics like GitHub’s language rankings and the TIOBE Index. While JavaScript and Python have surged, Java’s share of relevant Google searches has dropped from 26% in 2002 to under 10% by 2023. Felipe attributes this partly to a shift in academic settings, where the foundation for programming passion is often laid. The disconnect between industry innovation and university curricula is a critical issue that needs addressing to sustain Java’s future.
The Academic Lag: Java’s Outdated Image
In universities, Java’s reputation suffers from outdated teaching practices. Felipe notes that many institutions, including top U.S. universities, have shifted introductory courses from Java to Python, citing Java’s perceived complexity and age. A 2017 quote from a Stanford professor illustrates this sentiment, claiming Java “shows its age” and prompting a move to Python for introductory courses. Surveys of 70 leading U.S. universities confirm this trend, with Python now dominating as the primary teaching language, while Java is relegated to data structures or object-oriented programming courses.
Felipe’s own experience at UNC-Chapel Hill reflects this shift. A decade ago, Java dominated the curriculum, but by 2023, Python had overtaken introductory and database courses. This transition reinforces a perception among students that Java is verbose, bloated, and outdated. Felipe conducted a survey among 181 students in a software engineering course, revealing stark insights: 42% believed Python was in highest industry demand, 67% preferred Python for building REST APIs, and terms like “tedious,” “boring,” and “outdated” dominated a word cloud describing Java. One student even remarked that Java is suitable only for maintaining legacy code, a sentiment that underscores the stigma Felipe aims to dismantle.
The On-Ramp Challenge: Simplifying Java’s Introduction
A significant barrier to Java’s adoption in academia is its steep learning curve for beginners. Felipe contrasts Python’s straightforward “hello world” with Java’s intimidating boilerplate code, such as public static void main. This complexity overwhelms novices, who grapple with concepts like classes and static methods without clear explanations. Instructors often dismiss these as “magic,” which disengages students and fosters a negative perception. Felipe highlights Java’s JEP 445, which introduces unnamed classes and instance main methods to reduce boilerplate, as a promising step to make Java more accessible. By simplifying the initial experience, such innovations could align Java’s on-ramp with Python’s ease, engaging students early and encouraging exploration.
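The contrast Felipe draws is easy to see side by side. The first form compiles on any JDK; the second is the JEP 445 shape (a preview feature in Java 21), shown in comments since it needs preview flags to run:

```java
// The classic boilerplate that greets Java beginners: a class, a
// 'public static void main', a String[] parameter -- all before the first print.
public class HelloWorldContrast {
    public static void main(String[] args) {
        System.out.println(greeting());
    }

    static String greeting() {
        return "Hello, world!";
    }
}

// With JEP 445, the same program shrinks to an instance main method in an
// unnamed class -- no visibility modifier, no static, no class declaration:
//
//     void main() {
//         System.out.println("Hello, world!");
//     }
```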
Beyond the language itself, the Java ecosystem poses additional challenges. Installing Java is daunting for beginners, with multiple Oracle websites offering conflicting instructions. Felipe recounts his own struggle as a student, only navigating this thanks to his father’s guidance. Tools like SDKMan and JBang simplify installation and scripting, but these are often unknown to students outside the Java community. Similarly, choosing an IDE—IntelliJ, Eclipse, or VS Code—adds another layer of complexity. Felipe advocates for clear, standardized guidance, such as recommending SDKMan and IntelliJ, to streamline the learning process and make Java’s ecosystem more approachable.
Bridging the Divide: Community and Mentorship
To reverse the declining trend in academia, Felipe proposes actionable steps centered on community engagement. He emphasizes the need for industry professionals to connect with universities, citing examples like Tom from Info Support, who collaborates with local schools to demonstrate Java’s real-world applications. By mentoring students and updating professors on modern tools like Maven, Gradle, and Quarkus, industry can reshape Java’s image. Felipe also encourages inviting students to Java User Groups (JUGs), where they can interact with professionals and discover tools that enhance Java development. These initiatives, he argues, plant seeds of enthusiasm that students will share with peers, amplifying Java’s appeal.
Felipe stresses that small actions, like a 10-minute conversation with a student, can make a significant impact. By demystifying stereotypes—such as Java being slow or bloated—and showcasing frameworks like Quarkus with hot reload capabilities, professionals can counter misconceptions. He also addresses the lack of Java-focused workshops compared to Python and JavaScript, urging the community to actively reach out to students. This collective effort, Felipe believes, is crucial to ensuring the next generation of developers sees Java as a vibrant, modern language, not a relic of the past.
Links:
- University of North Carolina at Chapel Hill
- Duke University
Hashtags: #Java #SoftwareDevelopment #Education #Quarkus #GraalVM #UNCChapelHill #DukeUniversity #FelipeYanaga
[SpringIO2023] Going Native: Fast and Lightweight Spring Boot Applications with GraalVM
At Spring I/O 2023 in Barcelona, Alina Yurenko, a developer advocate at Oracle Labs, captivated the audience with her deep dive into GraalVM Native Image support for Spring Boot 3.0. Her session, a blend of technical insights, live demos, and community engagement, showcased how GraalVM transforms Spring Boot applications into fast-starting, lightweight native executables that eliminate the need for a JVM. By leveraging GraalVM’s ahead-of-time (AOT) compilation, developers can achieve significant performance gains, reduced memory usage, and enhanced security, making it a game-changer for cloud-native deployments.
GraalVM: Beyond a Traditional JDK
Alina began by demystifying GraalVM, a versatile platform that extends beyond a standard JDK. While it can run Java applications using the OpenJDK HotSpot VM with an optimized Graal compiler, the spotlight was on its Native Image feature. This AOT compilation process converts a Spring Boot application into a standalone native executable, stripping away runtime code loading and compilation. The result? Applications that start in fractions of a second and consume minimal memory. Alina emphasized that GraalVM’s ability to include only reachable code—application logic, dependencies, and necessary JDK classes—reduces binary size and enhances efficiency, a critical advantage for cloud environments where resources are costly.
Performance and Resource Efficiency in Action
Through live demos, Alina illustrated GraalVM’s impact using the Spring Pet Clinic application. On her laptop, the JVM version took 1.5 seconds to start, while the native executable launched in just 0.3 seconds—a fivefold improvement. The native version was also significantly smaller, at roughly 50 MB without compression, compared to the JVM’s bulkier footprint. To stress-test performance, Alina ran a million requests against a simple Spring Boot app, comparing JVM and native modes. The JVM achieved 80k requests per second, while the native image hit 67k. However, with profile-guided optimizations (PGO), which mimic JVM’s runtime profiling at build time, the optimized native version reached 81k requests per second, rivaling JVM peak throughput. These demos underscored GraalVM’s ability to balance startup speed, low memory usage, and competitive throughput.
Security and Compact Packaging
Alina highlighted GraalVM’s security benefits, noting that native images eliminate runtime code loading, reducing attack vectors like those targeting just-in-time compilation. Only reachable code is included, minimizing the risk of unused dependencies introducing vulnerabilities. Dynamic features like reflection require explicit configuration, ensuring deliberate control over runtime behavior. On packaging, Alina showcased how native images can be compressed using tools like UPX, achieving sizes as low as a few megabytes, though she cautioned about potential runtime decompression trade-offs. These features make GraalVM ideal for deploying compact, secure applications in constrained environments like Kubernetes or serverless platforms.
Practical Integration with Spring Boot
The session also covered GraalVM’s seamless integration with Spring Boot 3.0, which graduated Native Image support from the experimental Spring Native project to general availability in November 2022. Spring Boot’s AOT processing step optimizes applications for native compilation, reducing reflective calls and generating configuration files for GraalVM. Alina demonstrated how Maven and Gradle plugins, along with the GraalVM Reachability Metadata Repository, simplify builds by automatically handling library configurations. For developers, this means minimal changes to existing workflows, with tools like the tracing agent and Spring’s runtime hints easing the handling of dynamic features. Alina’s practical advice—develop on the JVM for fast feedback, then compile to native in CI/CD pipelines—resonated with attendees aiming to adopt GraalVM.
Links:
[SpringIO2022] Ahead Of Time and Native in Spring Boot 3.0
At Spring I/O 2022 in Barcelona, Brian Clozel and Stéphane Nicoll, both engineers at VMware, delivered a comprehensive session on Ahead Of Time (AOT) processing and native compilation in Spring Boot 3.0 and Spring Framework 6.0. Their talk explored the integration of GraalVM native capabilities, detailing the AOT engine’s design, its use by libraries, and practical steps for developers. Through a live demo, they showcased how to transform a Spring application into a native binary, highlighting performance gains and configuration challenges.
GraalVM Native Compilation: Core Concepts
Brian opened by introducing GraalVM, a versatile JVM supporting multiple languages and optimized Just-In-Time (JIT) compilation. The talk focused on its native compilation feature, which transforms Java applications into standalone binaries for specific CPU architectures. This process involves static analysis at build time, processing all classes on a fixed classpath, and determining reachable code. Benefits include memory efficiency (megabytes instead of gigabytes), millisecond startup times, and suitability for CLI tools, serverless functions, and high-density container deployments.
However, challenges exist. Static analysis may require additional reachability metadata for reflection or resources, as GraalVM cannot always infer runtime behavior. Brian demonstrated a case where reflection-based method invocation fails without metadata, as the native image excludes unreachable code. Debugging is less straightforward than with traditional JVMs, and Java agents, like OpenTelemetry, are unsupported. The speakers emphasized that AOT aims to bridge these gaps, making native compilation accessible for Spring applications.
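The failure mode Brian demonstrated can be sketched with plain reflection. This code runs fine on a regular JVM; the comment marks where a native image would break without metadata:

```java
import java.lang.reflect.Method;

public class ReflectionDemo {
    static String greet() { return "hello from reflection"; }

    static Object callViaReflection() throws Exception {
        // Works on a regular JVM. In a native image this fails at run time
        // unless the class and method are registered as reachability metadata:
        // build-time static analysis cannot see the call target behind
        // Class.forName, so greet() is never compiled into the binary.
        Class<?> clazz = Class.forName(ReflectionDemo.class.getName());
        Method m = clazz.getDeclaredMethod("greet");
        return m.invoke(null);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(callViaReflection());
    }
}
```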
Spring’s AOT Engine: Design and Integration
Stéphane detailed the AOT engine, a core component of Spring Framework 6.0-M4 and Spring Boot 3.0-M3, designed to preprocess application configurations at build time. Unlike annotation processors, it operates post-compilation, analyzing the bean factory and generating Java code to replace dynamic configuration parsing. This code, viewable in modern IDEs like IntelliJ, mimics hand-written configurations but is automatically generated, preserving package visibility and including Javadoc for clarity.
The engine supports two approaches: contributing reachability metadata for reflection or resources, or generating code to simplify static analysis. For example, a demo CLI application used Spring’s RuntimeHints API to register reflection for a SimpleHelloService class and include a classpath resource. The native build tools Gradle plugin, provided by the GraalVM team, integrates with Spring Boot’s plugin to trigger AOT processing and native compilation. Stéphane showed how the generated binary achieved rapid startup and low memory usage, with configuration classes handled automatically by the AOT engine.
Developer Cookbook: Making Applications Native-Ready
The speakers introduced a developer cookbook to guide Spring users toward native compatibility. The first step is running the application in AOT mode on the JVM, validating the engine’s understanding of the configuration without native compilation. This mode pre-processes the bean factory, reducing startup time and exposing issues early. Next, developers should reuse existing test suites, adapting them for AOT using generated sources and JUnit support. This identifies missing metadata, such as reflection or resource hints.
For third-party libraries or custom code, developers can contribute hints via the RuntimeHints API or validate them using a forthcoming Java agent. The GraalVM team is developing a reachability metadata repository, where the Spring team is contributing hints for popular libraries, reducing manual configuration. For advanced cases, developers can hook into the AOT engine to generate custom code, supported by a test compiler API to verify outcomes. Brian emphasized balancing hints and code generation, favoring simplicity unless performance demands otherwise.
Future Directions and Community Collaboration
The talk concluded with a roadmap for Spring Boot 3.0 and Spring Framework 6.0, targeting general availability by late 2022. The current milestones provide robust AOT infrastructure, with future releases expanding support for Spring libraries. The speakers highlighted collaboration with the GraalVM team to simplify native adoption and plans to align with Project Leyden for JVM optimizations. They encouraged feedback via the Spring I/O app and invited developers to explore the demo repository, which includes Maven and Gradle configurations.
This session equipped developers with tools to leverage AOT and native compilation, unlocking new use cases like serverless and high-density deployments while maintaining Spring’s developer-friendly ethos.
Links:
[SpringIO2022] JobRunr: Simplifying Distributed Job Scheduling with Spring
At Spring I/O 2022 in Barcelona, Ronald Dehuysser introduced JobRunr, an open-source Java library designed to streamline distributed background job processing. His engaging session, blending practical insights with live coding, showcased how JobRunr empowers developers to transform Java 8 lambdas into scalable, fault-tolerant jobs without complex infrastructure. Tailored for businesses handling moderate data volumes, Ronald’s talk highlighted JobRunr’s seamless integration with Spring and its potential to revolutionize job scheduling.
The Genesis of JobRunr: Solving Real-World Challenges
Ronald, a contractor from Belgium, kicked off by sharing the origins of JobRunr, born from a challenging “greenfield” fintech project. Tasked with building an invoicing platform on Google Cloud, he encountered a microservice architecture plagued by issues: no retry mechanisms, poor monitoring, and lost invoices due to untracked failures. The project’s eight microservices led to code duplication, prompting Ronald to question the microservice hype and advocate for simpler, modular monoliths. Frustrated by the lack of developer-friendly, open-source job scheduling tools, he created JobRunr to address these gaps, emphasizing ease of use, existing infrastructure, and automatic retries.
JobRunr’s philosophy is rooted in simplicity and practicality. Unlike solutions requiring heavy infrastructure like Apache Kafka or vendor-specific cloud services, JobRunr leverages SQL or NoSQL databases for persistence, making it embeddable with a single JAR. Ronald stressed that most businesses don’t need to process terabytes daily like tech giants. Instead, JobRunr targets complex business processes with gigabytes of data, offering a plug-and-play solution with built-in monitoring and fault tolerance.
Core Features: From Lambdas to Distributed Jobs
The heart of JobRunr lies in its ability to convert Java 8 lambdas into distributed background jobs. Ronald demonstrated this with a Spring service example, where a static BackgroundJob.enqueue method schedules jobs without altering existing code. Jobs are serialized as JSON, stored in a database, and processed by BackgroundJobServer instances across JVMs, enabling horizontal scaling in Kubernetes. A dashboard provides real-time insights into job status, with automatic retries (up to 10 by default) using an exponential backoff policy to handle failures gracefully.
For scheduling flexibility, JobRunr supports immediate, delayed, or recurring jobs. Ronald showcased the schedule API for jobs running after a delay (e.g., 24 hours) and the scheduleRecurrently method for daily tasks, using a readable Cron class to simplify cron expressions. The dashboard allows manual triggering of recurring jobs for testing, enhancing developer control. To prevent duplicate processing, JobRunr offers mutex support, though advanced features like this are part of the paid Pro version, balancing open-source accessibility with sustainability.
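The retry behavior described above can be sketched with a generic exponential backoff. JobRunr's exact formula is not spelled out in the talk, so the base delay and cap here are illustrative rather than the library's real defaults:

```java
public class BackoffSketch {
    // Generic exponential backoff: the delay doubles per attempt, up to a cap.
    static long delaySeconds(int attempt, long baseSeconds, long capSeconds) {
        long delay = baseSeconds * (1L << attempt);  // base * 2^attempt
        return Math.min(delay, capSeconds);
    }

    public static void main(String[] args) {
        // 10 attempts, mirroring JobRunr's default retry count.
        for (int attempt = 0; attempt < 10; attempt++) {
            System.out.println("retry " + attempt + " after "
                    + delaySeconds(attempt, 3, 3600) + "s");
        }
    }
}
```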
Under the Hood: Bytecode Magic and Spring Native
Delving into JobRunr’s internals, Ronald revealed its use of ASM for bytecode manipulation, translating lambdas into executable jobs. While some criticized this as “black magic,” he countered with assurances of binary compatibility, backed by Oracle’s Java Language Specification and his participation in Oracle’s Quality Outreach Program. JobRunr’s compatibility spans Java 8 to 17, tested across JVMs using Testcontainers, ensuring robustness. The introduction of JobRequest and JobRequestHandler in version 4 further simplifies job definition, aligning with the command handler pattern for explicit job processing.
A highlight was JobRunr’s integration with Spring Native, enabling compilation to GraalVM native images for millisecond startup times and low memory usage. Ronald collaborated with the Spring team to ensure reflection compatibility, making JobRunr a natural fit for cloud-native deployments. The live coding demo, despite minor hiccups, showcased JobRunr’s ease of use: Ronald built an uptime monitoring service, scheduling recurring website checks with a few lines of code, monitored via the dashboard. This practicality resonated with attendees, who appreciated JobRunr’s developer-friendly approach.
Impact and Future: Empowering Developers
JobRunr’s adoption spans medical image processing, web crawling, and document generation, with 30,000 monthly Maven downloads. Ronald shared a compelling anecdote: a company reported a 20% developer productivity boost by using the dashboard’s requeue feature for first-line support, reducing interruptions. Looking ahead, JobRunr aims to enhance GraalVM support, add OpenID Connect for dashboard authentication, and incorporate community-driven features. The Pro version funds development, with 5% of profits supporting environmental causes like tree planting.
Ronald’s session underscored JobRunr’s mission to simplify distributed job scheduling, making it an invaluable tool for Spring developers tackling real-world business challenges with minimal overhead.
Links:
[DevoxxFR2012] “Obésiciel” and Environmental Impact: Green Patterns Applied to Java – Toward Sustainable Computing
Olivier Philippot is an electronics and computer engineer with over a decade of experience in energy management systems and sustainable technology design. Having worked in R&D labs and large industrial groups, he has dedicated his career to understanding the environmental footprint of digital systems. A founding member of the French Green IT community, Olivier contributes regularly to GreenIT.fr, participates in AFNOR working groups on eco-design standards, and trains organizations on sustainable IT practices. His work bridges hardware, software, and policy to reduce the carbon intensity of computing.
This article presents a comprehensively expanded analysis of Olivier Philippot’s 2012 DevoxxFR presentation, Obésiciel and Environmental Impact: Green Patterns Applied to Java, reimagined as a foundational text on software eco-design and technical debt’s environmental cost. The talk introduced the concept of obésiciel, software that grows increasingly resource-hungry with each release, driving premature hardware obsolescence. Philippot revealed a startling truth: manufacturing a single computer emits seventy to one hundred times more CO2 than one year of use, yet software bloat has tripled performance demands every five years, reducing average PC lifespan from six to two years.
Through Green Patterns, JVM tuning strategies, data efficiency techniques, and lifecycle analysis, this piece offers a practical framework for Java developers to build lighter, longer-lived, and lower-impact applications. Updated for 2025, it integrates GraalVM native images, Project Leyden, energy-aware scheduling, and carbon-aware computing, providing a complete playbook for sustainable Java development.
The Environmental Cost of Software Bloat
Manufacturing a laptop emits two hundred to three hundred kilograms of CO2 equivalent. The use phase emits twenty to fifty kilograms per year. Software-driven obsolescence forces upgrades every two to three years. Philippot cited Moore’s Law irony: while transistors double every eighteen months, software efficiency has decreased due to abstraction layers, framework overhead, and feature creep.
Green Patterns for Data Efficiency
Green Patterns for Java include data efficiency. String concatenation in a loop is inefficient, because each += copies the entire accumulated string:

String log = "";
for (String s : list) log += s; // O(n²): allocates a new String on every iteration

Use StringBuilder instead, which appends into a single growable buffer:

StringBuilder sb = new StringBuilder();
for (String s : list) sb.append(s);
String log = sb.toString();

Beyond string handling, favor compression, compact binary formats like Protocol Buffers over verbose text encodings, and lazy loading of data that may never be read.
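Of these patterns, lazy loading is the easiest to apply in plain Java: defer an expensive computation until first use and cache the result, so work that is never needed never consumes CPU or memory. A minimal memoizing Supplier sketch:

```java
import java.util.function.Supplier;

public class Lazy<T> implements Supplier<T> {
    private Supplier<T> delegate;  // dropped after first use so it can be GC'd
    private T value;
    private boolean computed;

    public Lazy(Supplier<T> delegate) {
        this.delegate = delegate;
    }

    @Override
    public synchronized T get() {
        if (!computed) {
            value = delegate.get();  // pay the cost only when actually needed
            delegate = null;
            computed = true;
        }
        return value;
    }
}
```

Usage: `Lazy<byte[]> report = new Lazy<>(() -> renderLargeReport());` does no work at all if the report is never read, and renders it exactly once otherwise.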
JVM Tuning for Energy Efficiency
JVM optimization includes flags such as:
-XX:+UseZGC (low-pause-time garbage collector)
-XX:ReservedCodeCacheSize=128m (caps memory reserved for JIT-compiled code)
-XX:+UseCompressedOops (32-bit object references on heaps below 32 GB)
-XX:+UseContainerSupport (respects container CPU and memory limits)
GraalVM Native Image reduces memory by ninety percent, startup to fifty milliseconds, and energy by sixty percent.
Carbon-Aware Computing in 2025
In 2025, carbon-aware Java includes Project Leyden for static images without warmup, energy profilers like JFR and PowerAPI, cloud carbon APIs from AWS and GCP, and edge deployment to reduce data center hops.
Links
Relevant links include GreenIT.fr at greenit.fr, GraalVM Native Image at graalvm.org/native-image, and the original video at YouTube: Obésiciel and Environmental Impact.