
[DevoxxFR2012] “Obésiciel” and Environmental Impact: Green Patterns Applied to Java – Toward Sustainable Computing

Olivier Philippot is an electronics and computer engineer with over a decade of experience in energy management systems and sustainable technology design. Having worked in R&D labs and large industrial groups, he has dedicated his career to understanding the environmental footprint of digital systems. A founding member of the French Green IT community, Olivier contributes regularly to GreenIT.fr, participates in AFNOR working groups on eco-design standards, and trains organizations on sustainable IT practices. His work bridges hardware, software, and policy to reduce the carbon intensity of computing.

This article presents a comprehensively expanded analysis of Olivier Philippot’s 2012 DevoxxFR presentation, Obésiciel and Environmental Impact: Green Patterns Applied to Java, reimagined as a foundational text on software eco-design and technical debt’s environmental cost. The talk introduced the concept of obésiciel, software that grows increasingly resource-hungry with each release, driving premature hardware obsolescence. Philippot revealed a startling truth: manufacturing a single computer emits seventy to one hundred times more CO2 than one year of use, yet software bloat has tripled performance demands every five years, reducing average PC lifespan from six to two years.

Through Green Patterns, JVM tuning strategies, data efficiency techniques, and lifecycle analysis, this piece offers a practical framework for Java developers to build lighter, longer-lived, and lower-impact applications. Updated for 2025, it integrates GraalVM native images, Project Leyden, energy-aware scheduling, and carbon-aware computing, providing a complete playbook for sustainable Java development.

The Environmental Cost of Software Bloat

Manufacturing a laptop emits two hundred to three hundred kilograms of CO2 equivalent. The use phase emits twenty to fifty kilograms per year. Software-driven obsolescence forces upgrades every two to three years. Philippot cited Moore’s Law irony: while transistors double every eighteen months, software efficiency has decreased due to abstraction layers, framework overhead, and feature creep.

Green Patterns for Data Efficiency

Green Patterns for Java begin with data efficiency. String concatenation in a loop is inefficient because each += allocates a new String:

String log = "";
for (String s : list) log += s;

Use StringBuilder instead:

StringBuilder sb = new StringBuilder();
for (String s : list) sb.append(s);

Also use compression, binary formats like Protocol Buffers, and lazy loading.
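The compression advice can be sketched with the JDK's built-in GZIP support; the class and method names below are illustrative, not from the talk:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPOutputStream;

public class PayloadCompression {
    // Compress a text payload before it crosses the network: fewer bytes
    // transferred means less energy spent on I/O at every hop.
    static byte[] gzip(String payload) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(out)) {
            gz.write(payload.getBytes(StandardCharsets.UTF_8));
        }
        return out.toByteArray();
    }
}
```

Repetitive payloads such as logs and JSON compress especially well, which is why this pattern pays off most on chatty services.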

JVM Tuning for Energy Efficiency

Energy-aware JVM tuning options include:

-XX:+UseZGC
-XX:ReservedCodeCacheSize=128m
-XX:+UseCompressedOops
-XX:+UseContainerSupport

GraalVM Native Image can reduce memory footprint by up to ninety percent, cut startup time to around fifty milliseconds, and lower energy consumption by roughly sixty percent for short-lived workloads.

Carbon-Aware Computing in 2025

EDIT:
In 2025, carbon-aware Java includes Project Leyden for static images without warmup, energy profilers like JFR and PowerAPI, cloud carbon APIs from AWS and GCP, and edge deployment to reduce data center hops.

Links

Relevant links include GreenIT.fr at greenit.fr, GraalVM Native Image at graalvm.org/native-image, and the original video at YouTube: Obésiciel and Environmental Impact.

[DevoxxBE2012] On the Road to JDK 8: Lambda, Parallel Libraries, and More

Joseph Darcy, a key figure in Oracle’s JDK engineering team, presented an insightful overview of JDK 8 developments. With extensive experience in language evolution, including leading Project Coin for JDK 7, Joseph outlined the platform’s future directions, balancing innovation with compatibility.

He began by contextualizing JDK 8’s major features, particularly lambda expressions and default methods, set for release in September 2013. Joseph polled the audience on JDK usage, noting the impending end of public updates for JDK 6 and urging transitions to newer versions.

Emphasizing a quantitative approach to compatibility, Joseph described experiments analyzing millions of lines of code to inform decisions, such as lambda conversions from inner classes.

Evolving the Language with Compatibility in Mind

Joseph elaborated on the JDK’s evolution policy, prioritizing binary compatibility while allowing measured source and behavioral changes. He illustrated this with diagrams showing compatibility spaces for different release types, from updates to full platforms.

A core challenge, he explained, is evolving interfaces compatibly. Unlike classes, interfaces cannot add methods without breaking implementations. To address this, JDK 8 introduces default methods, enabling API evolution without user burden.
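The mechanism can be sketched in a few lines (Greeter is a hypothetical interface, not one of Joseph's examples): adding shout as a default method leaves every existing implementation source- and binary-compatible.

```java
// A functional interface evolved compatibly: implementations written
// before shout() existed keep working because it has a default body.
interface Greeter {
    String greet(String name);

    default String shout(String name) {
        return greet(name).toUpperCase() + "!";
    }
}
```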

This ties into lambda support, where functional interfaces facilitate closures. Joseph contrasted this with past changes like generics, which preserved migration compatibility through erasure, avoiding VM modifications.

Lambda Expressions and Implementation Techniques

Diving into lambdas, Joseph defined them as anonymous methods capturing enclosing scope values. He traced their long journey into Java, noting their ubiquity in modern languages.
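That capture semantics can be illustrated with a small sketch of our own: the lambda reads the effectively final local base from its enclosing scope.

```java
import java.util.function.IntUnaryOperator;

public class CaptureDemo {
    static IntUnaryOperator makeAdder() {
        int base = 10; // effectively final local, captured by the lambda
        return x -> x + base;
    }
}
```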

For implementation, Joseph rejected simple inner class translations due to class explosion and performance overhead. Instead, JDK 8 leverages invokedynamic from JDK 7, allowing runtime strategies like class spinning or method handles.

This indirection decouples binary representation from implementation, enabling optimizations. Joseph shared benchmarks showing non-capturing lambdas outperforming inner classes, especially multithreaded.

Serialization posed challenges, resolved via indirection to reconstruct lambdas independently of runtime details.

Parallel Libraries and Bulk Operations

Joseph highlighted how lambdas enable powerful libraries, abstracting behavior as generics abstract types. Streams introduce pipeline operations—filter, map, reduce—with laziness and fork-join parallelism.

Using the Fork/Join Framework from JDK 7, these libraries handle load balancing implicitly, encapsulating complexity. Joseph demonstrated conversions from collections to streams, facilitating scalable concurrent applications.
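The pipeline style can be sketched as follows (an illustrative example using post-JDK 8 collection literals, not code from the talk):

```java
import java.util.List;

public class StreamDemo {
    // filter -> map -> reduce, with parallelism delegated to Fork/Join.
    static int sumOfEvenSquares(List<Integer> nums) {
        return nums.parallelStream()
                .filter(n -> n % 2 == 0)
                .map(n -> n * n)
                .reduce(0, Integer::sum);
    }
}
```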

Broader JDK 8 Features and Future Considerations

Beyond lambdas, Joseph mentioned annotations on types and repeating annotations, enhancing expressiveness. He stressed deferring decisions to avoid constraining future evolutions, like potential method reference enhancements.

In summary, Joseph portrayed JDK 8 as a coordinated update across language, libraries, and VM, inviting community evaluation through available builds.


[DevoxxFR2012] Node.js and JavaScript Everywhere – A Comprehensive Exploration of Full-Stack JavaScript in the Modern Web Ecosystem

Matthew Eernisse is a seasoned web developer whose career spans over fifteen years of building interactive, high-performance applications using JavaScript, Ruby, and Python. As a core engineer at Yammer, Microsoft’s enterprise social networking platform, he has been at the forefront of adopting Node.js for mission-critical services, contributing to a polyglot architecture that leverages the best tools for each job. Author of the influential SitePoint book Build Your Own Ajax Web Applications, Matthew has long championed JavaScript as a first-class language beyond the browser. A drummer, fluent Japanese speaker, and father of three living in San Francisco, he brings a unique blend of technical depth, practical experience, and cultural perspective to his work. His personal blog at fleegix.org remains a valuable archive of JavaScript patterns and web development insights.

This article presents an exhaustively elaborated, deeply extended, and comprehensively restructured expansion of Matthew Eernisse’s 2012 DevoxxFR presentation, Node.js and JavaScript Everywhere, transformed into a definitive treatise on the rise of full-stack JavaScript and its implications for modern software architecture. Delivered at a pivotal moment, just three years after Node.js’s initial release, the talk challenged prevailing myths about server-side JavaScript while offering a grounded, experience-driven assessment of its real-world benefits. Rather than selling a utopian vision of “write once, run anywhere,” Matthew argued that Node.js’s true power lay in its event-driven, non-blocking I/O model, ecosystem velocity, and developer productivity, advantages that were already reshaping Yammer’s backend services.

This expanded analysis delves into the technical foundations of Node.js, including the V8 engine, libuv, and the event loop, the architectural patterns that emerged at Yammer such as microservices, real-time messaging, and API gateways, and the cultural shifts required to adopt JavaScript on the server. It includes detailed code examples, performance benchmarks, deployment strategies, and lessons learned from production systems handling millions of users.

EDIT:
In the 2025 landscape, this piece integrates Node.js 20+, Deno, Bun, TypeScript, Server Components, Edge Functions, and WebAssembly, while preserving the original’s pragmatic, hype-free tone. Through rich narratives, system diagrams, and forward-looking speculation, this work serves as both a historical archive and a practical guide for any team evaluating JavaScript as a backend language.

Debunking the Myths of “JavaScript Everywhere”

The phrase JavaScript Everywhere became a marketing slogan that obscured the technology’s true value. Matthew opened his talk by debunking three common myths. First, the idea that developers write the same code on client and server is misleading. In reality, client and server have different concerns: security, latency, and state management. Shared logic such as validation or formatting is possible, but full code reuse is rare and often an anti-pattern. Second, the notion that Node.js is only for real-time apps is incorrect. While excellent for WebSockets and chat, Node.js also excels in I/O-heavy microservices, API gateways, and data transformation pipelines. Third, the belief that Node.js replaces Java, Rails, or Python is false. At Yammer, Node.js was one tool among many: Java powered core services, Ruby on Rails drove the web frontend, and Node.js handled high-concurrency, low-latency endpoints. The real win was developer velocity, ecosystem momentum, and operational simplicity.

The Node.js Architecture: Event Loop and Non-Blocking I/O

Node.js is built on a single-threaded, event-driven architecture. Unlike traditional threaded servers like Apache or Tomcat, Node.js uses an event loop to handle thousands of concurrent connections. A simple HTTP server demonstrates this:

const http = require('http');

http.createServer((req, res) => {
  setTimeout(() => {
    res.end('Hello after 2 seconds');
  }, 2000);
}).listen(3000);

While one request waits, the event loop processes others. This is powered by libuv, which abstracts OS-level async I/O such as epoll, kqueue, and IOCP. Google’s V8 engine compiles JavaScript to native machine code using JIT compilation. In 2012, V8 was already outperforming Ruby and Python in raw execution speed. Recently, V8 TurboFan and Ignition have pushed performance into Java and C# territory.

Yammer’s Real-World Node.js Adoption

In 2011, Yammer began experimenting with Node.js for real-time features, activity streams, notifications, and mobile push. By 2012, they had over fifty Node.js microservices in production, a real-time messaging backbone using Socket.IO, an API proxy layer routing traffic to Java and Rails backends, and a mobile backend serving iOS and Android apps. A real-time activity stream example illustrates this:

io.on('connection', (socket) => {
  socket.on('join', (room) => {
    socket.join(room);
    redis.subscribe(`activity:${room}`);
  });
});

redis.on('message', (channel, message) => {
  const room = channel.split(':')[1];
  io.to(room).emit('activity', JSON.parse(message));
});

This architecture scaled to millions of concurrent users with sub-100ms latency.

The npm Ecosystem and Developer Productivity

Node.js’s greatest strength is npm, the largest package registry in the world. In 2012, it had approximately twenty thousand packages; today it exceeds two and a half million. At Yammer, developers used Express.js for routing, Socket.IO for WebSockets, Redis for pub/sub, Mocha and Chai for testing, and Grunt (since supplanted by Webpack and Vite) for builds. Developers could prototype a service in hours, not days.

Deployment, Operations, and Observability

Yammer ran Node.js on Ubuntu LTS under Upstart (the role systemd fills today). Services were containerized early, adopting Docker in 2013. Monitoring relied on StatsD and Graphite, with logs shipped via Winston to an ELK stack. A docker-compose example shows this:

version: '3'
services:
  api:
    image: yammer/activity-stream
    ports: ["3000:3000"]
    environment:
      - REDIS_URL=redis://redis:6379

The 2025 JavaScript Backend Landscape

EDIT:
The 2025 landscape includes Node.js 20 with ESM and Workers, Fastify and Hono instead of Express, native WebSocket API and Server-Sent Events instead of Socket.IO, Vite, esbuild, and SWC instead of Grunt, and async/await and Promises instead of callbacks. New runtimes include Deno, secure by default and TypeScript-native, and Bun, Zig-based with ten times faster startup. Edge platforms include Cloudflare Workers, Vercel Edge Functions, and AWS Lambda@Edge.

Matthew closed with a clear message: ignore the hype. Node.js is not a silver bullet. But for I/O-bound, high-concurrency, real-time, or rapid-prototype services, it is unmatched. In 2025, as full-stack TypeScript, server components, and edge computing dominate, his 2012 insights remain profoundly relevant.

Links

Relevant links include Matthew Eernisse’s blog at fleegix.org, the Yammer Engineering Blog at engineering.yammer.com, the Node.js Official Site at nodejs.org, and the npm Registry at npmjs.com. The original video is available at YouTube: Node.js and JavaScript Everywhere.

[DevoxxBE2012] Spring 3.2 and 3.3 Themes and Trends

In a dynamic presentation, Josh Long, a prominent Spring developer advocate and author, delved into the evolving landscape of the Spring Framework. As someone deeply embedded in the Spring ecosystem, Josh highlighted how Spring continues to address modern development challenges while maintaining its core principles. He began by recapping the framework’s foundational aspects, emphasizing its role in promoting clean, extensible code without unnecessary reinvention.

Josh explained that Spring operates as a lightweight dependency injection container, layered with vertical technologies for diverse needs like mobile development, big data handling, and web applications. This decoupling from underlying infrastructure enables seamless transitions between environments, from traditional servers to cloud platforms. He noted the increasing complexity in data stores, caching solutions, and client interfaces, underscoring Spring’s relevance in today’s fragmented tech world. By focusing on dependency injection, aspect-oriented programming, and portable service abstractions, Spring empowers developers to build robust, maintainable systems.

Transitioning to recent advancements, Josh reviewed Spring 3.1, released in December 2011, which introduced features like environment profiles and Java-based configuration. These enhancements facilitate tailored bean activations across development stages, simplifying configurations that diverge between local setups and production clouds. He illustrated this with examples of data sources, showing how profiles partition configurations effectively.

Moreover, Josh discussed the caching abstraction in Spring 3.1, which provides a unified SPI for various caches like EHCache and Redis. This abstraction, combined with annotations for cache management, streamlines performance optimizations without locking developers into specific implementations.
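The core idea behind any such abstraction (compute on a cache miss, reuse on a hit) can be sketched without Spring at all; the class below is a toy of our own, not Spring's actual implementation:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Toy cache: look the value up by key, invoke the loader only on a miss.
// Spring's cache annotations wrap this pattern around a method call,
// with the backing store (EHCache, Redis, ...) swapped in via the SPI.
class SimpleCache<K, V> {
    private final Map<K, V> store = new ConcurrentHashMap<>();

    V getOrCompute(K key, Function<K, V> loader) {
        return store.computeIfAbsent(key, loader);
    }
}
```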

Core Refinements in Spring 3.2

Shifting focus to Spring 3.2, slated for release by year’s end, Josh outlined its core refinements. Building on Java 7, it incorporates asynchronous support from Servlet 3.0, enabling efficient handling of long-running tasks in web applications. He demonstrated this with controller methods returning Callable or DeferredResult, allowing requests and responses to process in separate threads, enhancing scalability.
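The thread handoff can be illustrated outside a servlet container with plain JDK concurrency; the snippet below is a simplified stand-in for a Callable-returning controller, not Spring MVC code:

```java
import java.util.concurrent.CompletableFuture;

public class AsyncSketch {
    // The caller's thread returns immediately; the slow work completes on
    // a worker thread, mirroring how Spring MVC frees the request thread
    // when a controller returns Callable or DeferredResult.
    static CompletableFuture<String> report() {
        return CompletableFuture.supplyAsync(() -> "report ready");
    }
}
```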

Josh also introduced the Spring MVC Test Framework, a tool for unit testing controllers with mocked servlet APIs. This framework, revamped for 3.2, integrates seamlessly with existing test contexts, promoting better code quality through isolated testing.

Additionally, upgrades to the Spring Expression Language (SpEL) and backported features from 3.1.x bolster the framework’s expressiveness and compatibility. Josh emphasized that these changes maintain Spring’s low-risk upgrade path, ensuring stability for enterprise adopters.

Looking Ahead to Spring 3.3

Josh then previewed Spring 3.3, expected in late 2013, which promises substantial innovations. Central to this release is support for Java SE 8 features, including lambdas, which align naturally with Spring’s single abstract method interfaces. He showcased how lambdas simplify callbacks in components like JdbcTemplate, reducing boilerplate code.
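The fit between lambdas and Spring's single-abstract-method callbacks can be sketched with a toy template (MiniTemplate mimics the shape of JdbcTemplate's row-mapping callback; it is not Spring code):

```java
import java.util.List;
import java.util.function.Function;
import java.util.stream.Collectors;

class MiniTemplate {
    // The callback is a SAM type, so a lambda replaces the pre-JDK 8
    // anonymous inner class that callers previously had to write.
    static <T> List<T> query(List<String> rows, Function<String, T> mapper) {
        return rows.stream().map(mapper).collect(Collectors.toList());
    }
}
```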

Furthermore, Josh touched on enhanced Groovy support and the integration of the Grails Bean Builder, expanding Spring’s appeal for dynamic languages. The release will also track Java EE 7 APIs, such as JCache 1.0 and JMS 2.0, with annotation-centric endpoints for message-driven architectures.

WebSocket support, crucial for real-time web applications, will be fully integrated into Spring MVC, complementing existing messaging capabilities in Spring Integration.

Strategic Motivations and Community Impact

Throughout his talk, Josh articulated the motivations behind Spring’s shorter release cycles, aiming to deliver timely features without overwhelming users. He stressed the framework’s alignment with emerging standards, positioning it as a bridge between Java SE 7/8 and EE 7.

Josh also shared insights into community contributions, mentioning the GitHub-based model and Gradle builds that foster collaboration. He encouraged feedback, highlighting his role in curating community resources like the weekly roundup on springsource.org.

In closing, Josh fielded questions on topics like bean metadata navigation and conditional caching, reinforcing Spring’s commitment to developer productivity. His enthusiasm underscored Spring’s enduring value in navigating the complexities of modern software engineering.


[DevoxxFR2012] Lily: Big Data for Dummies – A Comprehensive Journey into Democratizing Apache Hadoop and HBase for Enterprise Java Developers

Lecturers

Steven Noels stands as one of the most visionary figures in the evolution of open-source Java ecosystems, having co-founded Outerthought in the early 2000s with a mission to push the boundaries of content management, RESTful architecture, and scalable data systems. His flagship creation, Daisy CMS, became a cornerstone for large-scale, multilingual content platforms used by governments and global enterprises, demonstrating that Java could power mission-critical, document-centric applications at internet scale. But Noels’ ambition extended far beyond traditional CMS. Recognizing the seismic shift toward big data in the late 2000s, he pivoted Outerthought—and later NGDATA—toward building tools that would make the Apache Hadoop ecosystem accessible to the average enterprise Java developer. Lily, launched in 2010, was the culmination of this vision: a platform that wrapped the raw power of HBase and Solr into a cohesive, Java-friendly abstraction layer, eliminating the need for MapReduce expertise or deep systems programming.

Bruno Guedes, an enterprise Java architect at SFEIR with over a decade of experience in distributed systems and search infrastructure, brought the practitioner’s perspective to the stage. Having worked with Lily from its earliest alpha versions, Guedes had deployed it in production environments handling millions of records, integrating it with legacy Java EE applications, Spring-based services, and real-time analytics pipelines. His hands-on experience—debugging schema migrations, tuning SolrCloud clusters, and optimizing HBase compactions—gave him unique insight into both the promise and the pitfalls of big data adoption in conservative enterprise settings. Together, Noels and Guedes formed a perfect synergy: the visionary architect and the battle-tested engineer, delivering a presentation that was equal parts inspiration and practical engineering.

Abstract

This article represents an exhaustively elaborated, deeply extended, and comprehensively restructured expansion of Steven Noels and Bruno Guedes’ seminal 2012 DevoxxFR presentation, “Lily, Big Data for Dummies”, transformed into a definitive treatise on the democratization of big data technologies for the Java enterprise. Delivered in a bilingual format that reflected the global nature of the Apache community, the original talk introduced Lily as a groundbreaking platform that unified Apache HBase’s scalable, distributed storage with Apache Solr’s full-text search and analytics capabilities, all through a clean, type-safe Java API. The core promise was radical in its simplicity: enterprise Java developers could build petabyte-scale, real-time searchable data systems without writing a single line of MapReduce, without mastering Zookeeper quorum mechanics, and without abandoning the comforts of POJOs, annotations, and IDE autocompletion.

This expanded analysis delves far beyond the original demo to explore the philosophical foundations of Lily’s design, the architectural trade-offs in integrating HBase and Solr, the real-world production patterns that emerged from early adopters, and the lessons learned from scaling Lily to billions of records. It includes detailed code walkthroughs, performance benchmarks, schema evolution strategies, and failure mode analyses.

EDIT:
Updated for the 2025 landscape, this piece maps Lily’s legacy concepts to modern equivalents—Apache HBase 2.5, SolrCloud 9, OpenSearch, Delta Lake, Trino, and Spring Data Hadoop—while preserving the original vision of big data for the rest of us. Through rich narratives, architectural diagrams, and forward-looking speculation, this work serves not just as a historical archive, but as a practical guide for any Java team contemplating the leap into distributed, searchable big data systems.

The Big Data Barrier in 2012: Why Hadoop Was Hard for Java Developers

To fully grasp Lily’s significance, one must first understand the state of big data in 2012. The Apache Hadoop ecosystem—launched in 2006—was already a proven force in internet-scale companies like Yahoo, Facebook, and Twitter. HDFS provided fault-tolerant, distributed storage. MapReduce offered a programming model for batch processing. HBase, modeled after Google’s Bigtable, delivered random, real-time read/write access to massive datasets. And Solr, forked from Lucene, powered full-text search at scale.

Yet for the average enterprise Java developer, this stack was inaccessible. Writing a MapReduce job required:
– Learning a functional programming model in Java that felt alien to OO practitioners.
– Mastering job configuration, input/output formats, and partitioners.
– Debugging distributed failures across dozens of nodes.
– Waiting minutes to hours for job completion.

HBase, while promising real-time access, demanded:
– Manual row key design to avoid hotspots.
– Deep knowledge of compaction, splitting, and region server tuning.
– Integration with Zookeeper for coordination.

Solr, though more familiar, required:
– Separate schema.xml and solrconfig.xml files.
– Manual index replication and sharding.
– Complex commit and optimization strategies.

The result? Big data remained the domain of specialized data engineers, not the Java developers who built the business logic. Lily was designed to change that.

Lily’s Core Philosophy: Big Data as a First-Class Java Citizen

At its heart, Lily was built on a simple but powerful idea: big data should feel like any other Java persistence layer. Just as Spring Data made MongoDB, Cassandra, or Redis accessible via repositories and annotations, Lily aimed to make HBase and Solr feel like JPA with superpowers.

The Three Pillars of Lily

Steven Noels articulated Lily’s architecture in three interconnected layers:

  1. The Storage Layer (HBase)
    Lily used HBase as its primary persistence engine, storing all data as versioned, column-family-based key-value pairs. But unlike raw HBase, Lily abstracted away row key design, column family management, and versioning policies. Developers worked with POJOs, and Lily handled the mapping.

  2. The Indexing Layer (Solr)
    Every mutation in HBase triggered an asynchronous indexing event to Solr. Lily maintained tight consistency between the two systems, ensuring that search results reflected the latest data within milliseconds. This was achieved through a message queue (Kafka or RabbitMQ) and idempotent indexing.

  3. The Java API Layer
    The crown jewel was Lily’s type-safe, annotation-driven API. Developers defined their data model using plain Java classes:

@LilyRecord
public class Customer {
    @LilyId
    private String id;

    @LilyField(family = "profile")
    private String name;

    @LilyField(family = "profile")
    private int age;

    @LilyField(family = "activity", indexed = true)
    private List<String> recentSearches;

    @LilyFullText
    private String bio;
}

The @LilyRecord annotation told Lily to persist this object in HBase. @LilyField specified column families and indexing behavior. @LilyFullText triggered Solr indexing. No XML. No schema files. Just Java.

The Lily Repository: Spring Data, But for Big Data

Lily’s LilyRepository interface was modeled after Spring Data’s CrudRepository, but with big data superpowers:

public interface CustomerRepository extends LilyRepository<Customer, String> {
    List<Customer> findByName(String name);

    @Query("age:[* TO 30]")
    List<Customer> findYoungCustomers();

    @Query("bio:java AND recentSearches:hadoop")
    List<Customer> findJavaHadoopEnthusiasts();
}

Behind the scenes, Lily:
– Translated method names to HBase scans.
– Converted @Query annotations to Solr queries.
– Executed searches across sharded SolrCloud clusters.
– Returned fully hydrated POJOs.

Bruno Guedes demonstrated this in a live demo:

CustomerRepository repo = lily.getRepository(CustomerRepository.class);
repo.save(new Customer("1", "Alice", 28, Arrays.asList("java", "hadoop"), "Java dev at NGDATA"));
List<Customer> results = repo.findJavaHadoopEnthusiasts();

The entire operation—save, index, search—took under 50ms on a 3-node cluster.

Under the Hood: How Lily Orchestrated HBase and Solr

Lily’s magic was in its orchestration layer. When a save() was called:
1. The POJO was serialized to HBase Put operations.
2. The mutation was written to HBase with a version timestamp.
3. A change event was published to a message queue.
4. A Solr indexer consumed the event and updated the search index.
5. Near-real-time consistency was guaranteed via HBase’s WAL and Solr’s soft commits.

For reads:
findById → HBase Get.
findByName → HBase scan with secondary index.
@Query → Solr query with HBase post-filtering.

This dual-write, eventual consistency model was a deliberate trade-off for performance and scalability.

Schema Evolution and Versioning: The Enterprise Reality

One of Lily’s most enterprise-friendly features was schema evolution. In HBase, adding a column family requires manual admin intervention. In Lily, it was automatic:

// Version 1
@LilyField(family = "profile")
private String email;

// Version 2
@LilyField(family = "profile")
private String phone; // New field, no migration needed

Lily stored multiple versions of the same record, allowing old code to read new data and vice versa. This was critical for rolling deployments in large organizations.

Production Patterns and Anti-Patterns

Bruno Guedes shared war stories from production:
Hotspot avoidance: Never use auto-incrementing IDs. Use hashed or UUID-based keys.
Index explosion: @LilyFullText on large fields → Solr bloat. Use @LilyField(indexed = true) for structured search.
Compaction storms: Schedule major compactions during low traffic.
Zookeeper tuning: Increase tick time for large clusters.
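The hotspot-avoidance advice can be sketched as a salted row key (an illustrative helper of our own, not Lily's API): prefixing the natural id with two digest bytes spreads otherwise sequential keys across region servers.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

class RowKeys {
    // Prefix the natural id with two digest bytes so sequential ids land
    // on different HBase regions instead of hammering a single one.
    static String salted(String id) throws NoSuchAlgorithmException {
        byte[] d = MessageDigest.getInstance("MD5")
                .digest(id.getBytes(StandardCharsets.UTF_8));
        return String.format("%02x%02x-%s", d[0], d[1], id);
    }
}
```

The original id stays in the key suffix, so point lookups remain possible as long as readers apply the same salting function.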

The Lily Ecosystem in 2012

Lily shipped with:
Lily CLI for schema inspection and cluster management.
Lily Maven Plugin for deploying schemas.
Lily SolrCloud Integration with automatic sharding.
Lily Kafka Connect for streaming data ingestion.

Lily’s Legacy After 2018: Where the Ideas Live On

EDIT:
Although Lily itself was archived in 2018, its core concepts continue to thrive in modern tools.

The original HBase POJO mapping is now embodied in Spring Data Hadoop.

Lily’s Solr integration has evolved into SolrJ + OpenSearch.

The repository pattern that Lily pioneered is carried forward by Spring Data R2DBC.

Schema evolution, once a key Lily feature, is now handled by Apache Atlas.

Finally, Lily’s near-real-time search capability lives on through the Elasticsearch Percolator.

Conclusion: Big Data Doesn’t Have to Be Hard

Steven Noels closed with a powerful message:

“Big data is not about MapReduce. It’s not about Zookeeper. It’s about solving business problems at scale. Lily proved that Java developers can do that—without becoming data engineers.”

EDIT:
In 2025, as lakehouse architectures, real-time analytics, and AI-driven search dominate, Lily’s vision of big data as a first-class Java citizen remains more relevant than ever.


(long tweet) How to get the average of a Date column in MySQL?

Case

You have a column of type Date in MySQL. How to get the average value of this series?

(Don’t just run SELECT AVG(myDateColumn); it executes, but the result is meaningless!)

Fix

Use a query similar to this:

SELECT FROM_UNIXTIME( ROUND( AVG( UNIX_TIMESTAMP( myDateColumn ) ) ) )
FROM `myTable`

This converts each date to a Unix timestamp, averages those numbers, rounds the mean, and converts it back to a datetime.

[DevoxxFR2012] MongoDB and Mustache: Toward the Death of the Cache? A Comprehensive Case Study in High-Traffic, Real-Time Web Architecture

Lecturers

Mathieu Pouymerol and Pierre Baillet were the technical backbone of Fotopedia, a photo-sharing platform that, at its peak, served over five million monthly visitors using a Ruby on Rails application that had been in production for six years. Mathieu, armed with degrees from École Centrale Paris and a background in building custom data stores for dictionary publishers, brought a deep understanding of database design, indexing, and performance optimization. Pierre, also from Centrale and with experience at Cambridge, had spent nearly a decade managing infrastructure, tuning Tomcat, configuring memcached, and implementing geoDNS systems. Together, they faced the ultimate challenge: keeping a legacy Rails monolith responsive under massive, unpredictable traffic while maintaining content freshness and developer velocity.

Abstract

This article presents an exhaustively detailed expansion of Mathieu Pouymerol and Pierre Baillet’s 2012 DevoxxFR presentation, “MongoDB et Mustache, vers la mort du cache ?”, reimagined as a definitive case study in high-traffic web architecture and the evolution of caching strategies. The Fotopedia team inherited a Rails application plagued by slow ORM queries, complex cache invalidation logic, and frequent stale data. Their initial response—edge-side includes (ESI), fragment caching, and multi-layered memcached—bought time but introduced fragility and operational overhead. The breakthrough came from a radical rethinking: use MongoDB as a real-time document store and Mustache as a logic-less templating engine to assemble pages dynamically, eliminating cache for the most volatile content.

This analysis walks through every layer of their architecture: from database schema design to template composition, from CDN integration to failure mode handling. It includes performance metrics, post-mortem analyses, and lessons learned from production incidents. Updated for 2025, it maps their approach to modern tools: MongoDB 7.0 with Atlas, server-side rendering with HTMX, edge computing via Cloudflare Workers, and Spring Boot with Mustache, offering a complete playbook for building cache-minimized, real-time web applications at scale.

The Legacy Burden: A Rails Monolith Under Siege

Fotopedia’s core application was built on Ruby on Rails 2.3, a framework that, while productive for startups, began to show its age under heavy load. The database layer relied on MySQL with aggressive sharding and replication, but ActiveRecord queries were slow, and joins across shards were impractical. The presentation layer used 15–20 ERB partials per page, each with its own caching logic. The result was a cache dependency graph so complex that a single user action—liking a photo—could invalidate dozens of cache keys across multiple servers.

The team’s initial strategy was defense in depth:
Varnish at the edge with ESI for including dynamic fragments.
Memcached for fragment and row-level caching.
Custom invalidation daemons to purge stale cache entries.

But this created a house of cards. A missed invalidation led to stale comments. A cache stampede during a traffic spike brought the database to its knees. As Pierre put it, “We were not caching to improve performance. We were caching to survive.”

The Paradigm Shift: Real-Time Data with MongoDB

The turning point came when the team migrated dynamic, user-generated content—photos, comments, tags, likes—to MongoDB. Unlike MySQL, MongoDB stored data as flexible JSON-like documents, allowing embedded arrays and atomic updates:

{
  "_id": "photo_123",
  "title": "Sunset",
  "user_id": "user_456",
  "tags": ["paris", "sunset"],
  "likes": 1234,
  "comments": [
    { "user": "Alice", "text": "Gorgeous!", "timestamp": "2013-04-01T12:00:00Z" }
  ]
}

This schema eliminated joins and enabled single-document reads for most pages. Updates used atomic operators:

db.photos.updateOne(
  { _id: "photo_123" },
  { $inc: { likes: 1 }, $push: { comments: { user: "Bob", text: "Nice!" } } }
);

Indexes on user_id, tags, and timestamp ensured sub-millisecond query performance.
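The value of atomic operators is easiest to see next to a hand-rolled read-modify-write. A toy in-memory analogue in plain Java (not the MongoDB driver; ConcurrentHashMap.merge plays the role of $inc on a single document):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class AtomicLikes {
    // Likes counter per photo id; merge() is an atomic read-modify-write,
    // so concurrent likes never lose updates, much like MongoDB's $inc.
    static final Map<String, Integer> likes = new ConcurrentHashMap<>();

    static int like(String photoId) {
        return likes.merge(photoId, 1, Integer::sum);
    }

    public static void main(String[] args) {
        like("photo_123");
        like("photo_123");
        System.out.println(likes.get("photo_123")); // 2
    }
}
```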

Mustache: The Logic-Less Templating Revolution

The second pillar was Mustache, a templating engine that enforced separation of concerns by allowing no logic in templates—only iteration and conditionals:

{{#photo}}
  <h1>{{title}}</h1>
  <img src="{{url}}" alt="{{title}}" />
  <p>By {{user.name}} • {{likes}} likes</p>
  <ul class="comments">
    {{#comments}}
      <li><strong>{{user}}</strong>: {{text}}</li>
    {{/comments}}
  </ul>
{{/photo}}

Because templates contained no business logic, they could be cached indefinitely in Varnish. Only the data changed—and that came fresh from MongoDB on every request.

# Fetch the document fresh on every request and render it straight to HTML.
data = mongo.photos.find(_id: params[:id]).first
html = Mustache.render(template, data)
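For Java readers, the render step above can be sketched without any framework. This is a toy {{name}} substitution standing in for a real Mustache implementation (such as JMustache); it handles only simple variables, not sections:

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class TinyMustache {
    private static final Pattern VAR = Pattern.compile("\\{\\{(\\w+)\\}\\}");

    // Replace each {{name}} with the corresponding value from the data map.
    // Missing keys render as empty strings, as in Mustache.
    static String render(String template, Map<String, String> data) {
        Matcher m = VAR.matcher(template);
        StringBuilder out = new StringBuilder();
        while (m.find()) {
            m.appendReplacement(out,
                Matcher.quoteReplacement(data.getOrDefault(m.group(1), "")));
        }
        m.appendTail(out);
        return out.toString();
    }

    public static void main(String[] args) {
        String html = render("<h1>{{title}}</h1>", Map.of("title", "Sunset"));
        System.out.println(html); // <h1>Sunset</h1>
    }
}
```

Because the template itself never changes per request, it can sit in any long-lived cache; only the data map is fetched fresh, which is exactly the Fotopedia split.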

The Hybrid Architecture: Cache Where It Makes Sense

The final system was a hybrid of caching and real-time rendering:
Static assets (CSS, JS, images) → CDN with long TTL.
Static page fragments (headers, footers, sidebars) → Varnish ESI with 1-hour TTL.
Dynamic content (photo, comments, likes) → MongoDB + Mustache, no cache.

This reduced cache invalidation surface by 90% and average response time from 800ms to 180ms.

2025: The Evolution of Cache-Minimized Architecture

EDIT:
The principles pioneered by Fotopedia are now mainstream:
Server-side rendering with HTMX for dynamic updates.
Edge computing with Cloudflare Workers to assemble pages.
MongoDB Atlas with change streams for real-time UIs.
Spring Boot + Mustache for Java backends.

Links

PostHeaderIcon [DevoxxFR2012] .NET for the Java Developer: A Source of Inspiration? A Profound Cross-Platform Exploration of Language Design, Ecosystem Evolution, and the Future of Enterprise Programming

Lecturers

Cyrille Martraire stands as one of the most influential figures in the French software craftsmanship movement, having co-founded Arolla, a boutique consultancy that has redefined how enterprise teams approach code quality, domain-driven design, and technical excellence. With nearly two decades of experience building mission-critical financial systems at investment banks and fintech startups, Cyrille has cultivated a philosophy that places expressiveness, readability, and long-term maintainability at the heart of software development. He is the founder of the Software Craftsmanship Paris community, a regular speaker at international conferences, and a passionate advocate for learning across technological boundaries. His ability to draw meaningful insights from seemingly disparate ecosystems—such as .NET—stems from a deep curiosity about how different platforms solve similar problems, and how those solutions can inform better practices in Java.

Rui Carvalho, a veteran .NET architect and ASP.NET MVC specialist, brings a complementary perspective rooted in over fifteen years of web development across startups, agencies, and large-scale enterprise platforms. A fixture in the ALT.NET Paris community and a recurring speaker at Microsoft TechDays, Rui has witnessed the entire arc of .NET’s evolution—from the monolithic WebForms era to the open-source, cross-platform renaissance of .NET Core and beyond. His expertise lies not merely in mastering Microsoft’s tooling, but in understanding how framework design influences developer productivity, application architecture, and long-term system evolution. Together, Martraire and Carvalho form a dynamic duo capable of transcending platform tribalism to deliver a nuanced, humorous, and technically rigorous comparison that resonates deeply with developers on both sides of the Java–.NET divide.

Abstract

This article represents a comprehensive, elaborately expanded re-interpretation of Cyrille Martraire and Rui Carvalho’s landmark 2012 DevoxxFR presentation, “.NET pour le développeur Java : une source d’inspiration ?”, transformed into a definitive treatise on the parallel evolution of Java and C# and their mutual influence over nearly three decades of enterprise software development. Delivered with wit, mutual respect, and a spirit of ecumenical dialogue, the original talk challenged the audience to look beyond platform loyalty and recognize that Java and C# have been engaged in a continuous, productive exchange of ideas since their inception. From the introduction of lambda expressions in C# 3.0 (2007) to Java 8 (2014), from LINQ’s revolutionary query comprehension to Java’s Streams API, from async/await to Project Loom’s virtual threads, the presenters traced a lineage of innovation where each platform borrowed, refined, and occasionally surpassed the other.

This expanded analysis delves far beyond surface-level syntax comparisons to explore the philosophical underpinnings of language design decisions, the ecosystem implications of framework choices, and the cultural forces that shaped adoption. It examines how .NET’s bold experimentation with expression trees, dynamic types, extension methods, and Razor templating offered Java developers a vision of what was possible—and in many cases, what Java later adopted or still lacks.

EDIT
Updated for the 2025 landscape, this piece integrates the latest advancements: C# 13’s primary constructors and source generators, Java 21’s pattern matching and virtual threads, Spring Fu’s functional web framework, GraalVM’s native compilation, and the convergence of both platforms under cloud-native, polyglot architectures. Through rich code examples, architectural deep dives, performance analyses, and forward-looking speculation, this work offers not just a historical retrospective, but a roadmap for cross-platform inspiration in the age of cloud, AI, and real-time systems.

The Shared Heritage: A Tale of Two Languages in Constant Dialogue

To fully appreciate the depth of inspiration between Java and C#, one must first understand their shared origin story. Java was released in 1995 as Sun Microsystems’ answer to the complexity of C++, promising “write once, run anywhere” through the JVM. C#, announced by Microsoft in 2000, was explicitly positioned as a modern, type-safe, component-oriented language for the .NET Framework, but its syntax, garbage collection, exception handling, and metadata system bore an uncanny resemblance to Java. This was no coincidence. Anders Hejlsberg, the architect of C#, had previously designed Turbo Pascal and Delphi, but he openly acknowledged Java’s influence. As Cyrille humorously remarked during the talk, “C# didn’t just look like Java—it was Java’s younger brother who went to a different school, wore cooler clothes, and occasionally got better grades.”

This fraternal relationship manifested in a decade-long game of leapfrog. When Java 5 introduced generics in 2004, C# 2.0 responded with generics, nullable types, and anonymous methods in 2005. When C# 3.0 unveiled LINQ and lambda expressions in 2007, Java remained silent until Java 8 in 2014. When Java 7 introduced the invokedynamic bytecode in 2011 to support dynamic languages, C# 4.0 had already shipped the dynamic keyword in 2010. This back-and-forth was not mere imitation—it was a refinement cycle where each platform stress-tested ideas in production before the other adopted and improved them.

Lambda Expressions and Functional Programming: From Verbosity to Elegance

One of the most visible and impactful areas of cross-pollination was the introduction of lambda expressions and functional programming constructs. In the pre-lambda era, both Java and C# relied on verbose anonymous inner classes to implement single-method interfaces. A simple event handler in Java 6 looked like this:

button.addActionListener(new ActionListener() {
    @Override
    public void actionPerformed(ActionEvent e) {
        System.out.println("Button clicked at " + e.getWhen());
    }
});

The equivalent in C# 2.0 was only marginally better, using anonymous delegates:

button.Click += delegate(object sender, EventArgs e) {
    Console.WriteLine("Button clicked");
};

But in 2007, C# 3.0 introduced lambda expressions with a syntax so clean it felt revolutionary:

button.Click += (sender, e) => Console.WriteLine("Clicked!");

This wasn’t just syntactic sugar. It was a paradigm shift toward functional programming, enabling higher-order functions, collection processing, and deferred execution. Rui demonstrated how this simplicity extended to LINQ:

var recentOrders = orders
    .Where(o => o.Date > DateTime.Today.AddDays(-30))
    .OrderBy(o => o.Total)
    .Select(o => o.CustomerName);

Java developers watched with envy. It took seven years for Java 8 to deliver lambda expressions in 2014, but when it did, it came with a more rigorous type system based on functional interfaces and default methods:

button.addActionListener(e -> System.out.println("Clicked!"));

The Java version was arguably more type-safe and extensible, but it lacked C#’s expression-bodied members and local functions.
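The “functional interfaces and default methods” mentioned above fit in a few lines: any interface with a single abstract method is a lambda target, and default methods let it ship composition helpers. A small self-contained sketch (the Validator interface is invented for illustration):

```java
public class FunctionalDemo {
    @FunctionalInterface
    interface Validator {
        boolean test(String s);

        // Default methods let a functional interface carry behavior
        // without breaking its single-abstract-method contract.
        default Validator and(Validator other) {
            return s -> this.test(s) && other.test(s);
        }
    }

    public static void main(String[] args) {
        Validator nonEmpty = s -> !s.isEmpty();
        Validator shortEnough = s -> s.length() <= 10;
        Validator both = nonEmpty.and(shortEnough);
        System.out.println(both.test("Devoxx")); // true
        System.out.println(both.test(""));       // false
    }
}
```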

EDIT:
In 2023, Java 21 closed the gap further with pattern matching and (as a preview) unnamed variables, but C#’s concise positional records (introduced in C# 9, with primary constructors later generalized to all classes in C# 12) remain unmatched:

public record Person(string Name, int Age);

LINQ: The Query Comprehension Revolution That Java Never Fully Embraced

Perhaps the most profound inspiration from .NET—and the one Java has still not fully replicated—is LINQ (Language Integrated Query). Introduced in C# 3.0, LINQ was not merely a querying library; it was a language-level integration of query comprehension into the type system. Using a SQL-like syntax, developers could write:

var result = from p in people
             where p.Age >= 18
             orderby p.LastName
             select new { p.FirstName, p.LastName };

This syntax was compiled into method calls on IEnumerable<T>, but more importantly, it was extensible. Providers could translate LINQ expressions into SQL, XML, or in-memory operations. The secret sauce? Expression trees.

Expression<Func<Person, bool>> predicate = p => p.Age > 18;
var sql = SqlTranslator.Translate(predicate); // "SELECT * FROM People WHERE Age > 18"

Java’s Streams API in Java 8 was the closest analog:

List<Person> adults = people.stream()
    .filter(p -> p.getAge() >= 18)
    .sorted(Comparator.comparing(Person::getLastName))
    .map(p -> new PersonDto(p.getFirstName(), p.getLastName()))
    .toList();

But Streams are imperative in spirit, lack query syntax, and cannot be translated to SQL without external tools like jOOQ. Cyrille lamented: “Java gave us the pipeline, but not the language.”
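A self-contained version of the Streams pipeline above can be run as-is; Person here is a hypothetical record introduced for the example, and the comments map each stage back to its LINQ counterpart:

```java
import java.util.Comparator;
import java.util.List;

public class StreamDemo {
    // Hypothetical data type for the example; not from the original talk.
    record Person(String firstName, String lastName, int age) {}

    static List<String> adultLastNames(List<Person> people) {
        return people.stream()
            .filter(p -> p.age() >= 18)                      // LINQ: where
            .sorted(Comparator.comparing(Person::lastName))  // LINQ: orderby
            .map(Person::lastName)                           // LINQ: select
            .toList();
    }

    public static void main(String[] args) {
        List<Person> people = List.of(
            new Person("Ada", "Lovelace", 36),
            new Person("Tim", "Berners-Lee", 17),
            new Person("Alan", "Turing", 41));
        System.out.println(adultLastNames(people)); // [Lovelace, Turing]
    }
}
```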

Asynchronous Programming: async/await vs. the Java Journey

Concurrency has been another arena of inspiration. C# 5.0 introduced async/await in 2012, allowing developers to write asynchronous code that looked synchronous:

public async Task<string> FetchDataAsync()
{
    var client = new HttpClient();
    var html = await client.GetStringAsync("https://example.com");
    return Process(html);
}

The compiler transformed this into a state machine, eliminating callback hell. Java’s journey was more fragmented: Futures, CompletableFuture, Reactive Streams, and finally Project Loom’s virtual threads in Java 21:

try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
    // Each task gets its own virtual thread; blocking in get() is cheap.
    return executor.submit(() -> client.get(url)).get();
}

Virtual threads are a game-changer, but they don’t offer the syntactic elegance of await. As Rui quipped, “In C#, you write synchronous code that runs asynchronously. In Java, you write asynchronous code that hopes to run efficiently.”
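The “fragmented journey” is easiest to see with CompletableFuture, the Java 8 stage between raw callbacks and virtual threads: composition is explicit method chaining rather than an await keyword. A minimal sketch (fetch is a stand-in for a real HTTP call, not an actual network request):

```java
import java.util.concurrent.CompletableFuture;

public class AsyncDemo {
    // Stand-in for an HTTP call; real code would use HttpClient.sendAsync.
    static CompletableFuture<String> fetch(String url) {
        return CompletableFuture.supplyAsync(() -> "<html>" + url + "</html>");
    }

    static CompletableFuture<Integer> fetchAndMeasure(String url) {
        // thenApply plays the role of the code that follows C#'s await.
        return fetch(url).thenApply(String::length);
    }

    public static void main(String[] args) {
        System.out.println(fetchAndMeasure("https://example.com").join());
    }
}
```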

Web Frameworks: From WebForms to Razor and the Templating Renaissance

Rui traced .NET’s web framework evolution with particular passion. The early 2000s were dominated by ASP.NET WebForms, a drag-and-drop, event-driven model that promised rapid development but delivered ViewState bloat, postback hell, and untestable code. It was, in Rui’s words, “a productivity trap disguised as a framework.”

The community rebelled, giving rise to ALT.NET and frameworks like MonoRail. Microsoft responded with ASP.NET MVC in 2009, embracing separation of concerns, testability, and clean URLs. Then came Razor in 2010—a templating engine that felt like a revelation:

@model List<Person>
<h1>Welcome, @ViewBag.User!</h1>
<ul>
@foreach(var p in Model) {
    <li>@p.Name <em>(@p.Age)</em></li>
}
</ul>

No XML. No JSP-style scriptlets. Just C# and HTML in harmony. Java’s JSP, JSF, and even Thymeleaf felt antiquated by comparison. By 2025, Spring Boot with Thymeleaf and Micronaut Views have narrowed the gap, though Razor’s layout system and tag helpers remain superior.

The Cutting Edge in 2025: Where Java and C# Stand Today

EDIT:
C# 13 and .NET 9 continue to innovate with source generators, record structs, and minimal APIs:

var builder = WebApplication.CreateBuilder();
var app = builder.Build();
app.MapGet("/", () => "Hello World");
app.Run();

Java 21 counters with pattern matching for switch, records, and virtual threads, but lacks native metaprogramming. Projects like Spring Fu and Quarkus are pushing functional and reactive paradigms, but the expressive gap remains.
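Java’s side of that comparison, records plus pattern matching, looks like the sketch below. It uses instanceof patterns (available since Java 16) rather than pattern matching for switch, so it compiles on JDKs older than 21; the shapes are invented for illustration:

```java
public class PatternDemo {
    sealed interface Shape permits Circle, Square {}
    record Circle(double radius) implements Shape {}
    record Square(double side) implements Shape {}

    static double area(Shape s) {
        // Pattern matching for instanceof binds and narrows in one step.
        if (s instanceof Circle c) return Math.PI * c.radius() * c.radius();
        if (s instanceof Square q) return q.side() * q.side();
        throw new IllegalStateException("unreachable: Shape is sealed");
    }

    public static void main(String[] args) {
        System.out.println(area(new Square(3))); // 9.0
    }
}
```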

Conclusion: Inspiration Without Imitation

Martraire and Carvalho’s core message endures: Java and .NET are not rivals—they are collaborators in the advancement of managed languages. The inspiration flows both ways, and the future belongs to developers who can transcend platform boundaries to build better systems.

EDIT:
In 2025, as cloud-native, AI-augmented, and real-time applications dominate, the lessons from this 2012 dialogue are more relevant than ever.

Links

PostHeaderIcon Tomcat: How to deploy in root?

Case

You have a WAR to deploy on Tomcat, say jonathan.war. Usually, the application will be reached through the URL http://machine:port/jonathan.
Suppose you would like to exclude the WAR name from the address, i.e. for the application to be reached at http://machine:port/. This operation is called “deploying in root”, since the context path becomes a simple slash: '/'.

Solution

You can achieve this in two ways:

  • rename the WAR to ROOT.war, then deploy it;
  • or edit conf/server.xml and replace [xml]<Context>[/xml] with
    [xml]<Context path="" docBase="jonathan" debug="0" reloadable="true">[/xml]

PostHeaderIcon (long tweet) Undeploy issue with Tomcat on Windows

Case

I had the following issue: when I undeployed a WAR from Tomcat using the manager application, the undeploy failed. As a workaround, I had to restart Tomcat for the undeploy to be taken into account.
This issue occurred only on Windows; with the exact same WAR and the same version of Tomcat on Debian, I was able to deploy and undeploy many times.

Quick Fix

In %CATALINA_HOME%\conf\context.xml, replace:
[xml]<Context>[/xml]
with:
[xml]<Context antiJARLocking="true" antiResourceLocking="true">[/xml]
(Note that antiJARLocking was removed in Tomcat 8.5; on recent versions only antiResourceLocking applies.)