[DevoxxFR2013] Building a Complete Information System in 10 Months with Cloud: The Joe Mobile Story
Lecturers
Didier Herbault, the Chief Technical Officer of Joe Mobile, a disruptive Mobile Virtual Network Operator (MVNO) launched by SFR, brings a wealth of experience from over a decade in the telecommunications industry. With a background in web-scale architectures and a passion for applying internet-era principles to traditional telecom systems, Didier orchestrated the creation of Joe Mobile’s entire information system from scratch in an astonishing ten-month timeframe. His philosophy—“safe is risky and risky is safe”—reflects a willingness to embrace calculated risks and innovate rapidly in a highly regulated and competitive market. Under his leadership, Joe Mobile achieved a fully cloud-native stack, eliminating on-premises servers and leveraging Software-as-a-Service (SaaS) solutions for every business function, from CRM to billing.
Cyril Leclerc, at the time of the project, was Technical Director at Xebia, where he served as the lead architect and infrastructure engineer for Joe Mobile’s cloud implementation. A veteran of Java middleware and open-source monitoring, Cyril is a core committer on JMXTrans and a former contributor to Tomcat 4. His expertise in cloud automation, monitoring, and DevOps practices was instrumental in building a scalable, observable, and cost-effective platform. Since this presentation, Cyril has joined CloudBees, where he continues to advance continuous delivery and cloud-native development practices.
Abstract
In the spring of 2012, Joe Mobile set an audacious goal: to launch a fully functional MVNO in France within ten months, competing with established giants like Orange, SFR, and Bouygues Telecom. This presentation details the comprehensive, end-to-end journey of building a complete information system—spanning customer acquisition, billing, CRM, network provisioning, and analytics—entirely on cloud infrastructure, without a single physical server in the company’s offices. Didier Herbault and Cyril Leclerc provide an exhaustive walkthrough of the architectural decisions, technology selections, cultural transformations, and financial strategies that enabled this feat. From selecting Amazon Web Services (AWS) for compute and storage to integrating SaaS solutions like Lithium for community management and SurveyMonkey for customer feedback, every component was chosen for speed, scalability, and operational simplicity. The session delves into the challenges of telecom integration, real-time billing, and regulatory compliance, offering a complete case study in cloud-native disruption. Entrepreneurs, architects, and DevOps practitioners gain a detailed blueprint for launching a technology-driven business at startup speed within a legacy industry.
The MVNO Challenge: Disrupting Telecom with Cloud-Native Agility
The telecommunications industry has traditionally been defined by massive capital expenditures, long procurement cycles, and rigid, on-premises systems. Launching an MVNO — a mobile operator that leases network capacity from a host operator (in this case, SFR) — requires integrating with legacy OSS/BSS (Operations Support Systems/Business Support Systems), provisioning SIM cards, managing real-time rating and billing, and providing customer self-service portals. For a new entrant like Joe Mobile, these requirements posed a formidable barrier: building a traditional system would take years and tens of millions of euros.
Didier Herbault’s vision was to apply web-scale principles to telecom: treat infrastructure as code, leverage SaaS for non-differentiating functions, and build only what provides competitive advantage. The goal was to launch with a minimal viable product in ten months, using cloud to eliminate hardware lead times and SaaS to avoid building commodity systems. Cyril Leclerc was tasked with designing an architecture that could scale from zero to hundreds of thousands of subscribers while maintaining sub-second response times for critical operations like balance checks and plan changes.
Architecture Overview: A Fully Cloud-Native Stack
The Joe Mobile platform was built entirely on AWS, with EC2 for compute, RDS for relational data, ElastiCache for caching, S3 for storage, and CloudFront for content delivery. The core application was a Java/Spring monolith that evolved into microservices as the codebase grew. RabbitMQ handled asynchronous messaging between services, while Hazelcast provided distributed caching and session replication. The frontend was a single-page application built with Backbone.js and RequireJS, served through CloudFront for low-latency delivery.
The billing system was a custom real-time rating engine that processed CDR (Call Detail Records) from SFR’s network, applied plan rules, and updated subscriber balances in PostgreSQL. Amazon SNS/SQS orchestrated event-driven workflows, such as sending SMS notifications for low balance or plan expiration. Elasticsearch powered search and analytics, with Kibana dashboards for operational visibility. CloudWatch collected infrastructure metrics, while JMXTrans exported application metrics to Graphite for graphing and alerting.
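The core of such a rating engine, pricing a CDR against plan rules and decrementing the subscriber balance, can be sketched in plain Java. Everything below is an illustrative assumption (class names, unit prices, the floor-at-zero balance rule); the talk does not disclose Joe Mobile's actual tariff logic.

```java
// Minimal sketch of a prepaid rating step: a CDR is priced against a plan
// and the subscriber balance is decremented. All rates are hypothetical.
class Cdr {
    final String msisdn;  // subscriber number
    final String type;    // "VOICE", "SMS", or "DATA"
    final long units;     // seconds, messages, or kilobytes
    Cdr(String msisdn, String type, long units) {
        this.msisdn = msisdn;
        this.type = type;
        this.units = units;
    }
}

class RatingEngine {
    // Price per unit in euro cents (illustrative values only).
    static long pricePerUnit(String type) {
        switch (type) {
            case "VOICE": return 2;   // 2 cents per second
            case "SMS":   return 10;  // 10 cents per message
            case "DATA":  return 1;   // 1 cent per KB
            default: throw new IllegalArgumentException("unknown CDR type: " + type);
        }
    }

    // Returns the new balance after rating, floored at zero; a real engine
    // would emit a low-balance notification event at this point.
    static long rate(long balanceCents, Cdr cdr) {
        long cost = pricePerUnit(cdr.type) * cdr.units;
        return Math.max(0, balanceCents - cost);
    }
}
```

In the real system this step would sit behind the SNS/SQS event flow described above, with the balance persisted in PostgreSQL rather than passed as a parameter.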
SaaS Integration: Leveraging Best-of-Breed Solutions
Joe Mobile adopted a “buy before build” philosophy, integrating SaaS solutions for every non-core function:
- Lithium for community forums and customer support
- Zendesk for ticketing and knowledge base
- SurveyMonkey (now Momentive) for customer satisfaction surveys
- Mailchimp for email marketing
- Google Workspace for collaboration
- Xero for accounting
- Stripe for payment processing
This approach eliminated the need to develop and maintain complex systems for HR, finance, marketing, and support, allowing the engineering team to focus on the core telecom platform. Integration was achieved through REST APIs and webhooks, with Zapier used for rapid prototyping of automation workflows.
The 10-Month Timeline: From Idea to Launch
The project was executed in four phases:
- Months 1–2: Foundations. Cyril Leclerc provisioned the initial AWS environment, set up CI/CD with Jenkins, and implemented infrastructure as code with CloudFormation. The team adopted Git for version control and JIRA for issue tracking. Core services (customer portal, billing engine, and CRM) were scaffolded.
- Months 3–5: Core Development. The billing engine was built to handle real-time rating for voice, SMS, and data. The customer portal was developed with Backbone.js, allowing users to manage plans, view usage, and top up balances. Integration with SFR’s provisioning systems was achieved via SOAP APIs wrapped in Apache Camel routes.
- Months 6–8: Scaling and Hardening. Load testing with Gatling validated performance under 100,000 concurrent users. Auto-scaling groups were configured to handle traffic spikes. Chaos Monkey (inspired by Netflix) was used to test resilience. Security was hardened with IAM roles and tightened access controls.
- Months 9–10: Launch and Stabilization. Beta testing with 1,000 users identified edge cases. The marketing site went live, and the first SIM cards were shipped. Post-launch, the team operated in war room mode, resolving issues in real time.
Cultural Transformation: From Telecom to Web Speed
The most profound change was cultural. The team adopted agile practices with two-week sprints, daily standups, and retrospectives. Pair programming and code reviews ensured quality. Feature flags allowed safe rollout of new capabilities. The absence of physical servers eliminated traditional ops tasks, freeing engineers to focus on code and automation.
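A feature flag need not be more than a guarded lookup with a safe default. The sketch below is a minimal in-memory version (the flag name used in the test is hypothetical); a production system would back it with a configuration service so flags can flip without redeploying.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal in-memory feature-flag registry of the kind used for safe rollout.
class FeatureFlags {
    private final Map<String, Boolean> flags = new ConcurrentHashMap<>();

    void set(String name, boolean enabled) {
        flags.put(name, enabled);
    }

    boolean isEnabled(String name) {
        // Default off: unknown or unconfigured code paths stay dark.
        return flags.getOrDefault(name, false);
    }
}
```

The important design choice is the default: new code ships disabled, so deployment and release become separate decisions.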
Didier Herbault’s credit card became a symbol of the new culture: in the first two months, he personally funded AWS, Papertrail, and other services, bypassing bureaucratic procurement processes. This “just do it” mindset permeated the organization, from engineering to marketing.
Financial Model: OpEx over CapEx
The cloud-first approach transformed the cost structure. Instead of €10 million in upfront hardware, Joe Mobile spent €100,000 in the first year on AWS and SaaS. Reserved instances reduced EC2 costs by 40%, while spot instances handled batch processing at 90% discount. The total cost of ownership was 70% lower than a traditional setup, with the added benefit of infinite scalability.
Outcomes and Lessons Learned
Joe Mobile launched on time, with 99.99% uptime and sub-second response times for critical operations. The platform scaled to 500,000 subscribers within 18 months. Key lessons:
- Cloud eliminates barriers to entry in capital-intensive industries
- SaaS accelerates time-to-market for non-differentiating functions
- Culture eats strategy for breakfast—agile, empowered teams are essential
- Start small, iterate fast—MVP first, perfection later
Conclusion: Cloud as a Disruptive Force
Joe Mobile’s ten-month journey from concept to launch demonstrates that cloud, when combined with SaaS and agile practices, can disrupt even the most entrenched industries. Didier and Cyril’s comprehensive approach—technical, cultural, and financial—offers a complete playbook for entrepreneurs seeking to build world-class systems at startup speed.
Hashtags: #CloudNative #MVNO #SaaS #AWS #DevOps #TelecomDisruption #DidierHerbault #CyrilLeclerc
[DevoxxBE2013] Part 1: The Modern Java Web Developer
Matt Raible, a veteran web developer and founder of AppFuse, presents a comprehensive bootcamp on modern Java web development, blending frontend and backend expertise. With experience at LinkedIn and Oracle, Matt navigates the evolving landscape of HTML5, JavaScript, AngularJS, and Java, equipping developers with essential skills. His session covers frameworks like Bootstrap, tools like GitHub, and cloud platforms like Heroku, demonstrating a real-world AngularJS project with practical fixes for routing and scripting issues.
The modern Java web developer, Matt argues, thrives by mastering both client-side JavaScript and server-side Java, delivering responsive, scalable applications. His hands-on approach, drawn from client projects, showcases techniques to enhance UI and backend performance.
Frontend Mastery with AngularJS and Bootstrap
Matt dives into AngularJS, demonstrating a single-page application with ng-route for navigation. He resolves a routing issue caused by special characters, escaping URLs in app.js to ensure seamless page transitions.
Bootstrap, Matt adds, enhances UI consistency, applying responsive grids to mimic desktop app aesthetics, crucial for professional web interfaces.
Backend Integration with Java and REST
On the server side, Matt leverages Java with Jackson for JSON serialization, building RESTful APIs. He integrates these with AngularJS, ensuring smooth data flow. A demo shows a password change form, highlighting the need to relocate JavaScript to the main page for partial views.
This synergy, Matt emphasizes, unifies frontend dynamism with backend reliability.
Performance Optimization and Caching
Matt explores performance, using Wro4j for asset minification and caching to reduce load times. He demonstrates configuring page speed optimizations, ensuring fast client-side rendering.
Cloud platforms like Heroku, Matt notes, simplify deployment, scaling applications effortlessly for real-world demands.
Security and Load Testing Strategies
Security is paramount, Matt stresses, advocating input sanitization and secure REST endpoints. He introduces load testing with tools to simulate user traffic, identifying bottlenecks.
These practices, drawn from his LinkedIn experience, ensure robust, secure applications under high load.
Practical Debugging and Framework Pitfalls
Matt shares debugging insights from a client project, addressing AngularJS partials failing to execute inline scripts. By moving JavaScript to the main page and using Angular’s on API, he restores functionality.
Such real-world fixes, Matt argues, highlight the importance of understanding framework nuances for reliable development.
[DevoxxBE2012] Real World Java EE
Adam Bien, a renowned Java consultant, author, and Java Champion with decades of experience in enterprise applications, delivered an impromptu yet insightful session on practical Java EE usage. Adam, known for books like “Real World Java EE Patterns,” shared real-world insights from his freelance projects, emphasizing simplicity and effectiveness over hype.
He started by recounting his journey since 1995, favoring Java EE for its robustness in production. Adam advocated thin WAR deployments, bundling everything into a single archive for easy management, contrasting with complex EAR structures.
Discussing boundaries, Adam promoted domain-driven design, where entities form the core, with services as facades. He cautioned against unnecessary abstractions, like excessive DAOs, favoring direct JPA usage.
Adam highlighted CDI’s power for dependency injection, reducing boilerplate. He demonstrated boundary-control-entity patterns, where boundaries handle transactions, controls manage logic, and entities persist data.
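Bien's boundary-control-entity slicing can be sketched in plain Java. In a real Java EE deployment the boundary would be an @Stateless session bean owning the transaction, the control a CDI-injected collaborator, and the entity a JPA @Entity; the annotations and the EntityManager are replaced here with plain classes and a map so the sketch stands alone.

```java
import java.util.HashMap;
import java.util.Map;

// Entity: holds persistent state (a JPA @Entity in a real deployment).
class Customer {
    final long id;
    String name;
    Customer(long id, String name) { this.id = id; this.name = name; }
}

// Control: business logic with no transaction or persistence concerns.
class CustomerControl {
    boolean isValidName(String name) {
        return name != null && !name.trim().isEmpty();
    }
}

// Boundary: the facade that owns the transaction in a Java EE container.
class CustomerBoundary {
    private final Map<Long, Customer> store = new HashMap<>(); // stand-in for an EntityManager
    private final CustomerControl control = new CustomerControl();

    Customer register(long id, String name) {
        if (!control.isValidName(name)) {
            throw new IllegalArgumentException("invalid name");
        }
        Customer c = new Customer(id, name);
        store.put(id, c);
        return c;
    }

    Customer find(long id) { return store.get(id); }
}
```

The point of the pattern is the direction of dependencies: the boundary calls the control and touches entities, never the reverse.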
Deployment Strategies and Simplicity
Adam stressed deploying as WARs to containers like GlassFish, avoiding heavy setups. He shared experiences with early microservice-style architectures, built before the term existed, using REST for inter-service communication.
He critiqued over-engineering, like misusing ESBs for simple integrations, preferring lightweight approaches.
Testing and Quality Assurance
Adam advocated comprehensive testing: unit with JUnit, integration with Arquillian, and UI with Selenium/Graphene. He demonstrated embedding containers for realistic tests.
He emphasized boundary testing, simulating real scenarios without mocks where possible.
Performance and Scalability
Discussing optimization, Adam noted Java EE’s built-in clustering, but advised measuring first. He shared cases where simple configurations sufficed, avoiding premature scaling.
Community and Best Practices
Adam encouraged open-source contributions, sharing his GitHub projects. He addressed common pitfalls, like session misuse, advocating stateless designs.
In Q&A, he discussed tools like Jenkins for CI and IDEs like IntelliJ.
Adam’s talk reinforced Java EE’s viability for real-world applications through pragmatic, lean practices.
[DevoxxFR2013] The Lightning Memory-Mapped Database: A Revolutionary Approach to High-Performance Key-Value Storage
Lecturer
Howard Chu stands as one of the most influential figures in open-source systems programming, with a career that spans decades of foundational contributions to the software ecosystem. As the founder and Chief Technology Officer of Symas Corporation, he has dedicated himself to building robust, high-performance solutions for enterprise and embedded environments. His involvement with the OpenLDAP Project began in 1999, and by 2007 he had assumed the role of Chief Architect, guiding the project through significant scalability improvements that power directory services for millions of users worldwide. Chu’s earlier work in the 1980s on GNU tools and his invention of parallel make — a feature that enables concurrent compilation of source files and is now ubiquitous in build systems like GNU Make — demonstrates his deep understanding of system-level optimization and concurrency. The creation of the Lightning Memory-Mapped Database (LMDB) emerged directly from the practical challenges faced in OpenLDAP, where existing storage backends like Berkeley DB introduced unacceptable overhead in data copying, deadlock-prone locking, and maintenance-intensive compaction. Chu’s design philosophy emphasizes simplicity, zero-copy operations, and alignment with modern hardware capabilities, resulting in a database library that not only outperforms its predecessors by orders of magnitude but also fits entirely within a typical CPU’s L1 cache at just 32KB of object code. His ongoing work continues to influence a broad range of projects, from authentication systems to mobile applications, cementing his legacy as a pioneer in efficient, reliable data management.
Abstract
The Lightning Memory-Mapped Database (LMDB) represents a paradigm shift in embedded key-value storage, engineered by Howard Chu to address the critical performance bottlenecks encountered in the OpenLDAP Project when using Berkeley DB. This presentation provides an exhaustive examination of LMDB’s design principles, architectural innovations, and operational characteristics, demonstrating why it deserves the moniker “lightning.” Chu begins with a detailed analysis of Berkeley DB’s shortcomings — including data copying between kernel and user space, lock-based concurrency leading to deadlocks, and periodic compaction requirements — and contrasts these with LMDB’s solutions: direct memory mapping via POSIX mmap() for zero-copy access, append-only writes for instantaneous crash recovery, and multi-version concurrency control (MVCC) for lock-free, linearly scaling reads. He presents comprehensive benchmarks showing read throughput scaling perfectly with CPU cores, write performance exceeding SQLite by a factor of twenty, and a library footprint so compact that it executes entirely within L1 cache. The session includes an in-depth API walkthrough, transactional semantics, support for sorted duplicates, and real-world integrations with OpenLDAP, Cyrus SASL, Heimdal Kerberos, SQLite, and OpenDKIM. Attendees gain a complete understanding of how LMDB achieves unprecedented efficiency, simplicity, and reliability, making it the ideal choice for performance-critical applications ranging from embedded devices to high-throughput enterprise systems.
The Imperative for Change: Berkeley DB’s Limitations in High-Performance Directory Services
The development of LMDB was not an academic exercise but a direct response to the real-world constraints imposed by Berkeley DB in the OpenLDAP environment. OpenLDAP, as a mission-critical directory service, demands sub-millisecond response times for millions of authentication and authorization queries daily. Berkeley DB, while robust, introduced several fundamental inefficiencies that became unacceptable under such loads.
The most significant issue was data copying overhead. Berkeley DB maintained its own page cache in user space, requiring data to be copied from kernel buffers to this cache and then again to the application buffer — a process that violated the zero-copy principle essential for minimizing latency in I/O-bound operations. This double-copy penalty became particularly egregious with modern solid-state drives and multi-core processors, where memory bandwidth is often the primary bottleneck.
Another critical flaw was lock-based concurrency. Berkeley DB’s fine-grained locking mechanism, while theoretically sound, frequently resulted in deadlocks under high contention, especially in multi-threaded LDAP servers handling concurrent modifications. The overhead of lock management and deadlock detection negated much of the benefit of parallel processing.
Finally, compaction and maintenance represented an operational burden. Berkeley DB required periodic compaction to reclaim space from deleted records, a process that could lock the database for minutes or hours in large installations, rendering the system unavailable during peak usage periods.
These limitations collectively threatened OpenLDAP’s ability to scale with growing enterprise demands, prompting Chu to design a completely new storage backend from first principles.
Architectural Foundations: Memory Mapping, Append-Only Writes, and Flattened B+Trees
LMDB’s architecture is built on three core innovations that work in concert to eliminate the aforementioned bottlenecks.
The first and most fundamental is direct memory mapping using POSIX mmap(). Rather than maintaining a separate cache, LMDB maps the entire database file directly into the process’s virtual address space. This allows data to be accessed via pointers with zero copying — the operating system’s virtual memory manager handles paging transparently. This approach leverages decades of OS optimization for memory management while eliminating the complexity and overhead of a user-space cache.
The second innovation is append-only write semantics. When a transaction modifies data, LMDB does not update pages in place. Instead, it appends new versions of modified pages to the end of the file and updates the root pointer atomically, flushing with msync(). This design enables instantaneous crash recovery — in the event of a system failure, the previous root pointer remains valid, and no log replay or checkpoint recovery is required. The append-only model also naturally supports Multi-Version Concurrency Control (MVCC), where readers access a consistent snapshot of the database without acquiring locks, while writers operate on private copies of pages.
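The copy-on-write commit just described can be modeled in a few lines. This is a toy illustration only (a HashMap in place of B+tree pages, a hypothetical class name): writers build a new version and publish it with one atomic root swap, while readers keep whatever snapshot they already hold and are never blocked.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicReference;

// Toy model of LMDB's copy-on-write MVCC: the "root pointer" is one atomic
// reference, and a commit is simply swapping it to a new immutable version.
class MvccStore {
    private final AtomicReference<Map<String, String>> root =
            new AtomicReference<>(new HashMap<>());

    // Lock-free read view: whatever root the reader sees stays consistent.
    Map<String, String> snapshot() {
        return root.get();
    }

    // Single writer assumed, as in LMDB.
    void put(String key, String value) {
        Map<String, String> next = new HashMap<>(root.get()); // copy-on-write
        next.put(key, value);
        root.set(next); // atomic root swap stands in for the commit
    }
}
```

A reader holding an old snapshot continues to see the pre-commit state, which is precisely why LMDB readers need no locks.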
The third architectural choice is a flattened B+tree structure. Traditional B+trees maintain multiple levels of internal nodes, each requiring additional I/O to traverse. LMDB stores all data at the leaf level, with internal nodes containing only keys and child pointers, which maximizes fanout, keeps the tree shallow, and minimizes the number of page fetches required for lookup operations. Keys within pages are maintained in sorted order, enabling efficient range scans and supporting sorted duplicates for multi-valued attributes common in directory schemas.
API Design: Simplicity and Power in Harmony
Despite its sophisticated internals, LMDB’s API is remarkably concise and intuitive, reflecting Chu’s philosophy that complexity should be encapsulated, not exposed. The core operations fit within a handful of functions:
```c
#include <lmdb.h>

MDB_env *env;
mdb_env_create(&env);
mdb_env_set_mapsize(env, 10485760);   /* 10 MB map size */
mdb_env_set_maxdbs(env, 4);           /* required before opening named databases */
mdb_env_open(env, "./mydb", MDB_NOSUBDIR, 0664);

MDB_txn *txn;
mdb_txn_begin(env, NULL, 0, &txn);

MDB_dbi dbi;
mdb_dbi_open(txn, "mytable", MDB_CREATE, &dbi);

MDB_val key  = {5, "hello"};          /* mv_size, mv_data */
MDB_val data = {5, "world"};
mdb_put(txn, dbi, &key, &data, 0);

mdb_txn_commit(txn);
mdb_env_close(env);                   /* each call returns 0 on success; checks omitted for brevity */
```
For databases supporting duplicate values:
```c
mdb_dbi_open(txn, "multival", MDB_CREATE | MDB_DUPSORT, &dbi);
mdb_put(txn, dbi, &key, &data1, 0);             /* duplicates are kept in sorted order automatically */
mdb_put(txn, dbi, &key, &data2, MDB_APPENDDUP); /* optimization: caller promises data2 sorts after data1 */
```
The API supports full ACID transactions with nested transactions, cursor-based iteration, and range queries. Error handling is straightforward, with return codes indicating success or specific failure conditions.
Performance Characteristics: Linear Scaling and Unparalleled Efficiency
LMDB’s performance profile is nothing short of revolutionary, particularly in read-heavy workloads. Benchmarks conducted by Chu demonstrate:
- Read scaling: Perfect linear scaling with CPU cores, achieving over 1.5 million operations per second on an 8-core system. This is possible because readers never contend for locks and operate on consistent snapshots.
- Write performance: Approximately 100,000 operations per second, compared to Berkeley DB’s 5,000 and SQLite’s similar range — a 20x improvement.
- Memory efficiency: The shared memory mapping means multiple processes accessing the same database share physical RAM, dramatically reducing per-process memory footprint.
- Cache residency: At 32KB of object code, the entire library fits in L1 cache, eliminating instruction cache misses during operation.
These metrics translate directly to real-world gains. OpenLDAP with LMDB handles 10 times more queries per second than with Berkeley DB, while SQLite gains a 2x speedup when using LMDB as a backend.
Operational Excellence: Zero Maintenance and Instant Recovery
LMDB eliminates the operational overhead that plagues traditional databases. There is no compaction, vacuuming, or index rebuilding required. The database file grows only with actual data, and deleted records are reclaimed automatically during transaction commits. The map size, specified at environment creation, can be increased dynamically without restarting the application.
Crash recovery is instantaneous — the last committed root pointer is always valid, and no transaction log replay is needed. This makes LMDB ideal for embedded systems and mobile devices where reliability and quick startup are paramount.
Concurrency Model: One Writer, Unlimited Readers
LMDB enforces a strict concurrency model: one writer at a time, unlimited concurrent readers. This design choice, while seemingly restrictive, actually improves performance. Chu’s testing revealed that even with Berkeley DB’s multi-writer support, placing a global lock around the database increased write throughput. The overhead of managing multiple concurrent writers — deadlock detection, lock escalation, and cache invalidation — often outweighs the benefits.
For applications requiring multiple writers, separate LMDB environments can be used, or higher-level coordination (e.g., via a message queue) can serialize write access. An experimental patch exists to allow concurrent writes to multiple databases within a single environment, but the single-writer model remains the recommended approach for maximum performance.
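One simple way to implement that higher-level coordination in a JVM application is to funnel all writes through a single thread. The sketch below is hypothetical (a counter stands in for an LMDB write transaction), but the pattern of serializing writers while leaving readers unblocked matches the single-writer model LMDB's design rewards.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Funnels all writes through one thread, so application code never contends
// for the write lock. The counter stands in for an LMDB environment.
class SerializedWriter implements AutoCloseable {
    private final ExecutorService writer = Executors.newSingleThreadExecutor();
    private long committed = 0; // only ever touched on the writer thread

    // Blocks until the writer thread has applied the change, mimicking a
    // synchronous commit; returns the post-commit value.
    long write(long delta) {
        try {
            return writer.submit(() -> {
                committed += delta; // stands in for begin-txn / put / commit
                return committed;
            }).get();
        } catch (Exception e) {
            throw new IllegalStateException("write failed", e);
        }
    }

    @Override
    public void close() {
        writer.shutdown();
    }
}
```

Callers can submit from any number of threads; the executor's queue provides the serialization that LMDB's write mutex would otherwise enforce under contention.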
Ecosystem Integrations: Powering Critical Infrastructure
LMDB’s versatility is evident in its adoption across diverse projects:
- OpenLDAP: The primary motivation, enabling directory servers to handle millions of entries with sub-millisecond latency
- Cyrus SASL: Efficient storage of authentication credentials
- Heimdal Kerberos: High-throughput ticket management in distributed authentication
- SQLite: As a backend, providing embedded SQL with LMDB’s speed and reliability
- OpenDKIM: Accelerating domain key lookups for email authentication
These integrations demonstrate LMDB’s ability to serve as a drop-in replacement for slower, more complex storage engines.
Future Directions: Replication and Distributed Systems
While LMDB focuses on local storage, Chu envisions its use as a high-performance backend for distributed NoSQL systems like Riak and HyperDex, which provide native replication. This separation of concerns allows LMDB to excel at what it does best — ultra-fast, reliable local access — while leveraging other systems for network coordination.
The library’s compact size and zero-dependency design make it particularly attractive for edge computing, IoT devices, and mobile applications, where resource constraints are severe.
Conclusion: Redefining the Possible in Database Design
The Lightning Memory-Mapped Database represents a triumph of focused engineering over feature bloat. By ruthlessly optimizing for the common case — read-heavy workloads with occasional writes — and leveraging modern OS capabilities like mmap(), Howard Chu created a storage engine that is simultaneously faster, simpler, and more reliable than its predecessors. LMDB proves that sometimes the most revolutionary advances come not from adding features, but from removing complexity. For any application where performance, reliability, and simplicity matter, LMDB is not just an option — it is the new standard.
[DevoxxFR2013] Developers: Prima Donnas of the 21st Century? — A Provocative Reflection on Craft, Value, and Responsibility
Lecturer
Hadi Hariri stands at the intersection of technical depth and human insight as a developer, speaker, podcaster, and Technical Evangelist at JetBrains. For over a decade, he has traversed the global conference circuit, challenging audiences to confront uncomfortable truths about their profession. A published author and frequent contributor to developer publications, Hadi brings a rare blend of architectural expertise and communication clarity. Based in Spain with his wife and three sons, he leads the .NET Malaga User Group and holds prestigious titles including ASP.NET MVP and Insider. Yet beneath the credentials lies a relentless advocate for software as a human endeavor — not a technological one.
Abstract
This is not a technical talk. There will be no code, no frameworks, no live demos. Instead, Hadi Hariri delivers a searing, unfiltered indictment of the modern developer psyche. We proclaim ourselves misunderstood geniuses, central to business success yet perpetually underappreciated. We demand the latest tools, resent managerial oversight, and cloak personal ambition in the language of craftsmanship. But what if the real problem is not “them” — it’s us?
Through sharp wit, brutal honesty, and relentless logic, Hadi dismantles the myths we tell ourselves: that communication is someone else’s job, that innovation resides in syntax, that our discomfort with business priorities justifies disengagement. This session is a mirror — polished, unforgiving, and essential. Leave your ego at the door, or stay seated and miss the point.
The Myth of the Misunderstood Genius
We gather in echo chambers — conferences, forums, internal chat channels — to commiserate about how management fails to grasp our brilliance. We lament that stakeholders cannot appreciate the elegance of our dependency injection, the foresight of our microservices, the purity of our functional paradigm. We position ourselves as the unsung heroes of the digital age, laboring in obscurity while others reap the rewards.
Yet when pressed, we retreat behind JIRA tickets, estimation buffers, and technical debt backlogs. We argue passionately about tabs versus spaces, spend days evaluating build tools, and rewrite perfectly functional systems because the new framework promises salvation. We have mistaken activity for impact, novelty for value, and personal preference for professional necessity.
Communication: The Silent Killer of Influence
The single greatest failure of the developer community is not technical — it is communicative. We speak in acronyms and abstractions: DI, IoC, CQRS, DDD. We present architecture diagrams as if they were self-evident. We say “it can’t be done” when we mean “I haven’t considered the trade-offs.” We fail to ask “why” because we assume the answer is beneath us.
Consider a simple feature request: “The user should be able to reset their password.” A typical response might be: “We’ll need a new microservice, a message queue, and a Redis cache for rate limiting.” The business hears cost, delay, and complexity. What they needed was: “We can implement this securely in two days using the existing authentication flow, with an optional enhancement for audit logging if compliance requires it.”
The difference is not technical sophistication — it is empathy, clarity, and alignment. Until we learn to speak the language of outcomes rather than implementations, we will remain marginalized.
The Silver Bullet Delusion
Every year brings a new savior: a framework that will eliminate boilerplate, a methodology that will banish chaos, a cloud service that will scale infinitely. We chase these mirages with religious fervor, abandoning yesterday’s solution before it has proven its worth. We rewrite backend systems in Node.js, then Go, then Rust — not because the business demanded it, but because we read a blog post.
This is not innovation. It is distraction. It is the technical equivalent of rearranging deck chairs on the Titanic. The problems that truly matter — unclear requirements, legacy constraints, human error, organizational inertia — are immune to syntax. No process can compensate for poor judgment, and no tool can replace clear thinking.
Value Over Vanity: Redefining Success
We measure ourselves by metrics that feel good but deliver nothing: lines of code written, test coverage percentages, build times in milliseconds. We celebrate the deployment of a new caching layer while users wait longer for search results. We optimize the developer experience at the expense of the user experience.
True value resides in outcomes: a feature that increases revenue, a bug fix that prevents customer churn, a performance improvement that saves server costs. These are not glamorous. They do not trend on Hacker News. But they are the reason our profession exists.
Ask yourself with every commit: Does this make someone’s life easier? Does it solve a real problem? If the answer is no, you are not innovating — you are indulging.
The Privilege We Refuse to Acknowledge
Most professions are defined by repetition. The accountant reconciles ledgers. The lawyer drafts contracts. The mechanic replaces brakes. Day after day, the same patterns, the same outcomes, the same constraints.
We, by contrast, are paid to solve novel problems. We are challenged to learn continuously, to adapt to shifting requirements, to create systems that impact millions. We work in air-conditioned offices, collaborate with brilliant minds, and enjoy flexibility that others can only dream of. We are not underpaid or underappreciated — we are extraordinarily privileged.
And yet we complain. We demand ping-pong tables and unlimited vacation while nurses work double shifts, teachers buy school supplies out of pocket, and delivery drivers navigate traffic in the rain. Our discomfort is not oppression — it is entitlement.
Innovation as Human Impact
Innovation is not a technology. It is not a framework, a language, or a cloud provider. Innovation is the act of making someone’s life better. It is the medical system that detects cancer earlier. It is the banking app that prevents fraud. It is the e-commerce platform that helps a small business reach new customers.
Even in enterprise software — often derided as mundane — we have the power to reduce frustration, automate drudgery, and free human attention for higher purposes. Every line of code is an opportunity to serve.
A Call to Maturity
The prima donnas of the 21st century are not the executives demanding impossible deadlines. They are not the product managers changing requirements. They are us — the developers who believe our discomfort entitles us to disengagement, who confuse technical preference with professional obligation, who prioritize our learning over the user’s needs.
It is time to grow up. To communicate clearly. To focus on outcomes. To recognize our privilege and wield it responsibly. The world does not owe us appreciation — it owes us the opportunity to make a difference. Let us stop wasting it.
Links
[DevoxxBE2012] Home Automation for Geeks
Thomas Eichstädt-Engelen and Kai Kreuzer, both prominent figures in the open-source home automation scene, presented an engaging exploration of openHAB. Thomas, a senior consultant at innoQ with expertise in Eclipse technologies and OSGi, teamed up with Kai, a software architect at Deutsche Telekom specializing in IoT and smart homes, to demonstrate how openHAB transcends basic home control systems. Their session highlighted the project’s capabilities for geeks, running on affordable devices like the Raspberry Pi while offering advanced features such as presence simulation, sensor data visualization, and integration with calendars.
They began by challenging common perceptions of home automation, often limited to remote light switching or shutter control via smartphones. Kai and Thomas emphasized openHAB’s open-source ethos, allowing extensive customization beyond commercial offerings. The framework’s modular architecture, built on OSGi, enables easy extension to connect with diverse protocols and devices.
A live demo showcased openHAB’s runtime on embedded hardware, illustrating rule-based automation. For instance, they configured scenarios where motion sensors trigger lights or simulate occupancy during absences. Integration with Google Calendar for irrigation scheduling demonstrated practical, intelligent applications.
Thomas and Kai stressed the project’s appeal to Java and OSGi enthusiasts, featuring an Xbase-derived scripting language for defining complex logic. This allows developers to craft rules reacting to events like temperature changes or user inputs.
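As an illustration of the scripting language they described (the rule and item names here are invented for the example, not taken from the session), an openHAB rule reacting to a motion sensor might look like this:

```
rule "Motion-activated light"
when
    Item Motion_Hallway changed to ON
then
    Light_Hallway.sendCommand(ON)
end
```

The `when` block declares the triggering event, and the `then` block holds Xbase-derived script code, which keeps simple automations readable while still allowing arbitrary logic.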
Core Concepts and Architecture
Kai outlined openHAB’s structure: a core runtime managing bindings to hardware protocols (e.g., Z-Wave, KNX), persistence services for data storage, and user interfaces. Bindings abstract device interactions, making the system protocol-agnostic. Persistence handles logging sensor data to databases like MySQL or InfluxDB for historical analysis.
Thomas highlighted the OSGi foundation, where bundles dynamically add functionality. This modularity supports community-contributed extensions, fostering a vibrant ecosystem.
Advanced Automation and Integration
The duo delved into rule engines, where scripts automate responses. Examples included voice commands via integrations or mobile apps notifying users of anomalies. They showcased charts displaying energy consumption or environmental metrics, aiding in optimization.
Integration with external services, like weather APIs for proactive heating adjustments, illustrated openHAB’s extensibility.
User Interfaces and Accessibility
Kai demonstrated multiple UIs: web-based dashboards, mobile apps, and even voice assistants. The sitemap concept organizes controls intuitively, while HABPanel offers customizable widgets.
Thomas addressed security, recommending VPNs for remote access and encrypted communications.
Community and Future Developments
They noted the growing community, with over 500 installations and active contributors. Future plans include simplified binding creation guides, archetypes for new developers, and enhanced UIs like MGWT.
In Q&A, they discussed hardware support and integration challenges, encouraging participation.
Thomas and Kai’s presentation positioned openHAB as a powerful, developer-friendly platform for innovative home automation, blending Java prowess with real-world utility.
Links:
[DevoxxFR2013] Returns all active nodes with response times
Targeted actions follow:
mco service restart service=httpd -F osfamily=RedHat
This restarts Apache only on RedHat-based systems—in parallel across thousands of nodes. Filters support complex queries:
mco find -F country=FR -F environment=prod
MCollective plugins extend functionality: package installation, file deployment, or custom scripts. Security relies on SSL certificates and message signing, preventing unauthorized commands.
Integrating Puppet and MCollective: A Synergistic Workflow
Pelisse combines both tools for full lifecycle management. Puppet bootstraps nodes—installing the MCollective agent during initial provisioning. Once enrolled, MCollective triggers Puppet runs on demand:
mco puppet runonce -I /web\d+.prod.fr/
This forces configuration convergence across matching web servers. For dependency-aware deployments, MCollective sequences actions:
1. Install database backend
2. Validate connectivity (via facts)
3. Deploy application server
4. Start services
Pelisse shares a real-world example: upgrading JBoss clusters. MCollective drains traffic from nodes, Puppet applies the new WAR, then MCollective re-enables load balancing—all orchestrated from a single command.
Tooling Ecosystem: Foreman, Hiera, and Version Control
Foreman provides a web dashboard for Puppet—visualizing reports, managing node groups, and scheduling runs. It integrates with LDAP for access control and supports ENC (External Node Classifier) scripts to assign classes dynamically.
Hiera separates configuration data from logic, using YAML or JSON backends:
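As a sketch of the idea (file paths and keys here are illustrative, not from the talk), a Hiera 1-style hierarchy plus a per-environment data file might look like:

```yaml
# hiera.yaml (illustrative): lookup order, most specific source wins
:backends:
  - yaml
:hierarchy:
  - "node/%{::fqdn}"
  - "environment/%{::environment}"
  - common

# environment/prod.yaml (illustrative data consumed by Puppet manifests)
# ntp::servers:
#   - ntp1.prod.example.com
#   - ntp2.prod.example.com
```

Manifests then look up `ntp::servers` without hard-coding values, so the same Puppet code serves every environment and only the data varies.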
[DevoxxBE2013] MongoDB for JPA Developers
Justin Lee, a seasoned Java developer and senior software engineer at Squarespace, guides Java EE developers through the transition to MongoDB, a leading NoSQL database. With nearly two decades of experience, including contributions to GlassFish’s WebSocket implementation and the JSR 356 expert group, Justin illuminates MongoDB’s paradigm shift from relational JPA to document-based storage. His session introduces MongoDB’s structure, explores data mapping with the Java driver and Morphia, and demonstrates adapting a JPA application to MongoDB’s flexible model.
MongoDB’s schemaless design challenges traditional JPA conventions, offering dynamic data interactions. Justin addresses performance, security, and integration, debunking myths about data loss and injection risks, making MongoDB accessible for Java developers seeking scalable, modern solutions.
Understanding MongoDB’s Document Model
Justin introduces MongoDB’s core concept: documents stored as JSON-like BSON objects, replacing JPA’s rigid tables. He demonstrates collections, where documents vary in structure, offering flexibility over fixed schemas.
This approach, Justin explains, suits dynamic applications, allowing developers to evolve data models without migrations.
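For instance (hypothetical documents, not taken from the session), two documents in the same collection can carry entirely different fields, with no migration required when a new attribute appears:

```json
{ "_id": 1, "name": "Alice", "email": "alice@example.com" }
{ "_id": 2, "name": "Bob", "phones": ["+32 2 555 01 10"], "loyalty": { "tier": "gold" } }
```

In a JPA-backed relational schema, adding `loyalty` would mean an `ALTER TABLE` and a migration; here the second document simply includes it.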
Mapping JPA to MongoDB with Morphia
Using Morphia, Justin adapts a JPA application, mapping entities to documents. He shows annotating Java classes to define collections, preserving object-oriented principles. A live example converts a JPA entity to a MongoDB document, maintaining relationships via references.
Morphia, Justin notes, simplifies integration, bridging JPA’s structured queries with MongoDB’s fluidity.
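To make the entity-to-document mapping concrete without pulling in Morphia itself, here is a minimal hand-rolled sketch of what the library automates via annotations such as `@Entity` and `@Id` (the `Customer` type and field names are invented for the example):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hand-rolled entity-to-document sketch: Morphia derives this kind of
// flattening automatically from annotated classes.
public class MappingSketch {
    static class Customer {
        final String id;
        final String name;
        Customer(String id, String name) { this.id = id; this.name = name; }
    }

    // Flatten the entity into the key/value shape of a BSON document.
    static Map<String, Object> toDocument(Customer c) {
        Map<String, Object> doc = new LinkedHashMap<>();
        doc.put("_id", c.id);
        doc.put("name", c.name);
        return doc;
    }

    public static void main(String[] args) {
        Map<String, Object> doc = toDocument(new Customer("c-1", "Alice"));
        System.out.println(doc); // {_id=c-1, name=Alice}
    }
}
```

The `_id` field plays the role of JPA’s `@Id`: MongoDB indexes it and enforces uniqueness per collection.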
Data Interaction and Performance Tuning
Justin explores MongoDB’s query engine, demonstrating CRUD operations via the Java driver. He highlights performance trade-offs: write concerns adjust speed versus durability. A demo shows fast writes with minimal safety, scaling to secure, slower operations.
Justin assures the audience that no data-loss bugs have been reported, bolstering confidence in MongoDB’s reliability for enterprise use.
Security Considerations and Best Practices
Addressing security, Justin evaluates injection risks. MongoDB’s query engine resists SQL-like attacks, but he cautions against $where clauses executing JavaScript, which could expose vulnerabilities if misused.
Best practices include sanitizing inputs and leveraging Morphia’s type-safe queries, ensuring robust, secure applications.
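To illustrate the contrast (a sketch with invented names, not code from the session): concatenating raw user input into a `$where` clause turns data into executable query logic, whereas a structured equality filter can only ever match a value.

```java
import java.util.Map;

// Sketch of why string-built $where clauses are risky: attacker-controlled
// input becomes part of server-side JavaScript and can change the query's
// meaning, while a structured filter treats the same input as plain data.
public class QuerySketch {
    static String unsafeWhere(String userInput) {
        // Input is spliced directly into an executable expression.
        return "{ \"$where\": \"this.name == '" + userInput + "'\" }";
    }

    static Map<String, Object> safeFilter(String userInput) {
        // Equality filter: the input can only match a value, never run code.
        return Map.of("name", userInput);
    }

    public static void main(String[] args) {
        String evil = "x' || '1'=='1";
        System.out.println(unsafeWhere(evil)); // condition is now always true
        System.out.println(safeFilter(evil));  // harmless literal value
    }
}
```

This is the same lesson as SQL parameterization: build queries from structure, not from strings.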
Links:
[DevoxxFR2013] Distributed DDD, CQRS, and Event Sourcing – Part 1/3: Time as a Business Core
Lecturer
Jérémie Chassaing is an architect at Siriona, focusing on scalable systems for hotel channel management. Author of thinkbeforecoding.com, a blog on Domain-Driven Design, CQRS, and Event Sourcing, he founded Hypnotizer (1999) for interactive video and BBCG (2004) for P2P photo sharing. His work emphasizes time-centric modeling in complex domains.
Abstract
Jérémie Chassaing posits time as central to business logic, advocating Event Sourcing to capture temporal dynamics in Domain-Driven Design. He integrates Distributed DDD, CQRS, and Event Sourcing to tackle scalability, concurrency, and complexity. Through examples like order management, Chassaing analyzes event streams over relational models, demonstrating eventual consistency and projection patterns. The first part establishes foundational shifts from CRUD to event-driven architectures, setting the stage for distributed implementations.
Time’s Primacy in Business Domains
Chassaing asserts time underpins business: reacting to events, analyzing history, forecasting futures. Traditional CRUD ignores temporality, leading to lost context. Event Sourcing records immutable facts—e.g., OrderPlaced, ItemAdded—enabling full reconstruction.
This contrasts relational databases’ mutable state, where updates erase history. Events form audit logs, facilitating debugging and compliance.
Domain-Driven Design Foundations: Aggregates and Bounded Contexts
DDD models domains via aggregates—consistent units like Order with line items. Bounded contexts delimit scopes, preventing model pollution.
Distributed DDD extends this to microservices, each owning a context. CQRS separates commands (writes) from queries (reads), enabling independent scaling.
CQRS Mechanics: Commands, Events, and Projections
Commands mutate state, emitting events. Handlers project events to read models:
import java.util.UUID

case class OrderPlaced(orderId: UUID, customer: String)
case class ItemAdded(orderId: UUID, item: String, qty: Int)

// Command handler: validates, then emits an immutable fact
def handle(command: AddItem): Unit = {
  // Validate against current aggregate state before emitting
  emit(ItemAdded(command.orderId, command.item, command.qty))
}

// Projection: folds the event into a denormalized read model
def project(event: ItemAdded): Unit = {
  updateReadModel(event)
}
Projections denormalize for query efficiency, accepting eventual consistency.
Event Sourcing Advantages: Auditability and Scalability
Events form immutable logs, replayable for state recovery or new projections. This decouples reads/writes, allowing specialized stores—SQL for reporting, NoSQL for search.
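Replay can be sketched in a few lines (a minimal illustration with invented event types, not Chassaing’s code): current state is simply a fold over the immutable log, so any new projection can be built by replaying from the start.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Event-replay sketch: rebuild an order's item counts by folding its log.
public class ReplaySketch {
    interface Event {}
    record OrderPlaced(String orderId, String customer) implements Event {}
    record ItemAdded(String orderId, String item, int qty) implements Event {}

    static Map<String, Integer> rebuild(List<Event> log) {
        Map<String, Integer> items = new LinkedHashMap<>();
        for (Event e : log) {
            if (e instanceof ItemAdded a) {
                items.merge(a.item(), a.qty(), Integer::sum); // replay each fact
            }
        }
        return items;
    }

    public static void main(String[] args) {
        List<Event> log = List.of(
            new OrderPlaced("o-1", "alice"),
            new ItemAdded("o-1", "book", 1),
            new ItemAdded("o-1", "book", 2));
        System.out.println(rebuild(log)); // {book=3}
    }
}
```

Because the log is never mutated, the same replay that recovers state after a crash also bootstraps a brand-new read model.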
Chassaing addresses concurrency via optimistic locking on aggregate versions. Distributed events use pub/sub (Kafka) for loose coupling.
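The optimistic-locking check can be reduced to a version comparison (an illustrative sketch, not the talk’s implementation): a writer states which aggregate version it observed, and the append is rejected if another writer got there first.

```java
// Optimistic-concurrency sketch: a write succeeds only if the caller saw
// the aggregate's current version; otherwise it must reload and retry.
public class VersionCheckSketch {
    static final class Aggregate {
        long version = 0;

        // Apply a change only when expectedVersion matches the stored one.
        synchronized boolean tryAppend(long expectedVersion) {
            if (expectedVersion != version) return false; // concurrent writer won
            version++;
            return true;
        }
    }

    public static void main(String[] args) {
        Aggregate order = new Aggregate();
        boolean first = order.tryAppend(0); // succeeds, version -> 1
        boolean stale = order.tryAppend(0); // fails: caller's view is stale
        System.out.println(first + " " + stale); // true false
    }
}
```

In an event store this check typically happens on append; the losing writer reloads the stream and re-evaluates its command.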
Challenges and Patterns: Idempotency and Saga Management
Duplicates require idempotent handlers—e.g., check event IDs. Sagas coordinate cross-aggregate workflows, reacting to events and issuing commands.
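Checking event IDs can be as simple as a seen-set (an illustrative sketch with invented names): a redelivered event is recognized and ignored, so applying it twice changes nothing.

```java
import java.util.HashSet;
import java.util.Set;

// Idempotent-handler sketch: remember processed event IDs so that a
// redelivered event has no effect the second time.
public class IdempotencySketch {
    private final Set<String> seen = new HashSet<>();
    int total = 0;

    // Returns true only when the event is applied for the first time.
    boolean handle(String eventId, int qty) {
        if (!seen.add(eventId)) return false; // duplicate: ignore
        total += qty;
        return true;
    }

    public static void main(String[] args) {
        IdempotencySketch h = new IdempotencySketch();
        h.handle("evt-1", 2);
        h.handle("evt-1", 2); // redelivery: no effect
        System.out.println(h.total); // 2
    }
}
```

In production the seen-set must itself be durable and updated atomically with the handler’s effect, otherwise a crash between the two reopens the duplicate window.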
Chassaing warns of “lasagna architectures”—layered complexity—and advocates event-driven simplicity over tiered monoliths.
Implications for Resilient Systems: Embracing Eventual Consistency
Event Sourcing yields antifragile designs: failures replay from logs. Distributed CQRS scales horizontally, handling “winter is coming” loads.
Chassaing urges rethinking time in models, shifting from mutable entities to immutable facts.
Links:
[DevoxxBE2012] Architecture All the Way Down
Kirk Knoernschild, a software developer passionate about modular systems and author of “Java Application Architecture,” explored the pervasive nature of architecture in software. Kirk, drawing from his book on OSGi patterns, challenged traditional views, arguing architecture permeates all levels—from high-level designs to code.
He invoked the “turtles all the way down” anecdote to illustrate architecture’s recursive essence: decisions at every layer impact flexibility. Kirk critiqued ivory-tower approaches, advocating collaborative, iterative practices aligning business and technology.
Paradoxically, architecture aims for change resistance yet adaptability. Temporal dimensions—decisions’ longevity—affect modularity: stable elements form foundations, volatile ones remain flexible.
Kirk linked SOA’s service granularity to modularity, noting services as deployable units fostering reuse. He emphasized patterns ensuring evolvability without rigidity.
Demystifying Architectural Paradoxes
Kirk elaborated on architecture’s dual goals: stability against volatility. He used examples where over-design stifles agility, advocating minimal upfront planning with evolutionary refinement.
Temporal hierarchies classify decisions by change frequency: strategic (years), tactical (months), operational (days). This guides layering: stable cores support variable extensions.
Granularity and Modularity Principles
Discussing granularity, Kirk warned against extremes: monolithic systems hinder reuse, while overly fine-grained modules increase complexity. Patterns such as abstract modules and dependency injection promote loose coupling.
He showcased OSGi’s runtime modularity, enforcing boundaries via exports/imports, preventing spaghetti code.
Linking Design to Temporal Decisions
Kirk connected design principles—SOLID—to temporal aspects: single responsibility minimizes change impact; open-closed enables extension without modification.
He illustrated with code: classes as small modules, packages as mid-level, OSGi bundles as deployable.
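The open-closed idea can be shown in a few lines (a generic illustration with invented names, not Kirk’s example): new behavior arrives as a new class, and the client depending on the abstraction never changes.

```java
// Open-closed sketch: the client is closed to modification but the system
// is open to extension through new implementations of the abstraction.
public class OpenClosedSketch {
    interface Notifier { String send(String msg); }

    static class EmailNotifier implements Notifier {
        public String send(String msg) { return "email:" + msg; }
    }

    // Extension added later without touching existing code.
    static class SmsNotifier implements Notifier {
        public String send(String msg) { return "sms:" + msg; }
    }

    // Client code: depends only on the abstraction.
    static String notifyUser(Notifier n, String msg) { return n.send(msg); }

    public static void main(String[] args) {
        System.out.println(notifyUser(new EmailNotifier(), "hi")); // email:hi
        System.out.println(notifyUser(new SmsNotifier(), "hi"));   // sms:hi
    }
}
```

The same shape recurs at every level Kirk describes: classes behind interfaces, packages behind APIs, bundles behind OSGi exports.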
SOA and Modular Synergies
In SOA, services mirror modules: autonomous, composable. Kirk advocated aligning service boundaries with business domains, using modularity patterns for internal structure.
He critiqued layered architectures fostering silos, preferring vertical slices for cohesion.
Practical Implementation and Tools
Kirk recommended modular frameworks like OSGi or Jigsaw, but stressed design paradigms over tools. A catalog of modularity patterns aids in designing evolvable systems.
He concluded: multiple communication levels—classes to services—enhance understanding, urging focus on modularity for adaptive software.
Kirk’s insights reframed architecture as holistic, from code to enterprise, essential for enduring systems.