
[DevoxxBE2012] Rapid Application Development with Play 2

Peter Hausel, a core team member of the Play Framework and software engineer at Typesafe, demonstrated the efficiencies of Play 2 for swift web development in both Java and Scala. With a background in web technologies and open source, Peter showcased how Play streamlines workflows, emphasizing live coding to illustrate its features.

He began by creating a new Java-based application, highlighting Play’s MVC structure: controllers for logic, views for Scala-based templates, and routes for URL mapping. Peter noted that even Java projects use Scala templates, which require minimal learning and feel familiar to anyone who knows JSP.

A key advantage is on-the-fly compilation; changes reload automatically without restarts, accelerating iterations. Peter modified a controller and template, refreshing the browser to see instant updates.

Type-safe templates and routes prevent runtime errors, with compile-time checks ensuring correctness. He integrated front-end tools like CoffeeScript, LESS, and Google Closure Compiler, compiling them seamlessly into production assets.

Peter explored asset management, using RequireJS for modular JavaScript, and reverse routing to generate URLs dynamically, avoiding hardcoding.

For production, he packaged the app into a distributable ZIP and ran it standalone. He also peeked into running applications via a REPL for interactive debugging.

Testing was touched upon, with future scaffolding promised to generate tests easily.

Peter’s demo underscored Play’s design for productivity, blending familiarity with powerful abstractions.

Core Structure and Development Workflow

Peter detailed Play’s layout: the app folder houses controllers, views, and assets, while conf holds configuration files and the routes definition. Routes map HTTP methods and URL paths to controller methods, with support for parameters for dynamic handling.
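As a minimal sketch of what such a routes file might look like (the controller and action names here are hypothetical, not taken from the demo), each line pairs an HTTP method and a URL pattern with a controller call:

```
# conf/routes -- hypothetical example: method, URL pattern, controller call
GET     /                  controllers.Application.index()
GET     /tasks/:id         controllers.Tasks.show(id: Long)
POST    /tasks             controllers.Tasks.create()
```

The :id segment is extracted and passed to the controller method as a typed parameter, so a mismatch between route and method signature is caught at compile time rather than at runtime.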

In live coding, he added a new route and controller method, demonstrating error handling where compilation failures display directly in the browser, guiding fixes.

Templates use Scala syntax for logic, with type safety catching mismatches early. Peter included layouts via @main, composing reusable structures.
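A hypothetical template illustrating this composition might look like the following (the parameter and title are invented for illustration):

```
@* app/views/index.scala.html -- hypothetical; the first line declares typed parameters *@
@(message: String)

@main("Welcome to Play") {
    <h1>@message</h1>
}
```

Because the template declares its parameters with types, passing anything but a String here fails at compile time.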

Front-End Integration and Asset Handling

Play compiles CoffeeScript and LESS on-the-fly, minifying with Closure for optimization. Peter added a CoffeeScript file, seeing it transform into JavaScript.

RequireJS integration modularizes scripts, with routes serving them efficiently.

Deployment and Advanced Features

Packaging creates self-contained distributions, runnable via commands. REPL allows inspecting live state.

Peter recapped: routes centralize mapping, auto-reload speeds development, and reverse routing enhances maintainability.

His session positioned Play as a tool for rapid, robust applications.


[DevoxxFR2013] Keynote “Ouverture”: Welcoming Devoxx France 2013

Lecturer

Nicolas Martignole is an independent consultant and founder of Innoteria, with over a decade of experience in Java. He specializes in architecture, team coaching, and project management, having implemented Scrum at a major French bank since 2008 and previously at Thomson-Reuters as a senior developer and project manager. He authors the blog “Le Touilleur Express.”

Antonio Goncalves is a senior architect consulting and training on Java technologies. Formerly a Weblogic consultant at BEA Systems, he has focused on software architecture since 1998. Author of “Java EE 5” (Eyrolles) and “Beginning Java EE 6 Platform With GlassFish 3” (Apress), he contributes to JCP on Java EE 6, JPA 2.0, and EJB 3.1. He teaches at the Conservatoire National des Arts et Métiers and co-founded the Paris Java User Group.

Zouheir Cadi is an independent consultant specializing in Java/JEE technologies. After years in development, he serves as a production architect, bridging development and operations. Currently at France’s top e-commerce site, he is a Paris JUG board member and Devoxx France co-founder.

José Paumard, passionate about programming for over 20 years, transitioned from assembler and C to Java. An assistant professor at Paris University for 15 years with a PhD in applied mathematics and computer science, he blogs at “Java le soir,” a key French resource on Java. A Paris JUG member, he co-organizes Devoxx France and speaks at conferences like Devoxx and JavaOne.

Abstract

This article examines the opening keynote of Devoxx France 2013, delivered by Nicolas Martignole, Antonio Goncalves, Zouheir Cadi, and José Paumard. It contextualizes the event’s growth, organizational challenges, and community focus, analyzing session selection, special events, and thematic keynotes on past, present, and future of the industry. The discussion highlights transparency in call-for-papers, sponsor contributions, and efforts to engage diverse audiences, underscoring Devoxx’s role in fostering Java and broader tech ecosystems.

Event Overview and Growth Trajectory

The keynote commences with warm welcomes, acknowledging the team’s efforts in hosting Devoxx France 2013, a three-day event with 180 speakers and 75% French-language content. Martignole notes the expansion from 1,250 attendees in the inaugural edition to 1,400, an increase of 150, signaling strong community interest. This growth mirrors the Devoxx family’s international success: 3,500 in Belgium and 500 for the UK’s debut, organized in just four months.

A satisfaction survey from the previous year, with 301 responses, informs improvements. Despite its length (81% found it too long, 18% much too long), it guides refinements, with Goncalves promising quality control for future iterations. The emphasis on constructive feedback, especially negatives, reflects a commitment to attendee-driven evolution.

Practical details include a free “Meet and Greet” evening with wine and cheese, sponsored by SonarSource, Atlassian, and CloudBees, running until 22:00. Six rooms host sessions, with overflow managed by red-vested volunteers for safety. All talks are recorded on Parleys.com, alleviating concerns about missing content. Community integration is highlighted, welcoming groups like Paris JS and Scala User Group.

Organizational Transparency and Session Selection

Transparency in the call-for-papers process is a focal point. Opened November 20 and closed January 31, it garnered 572 submissions, accepting only 162 due to venue constraints. Conferences (50-minute slots) saw 320 proposals, with 74 accepted; 14 allocated to premium sponsors, leaving 60 for general selection – an 82% rejection rate.

A 10-person team, including volunteers, rigorously evaluated submissions using scores on a 0-5 scale, discussions, pizzas, and color-coded Post-its. Goncalves humorously notes resorting to a cat for ties, underscoring the process’s seriousness despite its challenges. Rejected speakers are encouraged to reapply or present at local JUGs, emphasizing inclusivity.

The keynote theme – past, present, future – features speakers like Clarisse Herrenschmidt on writing history, Martin Odersky on objects and functionals, Alexis Moussine-Pouchkine on Java’s trajectory, and Habib Guergachi on web architectures. Odersky’s evening BoF is noted for Scala enthusiasts.

Special Initiatives and Community Engagement

Unique events differentiate Devoxx: “Devoxx for Kids,” led by Audrey Neveux, introduced 70 children to programming via robots the previous day, aiming to demystify parents’ professions. Though not repeatable annually due to school changes, it inspires future iterations alongside Belgium’s multilingual versions.

“Open Source Hacking” with Brice Dutheil and Mathieu Ancelin offers hands-on contribution. The “Afternoon for Decision-Makers,” from 14:00 to 18:15, mixes genres with CIOs discussing cloud, prepared by Arnaud Héritier. Reserved seats accommodate hard-to-book executives, but it’s open to all.

“Code Story,” by David Gageot and Jean-Laurent de Morlhon, features full-day live coding in a basement room. “Mercenaries of DevOps,” with Pierre-Antoine Grisoni, Henry Gomez, and others, explores native packaging and Kanban boards the next day.

Sponsors receive gratitude: premium partners enable affordable tickets; Oxiane handles training for over half the attendees, managing complex dossiers. Medium and base sponsors filled their slots quickly, and the full exhibition halls are praised for embodying the Devoxx spirit.

In summation, the keynote reinforces Devoxx as a collaborative hub, blending education, networking, and innovation to advance the Java community and beyond.


[DevoxxFR2013] Web Oriented Architecture: A Transmutation of Practices in Building Information Systems

Lecturer

Habib Guergachi is a Centrale graduate and the CEO and co-founder of Zenexity, established in 2003. As an expert in urbanization, he is among the French architects who introduced key concepts such as the “integrability coefficient” of applications. His professional background includes over seven years at the Central IT Department of AXA, more than three years at the IT Strategy Department of Société Générale, and five years on the executive committee and Technical Direction of a major IT services company. Currently, he leads large-scale urbanization projects and transformations of information systems toward web-oriented models. He also conducts seminars at the prestigious Capgemini Institute.

Abstract

This article explores Habib Guergachi’s lecture on Web Oriented Architecture (WOA), which critiques traditional enterprise practices and advocates for a shift to distributed, hyper-scalable systems. Drawing from historical analogies and real-world examples like LinkedIn, Guergachi argues for abandoning monolithic architectures in favor of independent, reactive applications that leverage modern web protocols. The discussion analyzes the implications for software development, emphasizing innovation, scalability, and the rejection of outdated paradigms to ensure future competitiveness in the French and global IT landscape.

Contextualizing the Shift from Hyper-Integrated to Hyper-Distributed Systems

Guergachi begins by drawing a parallel between the decline of traditional industries, such as steel mills like Gandrange and Florange, and the potential obsolescence of current software engineering practices. He posits that modern IT specialists, akin to specialized workers in software factories, risk irrelevance if they fail to innovate. The core dilemma is the overemphasis on hyper-integrated systems, where enterprises purchase off-the-shelf software that imposes architectures dictated by vendors. This leads to rigid, costly structures that stifle adaptability.

In contrast, Guergachi introduces the concept of hyper-distributed architectures, inspired by web-oriented principles. He illustrates this with a cultural anecdote: a hypothetical Chinese viewer searching for a French film clip on ina.fr fails due to rigid, integrated search mechanisms, while Google succeeds through flexible, distributed intelligence. This highlights how integrated systems, often built around enterprise architecture, application servers, and service buses, create “bousins” – complex, unmaintainable assemblages of tools like CMS for content, transactional plugins, and adapters for legacy JSF applications.

The lecturer critiques the inefficiency of such systems, where decision-making processes involve dumping data into warehouses for analysis, rather than fostering real-time adaptability. He urges a generational shift: respecting past achievements that built foundational information systems but making way for younger developers to construct future-proof ones. Avoiding the trap of using ingenuity merely to bypass imposed integrations is crucial, as technological evolution accelerates.

Principles of Distributed Paradigms and Real-World Implementations

Central to Guergachi’s thesis is the advocacy for distributed paradigms over integrated ones. He references Play Framework, a French-origin technology (despite initial skepticism due to its nationality), as a tool for building independent applications. LinkedIn’s approach exemplifies this: constructing systems as separate components, each focused on core business logic, deployed autonomously. These might use various technologies but prioritize scalability, security, and reactivity.

In a distributed model, non-core functions like search are outsourced to specialized services, allowing internal applications to remain focused and resilient under load. Guergachi explains techniques such as eventual consistency for high-load scenarios and strict consistency only where necessary, like payment processing. Communication between applications relies on RESTful hypermedia over HTTP, rejecting heavy RPC protocols or plugins like Flash, which he derides as symptomatic of a “third-world syndrome” – adopting external technologies without deep understanding.

He envisions enterprises concentrating solely on core business, externalizing storage, CMS, back-office, and video management to superior providers. Performance concerns with HTTP are dismissed as psychological barriers; no in-house solution can compete with storage specialists. Applications will interconnect in a “spaghetti” manner, but one that ensures predictability and adaptability, mirroring the web’s organic structure.

Guergachi introduces entropy as a metaphor: solid (rigid, controlled architectures), liquid (flexible, scalable across servers), and gaseous (pervasive, occupying value chain interstices like Google). Enterprises must evolve toward gaseous states for survival, contrasting with legacy systems that “suck blood” through perpetual maintenance fees.

Implications for Innovation and the Role of French IT Genius

The lecturer delineates integrated paradigms – building overarching technical architectures without functional hypotheses, aiming for longevity – as flawed, akin to overpacking for unforeseen disasters. Distributed paradigms, conversely, tailor architectures per application, prioritizing functional solutions over technical absolutes. For instance, displaying cached content during network failures ensures usability, decided by business logic rather than rigid transactional rules.

A paradigm, per Guergachi, is a coherent worldview offering solutions to its own problems. He warns against half-measures, like deploying advanced frameworks on outdated servers, urging full commitment to distributed models despite risks like dismissal. Submitting to vendor-driven technologies prepares for shameful obsolescence, whereas bold shifts enable glory through innovation.

Critiquing entities like INPI’s outdated systems, he highlights France’s image issues, comparable to 1980s Korean cars. French IT genius, exemplified by talents like Guillaume Bort and Sadek Drobi, must harness business acumen. Concepts like Scalaz originated in France (at Camel), underscoring untapped potential.

The economy of the web remains to be fully realized; Silicon Valley leads but hasn’t won. French informatics must act through innovation serving functionality, user experience, and distributed architectures with increasing entropy. Mastering interconnection complexity yields value, constructing planetary software masterpieces to safeguard jobs and elevate France.

In conclusion, Guergachi’s call to action – rebooting mindsets Monday morning – emphasizes radical change for continuity. By embracing WOA, developers transcend traditional constraints, fostering systems that are open, secure, adaptable, and cost-effective, aligning with business evolutions.


[DevoxxFR2012] Client/Server Apps with HTML5 and Java

Lecturer

James Ward embodies the archetype of the polyglot developer evangelist, bringing a wealth of experience that spans nearly two decades of Java development alongside deep expertise in web technologies and cloud platforms. Having started coding in Java as early as 1997, James initially cut his teeth on server-side frameworks before joining Adobe, where he championed Flex as a rich client technology and contributed to Java Community Process specifications including JSR 286, 299, and 301 for portlet standards. His tenure at Adobe honed his ability to bridge desktop-like experiences with web delivery, a theme that permeates his later work. By 2012, James had transitioned to Heroku as a Developer Advocate, where he focused on democratizing cloud deployment and promoting modern, API-driven architectures. Much like his passion for mountain climbing—which he often analogizes to the challenges of software development—James approaches technical problems with a blend of strategic planning and relentless iteration, seeking elegant solutions amid complex terrain. His presentations are characteristically hands-on, featuring live coding demonstrations that translate abstract concepts into tangible artifacts, making him a trusted voice in the Java and web development communities.

Abstract

James Ward articulates a compelling vision for the resurgence of client/server architectures in web development, leveraging the browser as a sophisticated client powered by HTML5, JavaScript, CSS, and complementary tools, while employing Java-based servers to deliver lightweight, API-centric services. Through an end-to-end live coding session, James demonstrates how to orchestrate a modern stack comprising jQuery for DOM manipulation, LESS for dynamic styling, Twitter Bootstrap for responsive design, CoffeeScript for concise scripting, and the Play Framework for robust backend services. He extends the discussion to cloud-native deployment strategies, utilizing Heroku for application hosting and Amazon CloudFront as a Content Delivery Network to optimize static asset delivery. The presentation meticulously analyzes the methodological advantages of this decoupled approach—enhanced responsiveness, independent release cycles, and superior scalability—while addressing practical concerns such as asset management through WebJars and performance optimization. James positions this architecture as the future of web applications, particularly for mobile-first, global audiences, offering profound implications for development velocity, maintenance overhead, and user experience in an increasingly heterogeneous device landscape.

The Renaissance of Client/Server: From Server-Rendered Monoliths to Decoupled Experiences

James Ward opens with a historical reflection on web architecture evolution, observing that after a decade dominated by server-side rendering frameworks such as JSP, JSF, and Ruby on Rails templates, the pendulum is swinging back toward a client/server model reminiscent of early thin-client applications—but now enriched by the browser’s matured capabilities. In this paradigm, the browser assumes responsibility for rendering rich, interactive interfaces using HTML5, CSS3, and JavaScript, while the server is reduced to a stateless API provider delivering JSON or other data formats over HTTP. This shift, James argues, is propelled by the proliferation of smartphones and tablets, which demand native-like responsiveness and offline functionality that server-rendered pages struggle to deliver efficiently. HTML5 standards—local storage, Web Workers, Canvas, and progressive enhancement—enable applications to function seamlessly across devices without native code, while responsive design principles ensure adaptability to varying screen sizes. James contrasts this with traditional approaches where server-side templates intertwine presentation and logic, creating tight coupling that complicates maintenance and slows iteration. By decoupling concerns, developers can evolve the user interface independently of backend changes, a flexibility that proves invaluable in agile environments where user feedback drives rapid UI refinements.

Front-End Ecosystem: Orchestrating Productivity with CoffeeScript, LESS, and Bootstrap

Delving into the client-side stack, James Ward conducts a live coding demonstration that showcases how modern tools dramatically amplify developer productivity while maintaining code quality. He begins with CoffeeScript, a language that compiles to JavaScript and eliminates much of the verbosity and pitfalls associated with raw JS syntax. By writing expressions such as square = (x) -> x * x, CoffeeScript generates clean, idiomatic JavaScript, reducing boilerplate and enhancing readability. James emphasizes that CoffeeScript’s significant whitespace and functional programming influences encourage a more declarative style, which aligns naturally with event-driven browser programming. Complementing this, LESS extends CSS with variables, mixins, and nested rules, transforming style sheets into programmable artifacts. For instance, defining @brand-color: #428bca; and reusing it across selectors eliminates duplication and facilitates theme switching. Twitter Bootstrap enters the equation as a comprehensive UI framework, providing pre-styled components—navigation bars, modals, grids—and a responsive grid system based on media queries. James demonstrates how a simple <div class="container"> with Bootstrap classes automatically adapts layout from desktop to mobile, obviating custom media query sprawl. Within the Play Framework, these assets are served through a unified pipeline that compiles CoffeeScript and LESS on-the-fly during development and minifies them for production, ensuring optimal performance without manual intervention. This orchestrated ecosystem, James asserts, enables small teams to deliver polished, professional interfaces in a fraction of the time required by hand-crafted HTML and CSS.
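The LESS features described above can be sketched briefly; the selector names and mixin below are hypothetical, chosen only to show variables, nesting, and mixins in one place:

```less
// app/assets/stylesheets/main.less -- hypothetical sketch: variable, mixin, nesting
@brand-color: #428bca;

.rounded(@radius: 4px) {
  border-radius: @radius;
}

.navbar {
  background: @brand-color;
  .rounded();
  a { color: lighten(@brand-color, 30%); }
}
```

Changing @brand-color in one place re-themes every rule that references it, which is precisely the duplication-elimination James describes.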

Backend as API: Play Framework’s Elegance in Service Design

On the server side, James Ward positions the Play Framework as an exemplary choice for building lightweight, stateless APIs that complement rich clients. Play’s Scala-based syntax offers concise controller definitions, where a route such as GET /api/tasks controllers.Tasks.list maps directly to a method returning JSON via Ok(Json.toJson(tasks)). This simplicity belies powerful features: asynchronous request handling via Akka actors, built-in JSON serialization, and seamless WebSocket support for real-time updates. James live-codes a task management endpoint that accepts POST requests with JSON payloads, validates them using Play’s form mapping, and persists to an in-memory store—illustrating how quickly a functional API can be prototyped. Client-side consumption is equally straightforward; jQuery’s $.ajax or the Fetch API retrieves data and dynamically renders it using Bootstrap templates. James highlights Play’s hot-reloading capability, where code changes trigger instant server restarts during development, eliminating the compile-deploy cycle that plagues traditional Java web applications. For persistence, while the demo uses in-memory storage, James notes seamless integration with relational databases via Anorm or JPA, and NoSQL options through third-party modules, underscoring Play’s versatility across data models.
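A hedged sketch of such an endpoint in Play’s Scala API follows; the in-memory store and the "label" field are hypothetical stand-ins, and the code assumes a Play 2.x runtime rather than running standalone:

```scala
// app/controllers/Tasks.scala -- hypothetical sketch, Play 2.x Scala API
package controllers

import play.api.mvc._
import play.api.libs.json._

object Tasks extends Controller {
  // In-memory "store" standing in for a real persistence layer
  private var tasks = Seq("write slides", "book flight")

  // GET /api/tasks -- serialize the list to a JSON array
  def list = Action {
    Ok(Json.toJson(tasks))
  }

  // POST /api/tasks -- accept a JSON body such as {"label": "..."}
  def create = Action(parse.json) { request =>
    (request.body \ "label").asOpt[String] match {
      case Some(label) =>
        tasks = tasks :+ label
        Created(Json.toJson(label))
      case None =>
        BadRequest("missing 'label'")
    }
  }
}
```

The parse.json body parser rejects non-JSON payloads before the action body runs, which keeps the validation logic shown here minimal.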

Cloud-Native Deployment: Heroku and CDN Synergy

Deployment emerges as a cornerstone of James Ward’s vision, and he demonstrates the effortless path from local development to global production using Heroku and Amazon CloudFront. A simple heroku create followed by git push heroku master triggers an automated build process that compiles assets, runs tests, and deploys the application to a dynamically scaled cluster of dynos. Heroku’s add-on ecosystem provides managed PostgreSQL, Redis, and monitoring without operational overhead, embodying Platform-as-a-Service ideals. For static assets—JavaScript, CSS, images—James configures CloudFront as a CDN, caching content at edge locations worldwide to reduce latency and offload server load. Configuration involves setting cache headers in Play and pointing DNS to the CloudFront distribution, a process that takes minutes yet yields significant performance gains. James emphasizes versioning strategies: semantic versioning for APIs combined with cache-busting query parameters for assets ensures smooth upgrades without stale content issues. This cloud-native approach not only accelerates time-to-market but also aligns cost with usage, as Heroku scales dynos automatically and CloudFront bills per byte transferred.

Asset Management Revolution: WebJars and Dependency Convergence

A particularly innovative contribution James Ward introduces is WebJars, a convention for packaging client-side libraries—jQuery, Bootstrap, lodash—as standard JAR files consumable via Maven or Gradle. By declaring <dependency><groupId>org.webjars</groupId><artifactId>bootstrap</artifactId><version>5.3.0</version></dependency> in a POM, developers integrate front-end assets into the same dependency resolution pipeline as server-side libraries, eliminating the chaos of manually downloading and versioning scripts. Play’s asset pipeline automatically extracts these resources to the classpath, making them available via routes.Assets.at("lib/bootstrap/js/bootstrap.min.js"). James demonstrates creating a custom WebJar the night before the talk, packaging both minified and source versions, and stresses the importance of avoiding the historical mistake of scattering JARs in source trees. This unification streamlines builds, enables reproducible environments, and facilitates security patching through centralized dependency updates.
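In a Play template, a WebJar asset might then be referenced through the assets reverse route; this fragment is a hypothetical illustration, not code from the talk:

```
<!-- hypothetical fragment of a scala.html view -->
<link rel="stylesheet" href='@routes.Assets.at("lib/bootstrap/css/bootstrap.min.css")'>
<script src='@routes.Assets.at("lib/jquery/jquery.min.js")'></script>
```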

Methodological and Architectural Implications for Modern Web Development

James Ward synthesizes the architectural benefits of this client/server separation, noting that independent release cycles allow frontend teams to iterate on user experience without backend coordination, and vice versa. Responsive design via Bootstrap future-proofs applications against new device form factors, while HTML5’s progressive enhancement ensures graceful degradation on older browsers. Performance considerations—minification, concatenation, CDN caching—combine to deliver sub-second load times even on mobile networks. James addresses testing strategies: Jasmine for client-side unit tests, Specs2 for server-side, and Selenium for end-to-end flows, all integrated into the build pipeline. API versioning through URL paths or headers maintains backward compatibility as schemas evolve. The implications are profound: development velocity increases dramatically, maintenance burden decreases through modularization, and user satisfaction rises with fluid, native-like experiences. For enterprises transitioning from legacy portals, James advocates gradual migration—exposing existing services as JSON APIs while incrementally replacing UI with modern client code—minimizing risk while capturing immediate benefits.

Future Trajectories and Community Considerations

Looking ahead, James Ward anticipates continued maturation of HTML5 standards, broader adoption of Web Components for encapsulation, and potential successors to JavaScript such as Dart or WebAssembly, though he expresses skepticism about near-term displacement given browser ecosystem inertia. Tools like GWT, which compile Java to JavaScript, are acknowledged as viable alternatives for Java-centric teams, but James personally favors direct control over client-side code for maximum flexibility. The presentation closes with an invitation to contribute to WebJars and engage with the Play community, reinforcing open-source collaboration as a catalyst for innovation.


[DevoxxBE2012] The Advantage of Using REST APIs in Portal Platforms to Extend the Reach of the Portal

Rinaldo Bonazzo, a seasoned IT professional with extensive experience in project management and technology evangelism for Entando, highlighted the strategic benefits of integrating REST APIs into portal platforms during his presentation. Rinaldo, who has led initiatives in sectors like animal health and European community projects, emphasized how Entando, an open-source Java-based portal, leverages REST to facilitate seamless data exchange across diverse systems and devices.

He began by outlining Entando’s capabilities as a comprehensive web content management system and framework, enabling developers to build vertical applications efficiently. Rinaldo explained the decision to adopt JSR-311 (now part of Java EE 6) for RESTful services, which allows Entando to connect with external clients effortlessly. This approach minimizes development effort, as REST standardizes interactions using lightweight protocols like JSON or XML, making integration with web clients, smartphones, and tablets straightforward.
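The style of service this enables can be sketched with standard JSR-311 annotations; the resource class, path, and JSON body below are hypothetical illustrations (not Entando’s actual API), and the class assumes a JAX-RS runtime on the classpath:

```java
// Hypothetical JSR-311 (JAX-RS) resource exposing a portal content item
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Path("/api/content/{id}")
public class ContentResource {

    @GET
    @Produces({MediaType.APPLICATION_JSON, MediaType.APPLICATION_XML})
    public String get(@PathParam("id") String id) {
        // A real implementation would load the content item from the portal;
        // here we simply echo the identifier for illustration.
        return "{\"id\": \"" + id + "\"}";
    }
}
```

The same resource can serve JSON or XML depending on the client’s Accept header, which is what makes the single base URI usable from browsers, smartphones, and third-party systems alike.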

In a practical demonstration, Rinaldo showcased creating a service to publish open data across multiple devices. He illustrated how REST APIs provide a base URI for accessing resources, such as content, images, or entities, without the overhead of more complex protocols. This not only accelerates development but also ensures that portals can reach beyond traditional boundaries, fostering broader adoption within organizations.

Rinaldo stressed the importance of REST in modern architectures, where portals must interact with sensors, mobile apps, third-party services like BI tools or CRM systems, and even legacy applications. By collecting data from various sources—such as IoT devices in smart cities or user inputs from mobile forms—Entando exposes this information uniformly, supporting web browsers, extranets, and accessibility features for users with disabilities.

He shared real-world examples from Entando’s deployments, including portals for the Italian Civil Defense Department and the Ministry of Justice. These implementations prioritize accessibility, ensuring compliance with standards that allow visually impaired users to access content. Rinaldo pointed to the municipality of Cerea’s open data initiative, where REST APIs enable developers to retrieve resources like georeferenced data or submit requests via mobile apps, demonstrating practical extensions of portal functionality.

Furthermore, Rinaldo discussed security aspects, noting Entando’s use of OAuth for authorization, which secures API access with tokens. This ensures safe data exchange while maintaining openness.

Overall, Rinaldo’s insights underscored how REST APIs transform portals from isolated systems into interconnected hubs, enhancing reach and utility. By adhering to established standards, developers can innovate rapidly, integrating portals with emerging technologies and meeting diverse user needs effectively.

Extending Portal Functionality Through Integration

Rinaldo elaborated on the architectural advantages, where REST enables portals to act as central data aggregators. For instance, in smart city applications, APIs collect sensor data for traffic management, which portals then process and disseminate. Similarly, mobile integrations allow direct content insertion, as seen in Maxability’s iPhone app for Entando, where users submit georeferenced photos that portals geolocate and manage.

He highlighted government successes in Italy, where Entando’s portals support critical operations while ensuring inclusivity. Features like API documentation pages, as in Cerea’s developer portal, provide clear guidance on endpoints, methods, and parameters, lowering barriers for external developers.

Rinaldo concluded by inviting engagement with Entando’s community, reinforcing that REST not only extends reach but also promotes collaborative ecosystems. His presentation illustrated a shift towards open, extensible platforms that adapt to evolving digital landscapes.


How to know which versions of Xerces and Xalan are run in the JDK?

Run this command:

java com.sun.org.apache.xalan.internal.xslt.EnvironmentCheck

[DevoxxFR2012] Input/Output: 16 Years Later – Advancements in Java’s I/O Capabilities

Lecturer

Jean-Michel Doudoux has pursued a lifelong passion for software development and technological exploration, starting from his early experiences with a Commodore 64. His professional path includes extensive work in French IT services companies (SSII) and independent projects involving multiple programming languages. Embracing Java since its 1.0 release, Jean-Michel has become a prominent educator in the field, authoring two comprehensive tutorials distributed under the GNU FDL license. One focuses on Eclipse, spanning roughly 600 pages; the other, which he continues to update, covers Java in over 100 chapters and 2400 pages. As co-founder of the Lorraine JUG and a member of YaJUG, he actively engages with the Java community by organizing events and sharing resources. His website hosts these tutorials and related materials.

Abstract

Jean-Michel Doudoux provides an in-depth review of input/output (I/O) evolution in Java, from the initial java.io package in Java 1.0 to the sophisticated NIO.2 framework introduced in Java 7. He dissects the shortcomings of earlier APIs in handling files, asynchronous operations, and metadata, showcasing how NIO.2 introduces efficient, extensible solutions. With extensive code demonstrations and comparative evaluations, Doudoux covers path operations, attribute management, permissions, directory navigation, and event notifications. The analysis reveals methodological improvements for performance and cross-platform consistency, with significant ramifications for data-intensive applications, server-side processing, and modern software engineering practices.

Early Java I/O: Streams and Initial Constraints

Jean-Michel Doudoux commences by outlining the foundational I/O mechanisms in Java, rooted in the java.io package since version 1.0. This API centered on streams for byte or character data, offering basic functionality for reading and writing but suffering from synchronous blocking that hindered scalability in networked or multi-threaded scenarios. The File class served as the primary interface for filesystem interactions, yet it was limited to simple operations like existence checks, deletion, and renaming, without support for symbolic links or detailed metadata.

Doudoux highlights how paths were treated as mere strings, prone to errors from platform-specific separators—forward slashes on Unix-like systems versus backslashes on Windows. Creating directories required manual verification of parent existence, and attribute access like last modification time necessitated platform-dependent native code, compromising portability. These constraints often forced developers to resort to external libraries for routine tasks, such as recursive directory deletion or efficient copying.

The introduction of NIO in Java 1.4 marked a step forward with channels and buffers for non-blocking I/O, particularly beneficial for socket communications. However, Doudoux notes that filesystem capabilities remained underdeveloped; RandomAccessFile allowed position-based access, but asynchronous file operations were absent. This historical overview underscores the need for a more comprehensive API, setting the foundation for NIO.2’s contributions.

NIO.2’s Core Abstractions: Filesystems, Paths, and Providers

Doudoux proceeds to the heart of NIO.2, encapsulated in the java.nio.file package, which abstracts filesystems through the FileSystem interface and its implementations. This design allows uniform treatment of local disks, ZIP archives, or even custom virtual filesystems, greatly improving code reusability across environments.

Path objects, created via Paths.get(), represent locations independently of the underlying filesystem. Doudoux explains relativization—computing paths between locations—and resolution—combining paths—while normalization eliminates redundancies like “.” or “..”. For instance, resolving a relative path against an absolute one yields a complete location, mitigating common string concatenation pitfalls.

Path base = Paths.get("/projects/devoxx");
Path relative = Paths.get("slides/part3.pdf");
Path fullPath = base.resolve(relative); // /projects/devoxx/slides/part3.pdf
Path cleaned = Paths.get("/projects/./devoxx/../archive").normalize(); // /projects/archive
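The inverse operation, relativize(), computes the path leading from one location to another; a minimal, self-contained sketch (the paths are illustrative):

```java
import java.nio.file.Path;
import java.nio.file.Paths;

public class RelativizeDemo {
    public static void main(String[] args) {
        Path base = Paths.get("/projects/devoxx");
        Path target = Paths.get("/projects/devoxx/slides/part3.pdf");
        // relativize() answers: "starting from base, how do I reach target?"
        Path relative = base.relativize(target);
        System.out.println(relative); // slides/part3.pdf (separator varies by platform)
    }
}
```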

Filesystem providers extend this flexibility: the default provider handles local files, while FileSystems.newFileSystem() mounts alternatives such as ZIP archives. Doudoux demonstrates with an example:

Map<String, String> env = new HashMap<>();
env.put("create", "true"); // Create if absent
URI uri = URI.create("jar:file:/data/archive.zip");
FileSystem zipFs = FileSystems.newFileSystem(uri, env);

This enables treating archives as navigable directories, useful for bundled resources or logs. Custom providers could integrate remote storage like FTP or cloud services, expanding Java’s reach in distributed systems.
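As a concrete sketch of treating an archive as a navigable directory, the snippet below creates a ZIP in a temporary location and writes one entry into it; the entry name and content are illustrative:

```java
import java.io.IOException;
import java.net.URI;
import java.nio.charset.StandardCharsets;
import java.nio.file.FileSystem;
import java.nio.file.FileSystems;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.HashMap;
import java.util.Map;

public class ZipFsDemo {
    // Creates a ZIP archive at the given location and writes one entry into it.
    static void writeEntry(Path zip, String entryName, String content) throws IOException {
        Map<String, String> env = new HashMap<>();
        env.put("create", "true"); // create the archive if absent
        URI uri = URI.create("jar:" + zip.toUri());
        try (FileSystem zipFs = FileSystems.newFileSystem(uri, env)) {
            Path entry = zipFs.getPath(entryName);
            if (entry.getParent() != null) {
                Files.createDirectories(entry.getParent());
            }
            // Inside the archive, paths behave like ordinary directories and files
            Files.write(entry, content.getBytes(StandardCharsets.UTF_8));
        }
    }

    public static void main(String[] args) throws IOException {
        Path zip = Files.createTempFile("archive", ".zip");
        Files.delete(zip); // let newFileSystem create the archive itself
        writeEntry(zip, "/logs/app.log", "hello");
        System.out.println("wrote entry into " + zip);
    }
}
```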

Managing File Attributes, Permissions, and Security

NIO.2 revolutionizes attribute access, Doudoux asserts, through Files methods and attribute views. BasicFileAttributes exposes universal properties like creation time, size, and directory status, while platform-specific views like PosixFileAttributes add owner and group details.

Reading attributes uses getAttribute() or readAttributes():

Path file = Paths.get("/logs/app.log");
BasicFileAttributes attrs = Files.readAttributes(file, BasicFileAttributes.class);
long size = attrs.size();
FileTime lastModified = attrs.lastModifiedTime();

Permissions leverage AclFileAttributeView for Windows ACLs or PosixFilePermissions for Unix-like systems:

Path secureFile = Paths.get("/config/secrets.properties");
PosixFileAttributeView posixView = Files.getFileAttributeView(secureFile, PosixFileAttributeView.class);
posixView.setPermissions(PosixFilePermissions.fromString("rw-------"));
UserPrincipalLookupService lookup = secureFile.getFileSystem().getUserPrincipalLookupService();
posixView.setOwner(lookup.lookupPrincipalByName("admin"));

Doudoux stresses platform awareness: not all attributes are supported everywhere, requiring fallback strategies. This granularity enhances security in enterprise applications, allowing fine-tuned access control without external tools.
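One such fallback strategy is to ask the filesystem which attribute views it supports before requesting them; a small sketch:

```java
import java.nio.file.FileSystems;
import java.util.Set;

public class ViewCheckDemo {
    public static void main(String[] args) {
        // Reports the views the default filesystem offers, e.g. "posix" and "unix"
        // on Unix-like systems, "dos" and "acl" on Windows
        Set<String> views = FileSystems.getDefault().supportedFileAttributeViews();
        System.out.println(views);
        // Only "basic" is guaranteed on every platform; branch before using the rest
        boolean posixAvailable = views.contains("posix");
        System.out.println("POSIX permissions available: " + posixAvailable);
    }
}
```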

Efficient Directory Operations and Event Monitoring

Directory traversal in NIO.2 employs Files.newDirectoryStream() for single-level listings or walkFileTree() with FileVisitor for customized recursive walks. Doudoux details visitor callbacks (preVisitDirectory, visitFile, postVisitDirectory, visitFileFailed), enabling actions like filtering or error recovery.
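For a flat, non-recursive listing, Files.newDirectoryStream() accepts a glob filter; a minimal sketch (the file names are illustrative):

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

public class ListDemo {
    // Returns the names of entries in dir matching *.log (non-recursive)
    static List<String> listLogs(Path dir) throws IOException {
        List<String> names = new ArrayList<>();
        // try-with-resources closes the stream; the glob is applied by the provider
        try (DirectoryStream<Path> stream = Files.newDirectoryStream(dir, "*.log")) {
            for (Path entry : stream) {
                names.add(entry.getFileName().toString());
            }
        }
        return names;
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("demo");
        Files.createFile(dir.resolve("app.log"));
        Files.createFile(dir.resolve("notes.txt"));
        System.out.println(listLogs(dir)); // [app.log]
    }
}
```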

For copying directories:

Path sourceDir = Paths.get("/old/project");
Path targetDir = Paths.get("/new/project");
Files.walkFileTree(sourceDir, EnumSet.of(FileVisitOption.FOLLOW_LINKS), Integer.MAX_VALUE, new SimpleFileVisitor<Path>() {
  @Override
  public FileVisitResult preVisitDirectory(Path dir, BasicFileAttributes attrs) throws IOException {
    Files.createDirectory(targetDir.resolve(sourceDir.relativize(dir)));
    return FileVisitResult.CONTINUE;
  }
  @Override
  public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) throws IOException {
    Files.copy(file, targetDir.resolve(sourceDir.relativize(file)));
    return FileVisitResult.CONTINUE;
  }
});

This handles symlinks and errors gracefully. Monitoring uses WatchService: register paths for events (create, delete, modify), poll for keys, process events.

WatchService watcher = FileSystems.getDefault().newWatchService();
Path monitoredDir = Paths.get("/watched/folder");
monitoredDir.register(watcher, StandardWatchEventKinds.ENTRY_CREATE, StandardWatchEventKinds.ENTRY_MODIFY, StandardWatchEventKinds.ENTRY_DELETE);

while (true) {
  WatchKey key = watcher.take();
  for (WatchEvent<?> event : key.pollEvents()) {
    Path changed = (Path) event.context();
    // React to changed
  }
  boolean valid = key.reset();
  if (!valid) break;
}

Doudoux notes event granularity varies by OS, requiring application-level filtering.
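One way to apply such filtering is to branch on the event kind and treat OVERFLOW, which signals dropped events, separately; a sketch with an illustrative helper:

```java
import java.nio.file.Path;
import java.nio.file.StandardWatchEventKinds;
import java.nio.file.WatchEvent;

public class EventFilterDemo {
    // Classifies a watch event. OVERFLOW means the OS dropped events,
    // so the directory should be rescanned rather than trusted.
    static String describe(WatchEvent<?> event) {
        WatchEvent.Kind<?> kind = event.kind();
        if (kind == StandardWatchEventKinds.OVERFLOW) {
            return "overflow: rescan directory";
        }
        // For create/modify/delete, the context is the affected relative path
        Path changed = (Path) event.context();
        return kind.name() + ": " + changed;
    }
}
```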

Asynchronous I/O Channels: Non-Blocking for Scalability

NIO.2’s asynchronous channels—AsynchronousFileChannel, AsynchronousSocketChannel—facilitate non-blocking operations, Doudoux explains. Use futures for blocking waits or CompletionHandlers for callbacks.

Path path = Paths.get("/data/input.bin");
AsynchronousFileChannel asyncChannel = AsynchronousFileChannel.open(path, StandardOpenOption.READ);
ByteBuffer buf = ByteBuffer.allocate(4096);
Future<Integer> readFuture = asyncChannel.read(buf, 0);
int bytesRead = readFuture.get(); // Blocks until complete

Handlers:

asyncChannel.read(buf, 0, null, new CompletionHandler<Integer, Void>() {
  @Override
  public void completed(Integer result, Void attachment) {
    // Process buffer
  }
  @Override
  public void failed(Throwable exc, Void attachment) {
    // Handle error
  }
});

Doudoux analyzes scalability: servers handle thousands of connections without thread blocking, ideal for high-throughput systems like web servers or data pipelines.

Broader Impacts on Java Ecosystems and Development Practices

Doudoux reflects on NIO.2’s transformative effects: it standardizes operations previously requiring hacks, promoting cleaner code. For portability, abstracted filesystems ease multi-platform development; for performance, asynchronous I/O reduces latency in I/O-bound apps.

In enterprises, this means efficient log monitoring or data migration. Doudoux acknowledges low-level aspects require care but praises extensibility for future integrations like cloud storage.

Overall, NIO.2 modernizes Java I/O, aligning it with contemporary demands for efficiency and flexibility.

Links:

PostHeaderIcon [DevoxxFR2012] There Is No Good Business Model: Rethinking Domain Modeling for Service-Oriented Design and Implementation

Lecturer

Grégory Weinbach has cultivated more than twenty years of experience in software development, spanning a diverse spectrum of responsibilities that range from sophisticated tooling and code generation frameworks to agile domain modeling and the practical application of Domain Driven Design principles. His professional journey reflects a deliberate pursuit of polyvalence, enabling him to operate fluidly across the entire software development lifecycle—from gathering nuanced user requirements to implementing robust, low-level solutions. Grégory maintains a discerning and critical perspective on prevailing methodologies, whether they manifest as Agile practices, Model-Driven Architecture, Service-Oriented Architecture, or the contemporary Software Craftsmanship movement, always prioritizing the fundamental question of “why” before addressing the mechanics of “how.” He is a frequent speaker at various technical forums, including the Paris Java User Group and all five editions of the MDDay conference, and regularly conducts in-depth seminars for enterprise clients on pragmatic modeling techniques that balance theoretical rigor with real-world applicability.

Abstract

Grégory Weinbach delivers a provocative and intellectually rigorous examination of a widely held misconception in software design: the notion that a “good” domain model must faithfully mirror the intricacies of the underlying business reality. He argues persuasively that software systems do not replicate the business world in its entirety but rather operationalize specific, value-delivering services within constrained computational contexts. Through a series of meticulously constructed case studies, comparative analyses, and conceptual diagrams, Grégory demonstrates how attempts to create comprehensive, “truthful” business models inevitably lead to bloated, inflexible codebases that become increasingly difficult to maintain and evolve. In contrast, he advocates for a service-oriented modeling approach where domain models are deliberately scoped, context-bound artifacts designed to support concrete use cases and implementation requirements. The presentation delves deeply into the critical distinction between business models and domain models, the strategic use of bounded contexts as defined in Domain Driven Design, and practical techniques for aligning technical architectures with organizational service boundaries. The implications of this paradigm shift are profound, encompassing reduced developer cognitive load, enhanced system evolvability, accelerated delivery cycles, and the cultivation of sustainable software architectures that remain resilient in the face of changing business requirements.

The Fallacy of Universal Truth: Why Business Reality Cannot Be Fully Encapsulated in Code

Grégory Weinbach commences his discourse with a bold and counterintuitive assertion: the persistent belief that effective software modeling requires a direct, isomorphic mapping between code structures and real-world business concepts represents a fundamental and pervasive error in software engineering practice. He elucidates that while business models—typically expressed through process diagrams, organizational charts, and natural language descriptions—serve to communicate and analyze human activities within an enterprise, domain models in software exist for an entirely different purpose: to enable the reliable, efficient, and maintainable execution of specific computational tasks. Attempting to construct a single, monolithic model that captures the full complexity of a business domain inevitably results in an unwieldy artifact that attempts to reconcile inherently contradictory perspectives, leading to what Weinbach terms “model schizophrenia.” He illustrates this phenomenon through a detailed examination of a retail enterprise scenario, where a unified model encompassing inventory management, customer relationship management, financial accounting, and regulatory compliance creates a labyrinthine network of interdependent entities. A modification to inventory valuation rules, for instance, might inadvertently cascade through customer segmentation logic and tax calculation modules, introducing subtle bugs and requiring extensive regression testing across unrelated functional areas.

Bounded Contexts as Cognitive and Architectural Boundaries: The Domain Driven Design Solution

Building upon Eric Evans’ foundational concepts in Domain Driven Design, Weinbach introduces bounded contexts as the primary mechanism for resolving the contradictions inherent in universal modeling approaches. A bounded context defines a specific semantic boundary within which a particular model and its associated ubiquitous language hold true without ambiguity. He argues that each bounded context deserves its own dedicated model, even when multiple contexts reference conceptually similar entities. For example, the notion of a “customer” within a marketing analytics context—characterized by behavioral attributes, segmentation tags, and lifetime value calculations—bears little structural or behavioral resemblance to the “customer” entity in a legal compliance context, which must maintain immutable audit trails, contractual obligations, and regulatory identifiers. Weinbach presents a visual representation of these distinct contexts, showing how the marketing model might employ lightweight, denormalized structures optimized for analytical queries, while the compliance model enforces strict normalization, versioning, and cryptographic signing. This deliberate separation not only prevents the contamination of precise business rules but also enables independent evolution of each model in response to domain-specific changes, dramatically reducing the blast radius of modifications.

Service-Oriented Modeling: From Abstract Nouns to Deliverable Verbs

Weinbach pivots from theoretical critique to practical prescription by advocating a service-oriented lens for domain modeling, where the primary organizing principle is not the static structure of business entities but the dynamic delivery of specific, value-adding services. He contends that traditional approaches often fall into the trap of “noun-centric” modeling, where developers attempt to create comprehensive representations of business objects loaded with every conceivable attribute and behavior, resulting in god classes that violate the single responsibility principle and become impossible to test or modify. Instead, he proposes that models should be constructed around concrete service verbs—”process payment,” “generate invoice,” “validate shipment”—with each model containing only the minimal set of concepts required to fulfill that service’s contract. Through a logistics case study, Weinbach demonstrates how modeling the “track shipment” service yields a streamlined aggregate consisting of a shipment identifier, a sequence of timestamped status events, and a current location, purposefully omitting unrelated concerns such as inventory levels or billing details. This focused approach not only produces cleaner, more maintainable code but also naturally aligns technical boundaries with organizational responsibilities, facilitating clearer communication between development teams and business stakeholders.
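As an illustration of such a deliberately scoped aggregate, the sketch below renders the "track shipment" service in Java; all class and field names are hypothetical, not taken from the talk:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.Date;
import java.util.List;

// Minimal "track shipment" aggregate: an identifier, a sequence of
// timestamped status events, and a current location. Nothing else:
// no inventory levels, no billing details.
public class Shipment {
    public static final class StatusEvent {
        final Date timestamp;
        final String status;
        StatusEvent(Date timestamp, String status) {
            this.timestamp = timestamp;
            this.status = status;
        }
    }

    private final String shipmentId;
    private final List<StatusEvent> history = new ArrayList<>();
    private String currentLocation;

    public Shipment(String shipmentId) {
        this.shipmentId = shipmentId;
    }

    // The service verb: record a tracking event and update the location
    public void recordEvent(Date when, String status, String location) {
        history.add(new StatusEvent(when, status));
        this.currentLocation = location;
    }

    public String currentLocation() {
        return currentLocation;
    }

    public List<StatusEvent> history() {
        return Collections.unmodifiableList(history);
    }
}
```

The point of the sketch is what it omits: the model carries only what the "track shipment" contract needs.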

The Human Factor: Reducing Cognitive Load and Enhancing Team Autonomy

One of the most compelling arguments Weinbach advances concerns the human dimension of software development. Universal models, by their very nature, require developers to maintain a vast mental map of interrelationships and invariants across the entire system, dramatically increasing cognitive load and the likelihood of errors. Service-oriented, context-bound models, conversely, allow developers to focus their attention on a well-defined subset of the domain, mastering a smaller, more coherent set of concepts and rules. This reduction in cognitive complexity translates directly into improved productivity, fewer defects, and greater job satisfaction. Moreover, when technical boundaries mirror organizational boundaries—such as when the team responsible for order fulfillment owns the order processing context—they gain true autonomy to evolve their domain model in response to business needs without coordinating with unrelated teams, accelerating delivery cycles and fostering a sense of ownership and accountability.

Practical Implementation Strategies: From Analysis to Code

Weinbach concludes his presentation with a comprehensive set of practical guidelines for implementing service-oriented modeling in real-world projects. He recommends beginning with event storming workshops that identify key business events and the services that produce or consume them, rather than starting with entity relationship diagrams. From these events, teams can derive bounded contexts and their associated models, using techniques such as context mapping to document integration patterns between contexts. He demonstrates code examples showing how anti-corruption layers translate between context-specific models when integration is required, preserving the integrity of each bounded context while enabling necessary data flow. Finally, Weinbach addresses the challenging task of communicating these principles to non-technical stakeholders, who may initially resist the idea of “duplicating” data across models. He explains that while information duplication is indeed undesirable, data duplication across different representational contexts is not only acceptable but necessary when those contexts serve fundamentally different purposes.
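An anti-corruption layer of the kind described can be sketched as a plain translator between two context-specific models; every type and field below is hypothetical, chosen only to mirror the marketing/compliance example from earlier in the talk:

```java
// Translates a compliance-context customer record into the marketing
// context's lightweight view, keeping each bounded context's language intact.
public class CustomerTranslator {
    // Source model (compliance context): strict, identity-focused
    public static final class ComplianceCustomer {
        final String legalId;
        final String registeredName;
        public ComplianceCustomer(String legalId, String registeredName) {
            this.legalId = legalId;
            this.registeredName = registeredName;
        }
    }

    // Target model (marketing context): denormalized, analytics-friendly
    public static final class MarketingCustomer {
        final String customerKey;
        final String displayName;
        public MarketingCustomer(String customerKey, String displayName) {
            this.customerKey = customerKey;
            this.displayName = displayName;
        }
    }

    // The translation is the only place the two vocabularies meet,
    // so neither model leaks into the other context
    public static MarketingCustomer toMarketing(ComplianceCustomer source) {
        return new MarketingCustomer(source.legalId, source.registeredName);
    }
}
```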

Links:

PostHeaderIcon (long tweet) How to make ChromeOS work in VirtualBox without Wifi?

Case

How to make ChromeOS work in VirtualBox without Wifi, i.e. on a wired (Ethernet) local network, or even offline?

Quick Fix

Shut down the VM > Select it > Settings > Network > Advanced > Adapter Type > select “Paravirtualized Network (virtio-net)”

PostHeaderIcon (long tweet) Virtual Box / PAE processor

Case

On booting ChromeOS Vanilla within Virtual Box, I got the following error:
[java]This kernel requires the following features not present on the CPU: pae
Unable to boot – please use a kernel appropriate for your CPU.[/java]

(The problem occurred here with ChromeOS, but it may happen with other systems as well.)

Quick Fix

In Virtual Box, select the VM > Settings > Processor > check “Enable PAE/NX”.