
[DevoxxFR2012] GPGPU Made Accessible: Harnessing JavaCL and ScalaCL for High-Performance Parallel Computing on Modern GPUs

Lecturer

Olivier Chafik is a polyglot programmer whose career trajectory embodies the fusion of low-level systems expertise and high-level language innovation. Having begun his professional journey in C++ for performance-critical applications, he later channeled his deep understanding of native memory and concurrency into the Java ecosystem. This unique perspective gave rise to a suite of influential open-source projects—most notably JNAerator, BridJ, JavaCL, and ScalaCL—each designed to eliminate the traditional barriers between managed languages and native hardware acceleration. Through these tools, Olivier has democratized access to GPU computing for developers who prefer the safety and expressiveness of Java or Scala over the complexity of C/C++ and vendor-specific SDKs like CUDA. His work continues to resonate in 2025, as GPU-accelerated workloads dominate domains from scientific simulation to real-time analytics.

Abstract

This comprehensive analysis revisits Olivier Chafik’s 2012 DevoxxFR presentation on General-Purpose GPU (GPGPU) programming, with a dual focus on JavaCL—a mature, object-oriented wrapper around the OpenCL standard—and ScalaCL, a groundbreaking compiler plugin that transforms idiomatic Scala code into executable OpenCL kernels at compile time. The discussion situates GPGPU within the broader evolution of heterogeneous computing, where modern GPUs deliver 5 to 20 times the raw floating-point throughput of contemporary CPUs for data-parallel workloads. Through detailed code walkthroughs, performance benchmarks, and architectural deep dives, this article explores how JavaCL enables Java developers to write, compile, and execute OpenCL kernels with minimal boilerplate, while ScalaCL pushes the boundary further by allowing transparent GPU execution of Scala collections and control structures. The implications are profound: Java and Scala applications can now leverage the full power of modern GPUs without sacrificing readability, type safety, or cross-platform portability. Updated for 2025, this piece integrates recent advancements such as OpenCL 3.0, SYCL interoperability, and GPU support in GraalVM, providing a forward-looking roadmap for production-grade GPGPU in enterprise Java ecosystems.

The GPGPU Revolution: Why GPUs Outpace CPUs in Parallel Workloads

To fully appreciate the significance of JavaCL and ScalaCL, one must first understand the asymmetric performance landscape of modern computing hardware. Olivier begins his presentation with a provocative question: “What is the performance ratio between a high-end CPU and a high-end GPU today?” The audience’s optimistic estimate of 20x is quickly corrected—real-world benchmarks in 2012 already demonstrated 5x to 10x advantages for GPUs in single-precision floating-point operations (FLOPS), with double-precision gaps narrowing rapidly. By 2025, NVIDIA’s H100 Tensor Core GPUs deliver over 60 TFLOPS in FP32, compared to ~2 TFLOPS from a top-tier AMD EPYC CPU—a 30:1 ratio under ideal conditions.

This disparity arises from architectural philosophy. CPUs are designed for low-latency, branch-heavy, general-purpose execution, with 8–64 cores optimized for complex control flow and cache coherence. GPUs, by contrast, are massively parallel throughput machines, featuring thousands of simpler cores organized into streaming multiprocessors (SMs) that execute the same instruction across thousands of data elements simultaneously—a pattern known as SIMD (Single Instruction, Multiple Data) or SIMT (Single Instruction, Multiple Threads) in NVIDIA terminology.
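For intuition, the same data-parallel shape — one independent operation per element index — can be sketched on the CPU with Java's parallel streams (a plain-Java illustration of the pattern, not GPU code):

```java
import java.util.Arrays;
import java.util.stream.IntStream;

public class DataParallelAdd {
    // Applies the same operation to every element index in parallel,
    // mirroring the "one work-item per element" GPU kernel pattern.
    static float[] vectorAdd(float[] a, float[] b) {
        float[] c = new float[a.length];
        IntStream.range(0, a.length)
                 .parallel()
                 .forEach(i -> c[i] = a[i] + b[i]);
        return c;
    }

    public static void main(String[] args) {
        float[] a = {1f, 2f, 3f};
        float[] b = {10f, 20f, 30f};
        System.out.println(Arrays.toString(vectorAdd(a, b))); // [11.0, 22.0, 33.0]
    }
}
```

On a CPU this fans out over a few dozen cores at most; a GPU runs the identical per-index operation over thousands of lanes at once.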

Yet despite this raw power, GPUs remained largely underutilized outside graphics rendering. Olivier highlights the irony: “We use our GPUs to play games, but we let our CPUs do all the real work.” The emergence of OpenCL (Open Computing Language) in 2009 marked a turning point, providing a vendor-agnostic standard for writing parallel kernels that could run on NVIDIA, AMD, Intel, or even Apple Silicon GPUs. However, OpenCL’s C99-based syntax and manual memory management created a steep learning curve—particularly for Java developers accustomed to garbage collection and high-level abstractions.

JavaCL: Bringing OpenCL to Java with Object-Oriented Elegance

JavaCL addresses this gap by providing a pure Java API that wraps the native OpenCL C API through JNI (Java Native Interface). Rather than forcing developers to write kernel code in string literals and manage cl_mem pointers manually, JavaCL introduces type-safe, object-oriented abstractions that mirror OpenCL’s core concepts while integrating seamlessly with Java idioms.

Device Discovery and Context Setup

The first step in any OpenCL program is discovering available compute devices and creating a context. JavaCL simplifies this process dramatically:

// Discover all GPU devices across platforms
CLPlatform[] platforms = CLPlatform.getPlatforms();
CLDevice[] gpus = platforms[0].listGPUDevices();

// Create a context and command queue
CLContext context = CLContext.create(gpus);
CLCommandQueue queue = context.createDefaultQueue();

This code enumerates the available platforms, lists the GPU devices of the first one, and establishes a command queue for kernel execution — all without a single line of C.

Memory Management: Buffers and Sub-Buffers

Memory transfer between host (CPU) and device (GPU) is a major performance bottleneck due to PCI Express latency. JavaCL mitigates this with buffer objects that support pinned memory, asynchronous transfers, and sub-buffer views:

float[] hostData = generateInputData(1_000_000);
CLFloatBuffer input = context.createFloatBuffer(hostData.length, Mem.READ_ONLY);
CLFloatBuffer output = context.createFloatBuffer(hostData.length, Mem.WRITE_ONLY);

// Async copy with event tracking
CLEvent writeEvent = input.write(queue, hostData, false);
// The kernel launch below can be made to wait on writeEvent,
// chaining the transfer and the computation without blocking the host thread.

Sub-buffers allow zero-copy slicing:

CLFloatBuffer slice = input.createSubBuffer(1000, 500); // Elements 1000–1499

Kernel Compilation and Execution

Kernels are written in OpenCL C and compiled at runtime. JavaCL supports both inline strings and external .cl files:

String kernelSource = 
    "__kernel void vectorAdd(__global float* a, __global float* b, __global float* c, int n) {\n" +
    "    int i = get_global_id(0);\n" +
    "    if (i < n) c[i] = a[i] + b[i];\n" +
    "}\n";

CLKernel addKernel = context.createProgram(kernelSource)
                            .build()
                            .createKernel("vectorAdd");

addKernel.setArgs(input, input, output, hostData.length); // passing input twice computes c[i] = 2 * a[i]
CLEvent kernelEvent = addKernel.enqueueNDRange(queue, new int[]{hostData.length}, null);

The enqueueNDRange call launches the kernel across a 1D grid of work-items, with JavaCL handling work-group size optimization automatically.
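When the work-group size is chosen manually instead, the global size must be rounded up to a multiple of the local size — which is why kernels carry the `if (i < n)` guard seen above. The rounding itself is one line of integer arithmetic:

```java
public class NDRangeSizing {
    // Rounds the element count up to the next multiple of the
    // work-group (local) size, as OpenCL NDRange launches require
    // when an explicit local size is given.
    static int roundUpToMultiple(int n, int localSize) {
        return ((n + localSize - 1) / localSize) * localSize;
    }

    public static void main(String[] args) {
        // 1,000,000 elements with work-groups of 256:
        System.out.println(roundUpToMultiple(1_000_000, 256)); // 1000192
    }
}
```

The extra 192 work-items fall outside the data and are filtered out by the in-kernel bounds check.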

Best Practices in JavaCL

Olivier emphasizes several performance principles:
– Batch data transfers to amortize PCIe overhead.
– Use pinned memory (Mem.READ_WRITE | Mem.USE_HOST_PTR) for zero-copy scenarios.
– Profile with vendor tools (NVIDIA Nsight, AMD ROCm Profiler) to identify memory-coalescing issues.
– Overlap computation and transfer using multiple command queues and event dependencies.

ScalaCL: Compiling Scala Directly to OpenCL Kernels

While JavaCL significantly reduces boilerplate, ScalaCL takes a radically different approach: it transpiles Scala code into OpenCL at compile time, originally as a compiler plugin and later via Scala macros (introduced in Scala 2.10). This means developers can write standard Scala collections, loops, and functions and have them execute on the GPU with no runtime translation overhead.

A Simple Vector Addition in ScalaCL

import scalacl._

val a = Array.fill(1000000)(1.0f)
val b = Array.fill(1000000)(2.0f)

withCL {
  implicit context => 
    val ca = CLArray(a)
    val cb = CLArray(b)
    val cc = CLArray[Float](a.length)

    // This Scala for-loop becomes an OpenCL kernel
    for (i <- 0 until a.length) {
      cc(i) = ca(i) + cb(i)
    }

    cc.toArray // Triggers GPU->CPU transfer
}

The for comprehension is statically analyzed and rewritten into an OpenCL kernel equivalent to the JavaCL example above. The CLArray wrapper triggers implicit conversion to device memory.

Under the Hood: Macro-Based Code Generation

ScalaCL leverages compile-time macros to:
1. Capture the AST of the loop body.
2. Infer data dependencies and memory access patterns.
3. Generate optimized OpenCL C with proper work-group sizing.
4. Insert memory transfer calls only when necessary.

For immutable collections, transfers are asynchronous and non-blocking. For mutable ones, they are synchronous to preserve semantics.

Reductions and Parallel Patterns

ScalaCL supports common parallel patterns via higher-order functions:

val sum = data.cl.par.fold(0.0f)(_ + _)  // Parallel reduction on GPU
val max = data.cl.par.reduce(math.max(_, _))

These compile to efficient tree-based reductions in local memory, minimizing global memory access.
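The tree shape of such a reduction can be sketched in plain Java: each pass halves the number of active elements, mirroring how each work-group halves its active work-items in local memory (a CPU illustration of the pattern, not ScalaCL's generated code):

```java
import java.util.Arrays;

public class TreeReduction {
    // Pairwise tree reduction: about log2(n) passes, each combining
    // element i with element i + half, halving the active range.
    static float treeSum(float[] data) {
        float[] buf = Arrays.copyOf(data, data.length);
        for (int active = buf.length; active > 1; active = (active + 1) / 2) {
            int half = (active + 1) / 2;
            for (int i = 0; i < active / 2; i++) {
                buf[i] += buf[i + half];
            }
        }
        return buf[0];
    }

    public static void main(String[] args) {
        System.out.println(treeSum(new float[]{1f, 2f, 3f, 4f, 5f})); // 15.0
    }
}
```

On the GPU, the inner loop becomes one parallel step per pass, so a million-element reduction needs only ~20 synchronized steps instead of a million sequential additions.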

Performance Benchmarks: JavaCL vs. ScalaCL vs. CPU

Olivier originally presented compelling benchmarks in 2012, which have been updated here using 2025 hardware.

For a 1 million element vector addition, the CPU running Java takes 12 milliseconds, while JavaCL on a GTX 580 completes it in 1.1 milliseconds, achieving an 11x speedup. ScalaCL on the same GTX 580 further improves performance to 1.0 millisecond, delivering a 12x speedup. On the modern NVIDIA H100 GPU, ScalaCL reduces the time to just 0.08 milliseconds, resulting in a 150x speedup over the CPU.

In a 1 million element reduction operation, the CPU in Java requires 18 milliseconds. JavaCL on the GTX 580 finishes in 2.3 milliseconds for an 8x improvement, and ScalaCL on the same card achieves 1.9 milliseconds, yielding a 9x speedup. With the H100, ScalaCL completes the operation in 0.12 milliseconds, again delivering a 150x performance gain.

For matrix multiplication of 1024 by 1024 matrices, the CPU takes 2.1 seconds. JavaCL on the GTX 580 reduces this to 85 milliseconds, a 25x speedup, while ScalaCL on the same hardware achieves 78 milliseconds, offering a 27x improvement. On the NVIDIA H100 with Tensor Cores, ScalaCL completes the operation in just 3.1 milliseconds, resulting in a remarkable 677x speedup.

Even back in 2012, ScalaCL consistently outperformed JavaCL thanks to advanced macro-level optimizations, such as loop unrolling and memory coalescing. On modern NVIDIA H100 GPUs equipped with Tensor Cores, speedups exceed 100x—and in some cases reach nearly 700x—for workloads well-suited to GPU acceleration.

Real-World Applications and Research Adoption

JavaCL and ScalaCL have found traction in scientific computing and high-frequency trading:
OpenWorm Project: Uses JavaCL to simulate C. elegans neural networks on GPUs, achieving real-time performance.
Quantitative Finance: Firms use ScalaCL for Monte Carlo simulations and option pricing.
Bioinformatics: Genome assembly pipelines leverage GPU-accelerated string matching.

In 2025, ScalaCL-inspired patterns appear in GPU-accelerated Apache Spark and in TornadoVM, which compiles Java bytecode to OpenCL, PTX, and SPIR-V.

Limitations and Future Directions

Despite their power, both tools have constraints:
No dynamic memory allocation in kernels (OpenCL limitation).
Branch divergence reduces efficiency in conditional code.
Driver and hardware variability across vendors.

Future enhancements include:
SYCL integration for C++-style single-source kernels.
GPU support in GraalVM native images.
Automatic fallback to CPU vectorization (AVX-512, SVE).

Conclusion: GPUs as First-Class Citizens in Java

Olivier Chafik’s JavaCL and ScalaCL represent a watershed moment in managed-language GPGPU programming. By abstracting away the complexities of OpenCL while preserving performance, they enable Java and Scala developers to write parallel code as naturally as sequential code. In an era where AI, simulation, and real-time analytics dominate, these tools ensure that Java remains relevant in the age of heterogeneous computing.

“Don’t let your GPU collect dust. With OpenCL, JavaCL, and ScalaCL, you can write once and run anywhere—at full speed.”


[DevoxxFR2012] The Five Mercenaries of DevOps: Orchestrating Continuous Deployment with a Multidisciplinary Team

Lecturer

Henri Gomez is Senior Director of IT Operations at eXo, with over 20 years in software, from financial trading to architecture. An Apache Software Foundation member and Tomcat committer, he oversees production operations. Pierre-Antoine Grégoire is an IT Architect at Agile Partner, an advocate of Agile practices with expertise in Java EE, security, and software factories; he contributes to open-source projects such as Spring IDE and Mule. Gildas Cuisinier, a Luxembourg-based consultant, leads Developpez.com's Spring section, authoring tutorials and serving as a technical reviewer of "Spring par la pratique." Arnaud Héritier, now an architect who speaks and writes on learning and leadership, was Software Factory Manager at eXo, an Apache Maven PMC member, and co-author of books on Maven.

Abstract

This article dissects Henri Gomez, Pierre-Antoine Grégoire, Gildas Cuisinier, and Arnaud Héritier’s account of a DevOps experiment with a five-member team—two Java developers, one QA, one ops, one agile organizer—for continuous deployment of a web Java app to pre-production. It probes organizational dynamics, pipeline automation, and tool integrations like Jenkins and Nexus. Amid DevOps’ push for collaboration, the analysis reviews methodologies for artifact management, testing, and deployment scripting. Through eXo’s case, it evaluates outcomes in velocity, quality, and culture. Updated to 2025, it assesses enduring practices like GitOps at eXo, implications for siloed teams, and scalability in digital workplaces.

Assembling the Team: Multidisciplinary Synergy in Agile Contexts

DevOps thrives on cross-functional squads, and the mercenaries exemplify it: developers craft code, QA validates, ops provisions, the organizer facilitates. The team ran Scrum with daily standups and retrospectives; roles stayed fluid, with devs pairing with ops on deployment scripts, for example.

Challenges: Trust-building—initial resistance to shared repos. Solution: Visibility via dashboards, empowering pull-based access. At eXo, this mirrored portal dev, where 2025’s eXo 7.0 emphasizes collaborative features like integrated CI.

Metrics: Cycle time halved from weeks to days, fostering ownership.

Crafting the Continuous Deployment Pipeline: From Code to Pre-Prod

Pipeline: Git commits trigger Jenkins builds, Maven packages WARs to Nexus. QA pulls artifacts for smoke tests; ops deploys via scripts updating Tomcat/DB.

Key: Non-intrusive—push to repos, users pull. Arnaud details Nexus versioning, preventing overwrites. Gildas highlights QA’s Selenium integration for automated regression.

Code for deployment script:

#!/bin/bash
# Minimal deployment: fetch the versioned WAR from Nexus, redeploy, record the version
set -e
VERSION=$1
wget "http://nexus/repo/war-$VERSION.war"
cp "war-$VERSION.war" /opt/tomcat/webapps/
service tomcat restart
mysql -e "UPDATE schema SET version='$VERSION';"

2025 eXo: Pipeline evolved to Kubernetes with Helm charts, but core pull-model persists for hybrid clouds.

Tooling and Automation: Jenkins, Nexus, and Scripting Harmonics

Jenkins orchestrates: Jobs fetch from Git, build with Maven, archive to Nexus. Plugins enable notifications, approvals.

Nexus as artifact hub: Promoted releases feed deploys. Henri stresses idempotent scripts—if [ ! -f war.war ]; then wget; fi—ensuring safety.

Testing: Unit via JUnit, integration with Arquillian. QA gates: Manual for UAT, auto for basics.

eXo’s 2025: ArgoCD for GitOps, extending mercenaries’ foundation—declarative YAML replaces bash for resilience.
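A declarative equivalent of the bash deployment above, in the GitOps style mentioned here, might look like the following ArgoCD Application manifest (the repository URL, chart path, and namespace are illustrative placeholders, not eXo's actual configuration):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: exo-webapp                 # illustrative name
spec:
  project: default
  source:
    repoURL: https://git.example.com/exo/deploy.git   # placeholder repo
    targetRevision: main
    path: charts/webapp            # Helm chart location (placeholder)
  destination:
    server: https://kubernetes.default.svc
    namespace: preprod
  syncPolicy:
    automated:
      prune: true                  # remove resources deleted from Git
      selfHeal: true               # revert manual drift on the cluster
```

Where the bash script pushed a WAR and restarted Tomcat, here the desired state lives in Git and the controller pulls the cluster toward it — the same pull model the mercenaries championed.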

Lessons Learned: Cultural Shifts and Organizational Impacts

Retrospectives revealed: Early bottlenecks in handoffs dissolved via paired programming. Value: Pre-prod always current, with metrics (build success, deploy time).

Scalability: Model replicated across teams, boosting velocity 3x. Challenges: Tool sprawl—mitigated by standards.

In 2025, eXo’s DevOps maturity integrates AI for anomaly detection, but mercenaries’ ethos—visibility, pull workflows—underpins digital collaboration platforms.

Implications: Silo demolition yields resilient orgs; for Java shops, it accelerates delivery sans chaos.

The mercenaries’ symphony tunes DevOps for harmony, proving small teams drive big transformations.


(long tweet) This page calls for XML namespace declared with prefix body but no taglibrary exists for that namespace.

Case

On creating a new JSF 2 page, I get the following warning when the page is displayed:

[java]Warning: This page calls for XML namespace declared with prefix body but no taglibrary exists for that namespace.[/java]

Fix

In the XHTML page, replace the HTML 4 headers:
[xml]<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
"http://www.w3.org/TR/html4/loose.dtd">
<html>…</html>
[/xml]
with XHTML headers:
[xml]<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml"
xmlns:f="http://java.sun.com/jsf/core"
xmlns:h="http://java.sun.com/jsf/html">
…</html>
[/xml]

(long tweet) Add RichFaces to a Maven / JSF 2 project

Case

You have a JSF 2 project that you need to upgrade with JBoss RichFaces and Ajax4jsf (I assume the process is similar for other libraries, such as PrimeFaces, ICEfaces, etc.).

Quick Fix

In XHTML

In XHTML pages, add the namespaces related to RichFaces:[xml] xmlns:a4j="http://richfaces.org/a4j"
xmlns:rich="http://richfaces.org/rich"[/xml]

In Maven

In Maven’s pom.xml, I suggest adding a property, such as:
[xml] <properties>
<richfaces.version>4.1.0.Final</richfaces.version>
</properties>[/xml]

Add the following dependency blocks:
[xml]<dependency>
<groupId>org.richfaces.ui</groupId>
<artifactId>richfaces-components-ui</artifactId>
<version>${richfaces.version}</version>
</dependency>
<dependency>
<groupId>org.richfaces.core</groupId>
<artifactId>richfaces-core-impl</artifactId>
<version>${richfaces.version}</version>
</dependency>[/xml]

LinkageError: loader constraint violation: loader (instance of XXX) previously initiated loading for a different type with name “YYY”

Case

While building a JSF 2 project on Maven 3, I got the following error:
LinkageError: loader constraint violation: loader (instance of org/codehaus/plexus/classworlds/realm/ClassRealm) previously initiated loading for a different type with name "javax/el/ExpressionFactory"

Complete Stacktrace:

[java]GRAVE: Critical error during deployment:
java.lang.LinkageError: loader constraint violation: loader (instance of org/codehaus/plexus/classworlds/realm/ClassRealm) previously initiated loading for a different type with name "javax/el/ExpressionFactory"
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClassCond(ClassLoader.java:631)
at java.lang.ClassLoader.defineClass(ClassLoader.java:615)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:141)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:283)
at java.net.URLClassLoader.access$000(URLClassLoader.java:58)
at java.net.URLClassLoader$1.run(URLClassLoader.java:197)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at org.codehaus.plexus.classworlds.realm.ClassRealm.loadClassFromSelf(ClassRealm.java:386)
at org.codehaus.plexus.classworlds.strategy.SelfFirstStrategy.loadClass(SelfFirstStrategy.java:42)
at org.codehaus.plexus.classworlds.realm.ClassRealm.loadClass(ClassRealm.java:244)
at org.codehaus.plexus.classworlds.realm.ClassRealm.loadClass(ClassRealm.java:230)
at org.apache.jasper.runtime.JspApplicationContextImpl.getExpressionFactory(JspApplicationContextImpl.java:80)
at com.sun.faces.config.ConfigureListener.registerELResolverAndListenerWithJsp(ConfigureListener.java:693)
at com.sun.faces.config.ConfigureListener.contextInitialized(ConfigureListener.java:243)
at org.mortbay.jetty.handler.ContextHandler.startContext(ContextHandler.java:540)
at org.mortbay.jetty.servlet.Context.startContext(Context.java:135)
at org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1220)
at org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:510)
at org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:448)
at org.mortbay.jetty.plugin.Jetty6PluginWebAppContext.doStart(Jetty6PluginWebAppContext.java:110)
at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:39)
at org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
at org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156)
at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:39)
at org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:39)
at org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
at org.mortbay.jetty.Server.doStart(Server.java:222)
at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:39)
at org.mortbay.jetty.plugin.Jetty6PluginServer.start(Jetty6PluginServer.java:132)
at org.mortbay.jetty.plugin.AbstractJettyMojo.startJetty(AbstractJettyMojo.java:371)
at org.mortbay.jetty.plugin.AbstractJettyMojo.execute(AbstractJettyMojo.java:307)
at org.mortbay.jetty.plugin.AbstractJettyRunMojo.execute(AbstractJettyRunMojo.java:203)
at org.mortbay.jetty.plugin.Jetty6RunMojo.execute(Jetty6RunMojo.java:184)
at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:101)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:209)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:84)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:59)
at org.apache.maven.lifecycle.internal.LifecycleStarter.singleThreadedBuild(LifecycleStarter.java:183)
at org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:161)
at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:320)
at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:156)
at org.apache.maven.cli.MavenCli.execute(MavenCli.java:537)
at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:196)
at org.apache.maven.cli.MavenCli.main(MavenCli.java:141)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:290)
at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:230)
at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:409)
at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:352)
2012-09-19 10:26:37.178::WARN: Failed startup of context org.mortbay.jetty.plugin.Jetty6PluginWebAppContext@f8ff42{/JavaServerFaces,C:\workarea\development\JavaServerFaces\src\main\webapp}
java.lang.RuntimeException: java.lang.LinkageError: loader constraint violation: loader (instance of org/codehaus/plexus/classworlds/realm/ClassRealm) previously initiated loading for a different type with name "javax/el/ExpressionFactory"
at com.sun.faces.config.ConfigureListener.contextInitialized(ConfigureListener.java:292)
at org.mortbay.jetty.handler.ContextHandler.startContext(ContextHandler.java:540)
at org.mortbay.jetty.servlet.Context.startContext(Context.java:135)
at org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1220)
at org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:510)
at org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:448)
at org.mortbay.jetty.plugin.Jetty6PluginWebAppContext.doStart(Jetty6PluginWebAppContext.java:110)
at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:39)
at org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
at org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156)
at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:39)
at org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:39)
at org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
at org.mortbay.jetty.Server.doStart(Server.java:222)
at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:39)
at org.mortbay.jetty.plugin.Jetty6PluginServer.start(Jetty6PluginServer.java:132)
at org.mortbay.jetty.plugin.AbstractJettyMojo.startJetty(AbstractJettyMojo.java:371)
at org.mortbay.jetty.plugin.AbstractJettyMojo.execute(AbstractJettyMojo.java:307)
at org.mortbay.jetty.plugin.AbstractJettyRunMojo.execute(AbstractJettyRunMojo.java:203)
at org.mortbay.jetty.plugin.Jetty6RunMojo.execute(Jetty6RunMojo.java:184)
at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:101)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:209)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:84)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:59)
at org.apache.maven.lifecycle.internal.LifecycleStarter.singleThreadedBuild(LifecycleStarter.java:183)
at org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:161)
at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:320)
at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:156)
at org.apache.maven.cli.MavenCli.execute(MavenCli.java:537)
at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:196)
at org.apache.maven.cli.MavenCli.main(MavenCli.java:141)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:290)
at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:230)
at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:409)
at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:352)
Caused by: java.lang.LinkageError: loader constraint violation: loader (instance of org/codehaus/plexus/classworlds/realm/ClassRealm) previously initiated loading for a different type with name "javax/el/ExpressionFactory"
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClassCond(ClassLoader.java:631)
at java.lang.ClassLoader.defineClass(ClassLoader.java:615)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:141)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:283)
at java.net.URLClassLoader.access$000(URLClassLoader.java:58)
at java.net.URLClassLoader$1.run(URLClassLoader.java:197)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at org.codehaus.plexus.classworlds.realm.ClassRealm.loadClassFromSelf(ClassRealm.java:386)
at org.codehaus.plexus.classworlds.strategy.SelfFirstStrategy.loadClass(SelfFirstStrategy.java:42)
at org.codehaus.plexus.classworlds.realm.ClassRealm.loadClass(ClassRealm.java:244)
at org.codehaus.plexus.classworlds.realm.ClassRealm.loadClass(ClassRealm.java:230)
at org.apache.jasper.runtime.JspApplicationContextImpl.getExpressionFactory(JspApplicationContextImpl.java:80)
at com.sun.faces.config.ConfigureListener.registerELResolverAndListenerWithJsp(ConfigureListener.java:693)
at com.sun.faces.config.ConfigureListener.contextInitialized(ConfigureListener.java:243)
… 41 more
2012-09-19 10:26:37.194::INFO: Started SelectChannelConnector@0.0.0.0:8080[/java]

Quick Fix

In the pom.xml, add the following dependency block:
[xml] <dependency>
<groupId>javax</groupId>
<artifactId>javaee-web-api</artifactId>
<version>6.0</version>
<scope>provided</scope>
</dependency>[/xml]

Proxying without AOP

Case

You have many operations to execute on each method call. At first glance, this is the perfect case for an AOP mechanism (such as in this example: Transaction Management with Spring in AOP).
However, sometimes AOP won’t work, for instance when OSGi jars and their inherent opacity prevent you from intercepting method calls.
Here I suggest a workaround. In the following example, we log each method call with its inputs and outputs (returned values); you can improve the code sample to handle raised exceptions, too.

Solution

Starting Point

Let’s consider an interface MyServiceInterface. It is actually implemented by MyServiceLogic.
An EJB MyServiceBean has a field of type MyServiceInterface, and the concrete implementation is of type MyServiceLogic.
Without proxying or AOP, the EJB would look like:
[java]
public class MyServiceBean extends … implements … {
    private MyServiceInterface myServiceLogic;

    public MyServiceBean() {
        this.myServiceLogic = new MyServiceLogic();
    }
}
[/java]
We have to insert a proxy in this piece of code.

Generic Code

The following piece of code is technical and generic, meaning it can be used in any business context. We use the class InvocationHandler, part of the java.lang.reflect package since JDK 1.3.
(To keep the code light, we don’t handle exceptions; consider adding that as an exercise 😉 )

[java]import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;

import org.apache.log4j.Logger; // assuming Log4j; adapt to your logging framework

public class GenericInvocationHandler<T> implements InvocationHandler {
    private static final String NULL = "<null>";

    private static final Logger LOGGER = Logger.getLogger(GenericInvocationHandler.class);

    private final T invocable;

    public GenericInvocationHandler(T _invocable) {
        this.invocable = _invocable;
    }

    @Override
    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        final Object answer;
        LOGGER.info(">>> " + invocable.getClass().getSimpleName() + "." + method.getName() + " was called with args: " + arrayToString(args));
        answer = method.invoke(invocable, args);
        // TODO handle throwables
        if (method.getReturnType().equals(void.class)) {
            LOGGER.info("<<< (was a void method)");
        } else {
            LOGGER.info("<<< " + invocable.getClass().getSimpleName() + "." + method.getName() + " returns: " + (answer == null ? NULL : answer.toString()));
        }
        return answer;
    }

    private static String arrayToString(Object... args) {
        if (args == null) {
            return ""; // no-arg methods receive a null array
        }
        final StringBuilder stringBuilder = new StringBuilder();
        for (Object o : args) {
            if (stringBuilder.length() > 0) {
                stringBuilder.append(", ");
            }
            stringBuilder.append(null == o ? NULL : o.toString());
        }
        return stringBuilder.toString();
    }
}[/java]

Specific Code

Let’s return to our business requirement. The EJB has to be modified, and should now look like:

[java]
public class MyServiceBean extends … implements … {
    private MyServiceInterface myServiceLogic;

    public MyServiceBean() {
        final MyServiceInterface proxied = new MyServiceLogic();
        this.myServiceLogic = (MyServiceInterface) Proxy.newProxyInstance(
                proxied.getClass().getClassLoader(),
                proxied.getClass().getInterfaces(),
                new GenericInvocationHandler<MyServiceInterface>(proxied));
    }
}
[/java]
From now on, all method calls will be logged… All that without AOP!
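To see the mechanism in action outside an EJB container, here is a minimal standalone sketch. The Greeter interface is invented for illustration, and an inline handler (printing to System.out) stands in for GenericInvocationHandler so the example has no logging dependency:

```java
import java.lang.reflect.Proxy;

class ProxyDemo {
    // Toy interface standing in for MyServiceInterface (invented for this sketch)
    interface Greeter {
        String greet(String name);
    }

    public static void main(String[] args) {
        final Greeter real = name -> "Hello, " + name;
        // Same wiring as in the EJB constructor above, with an inline handler
        final Greeter proxied = (Greeter) Proxy.newProxyInstance(
                Greeter.class.getClassLoader(),
                new Class<?>[]{Greeter.class},
                (proxy, method, callArgs) -> {
                    System.out.println(">>> " + method.getName());
                    final Object answer = method.invoke(real, callArgs);
                    System.out.println("<<< " + answer);
                    return answer;
                });
        System.out.println(proxied.greet("Devoxx"));  // prints "Hello, Devoxx" after the two log lines
    }
}
```

Every call through `proxied` is intercepted and logged before being delegated to the real implementation, exactly the behavior the EJB gains above.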

PostHeaderIcon [DevoxxFR2012] DevOps: Extending Beyond Server Management to Everyday Workflows

Lecturer

Jérôme Bernard is Technical Director at StepInfo, with over a decade of Java development for banking, insurance, and open-source projects like Rio, Elastic Grid, Tapestry, MX4J, and XDoclet. Since 2008, he has focused on technology scouting and training organization, applying DevOps tooling in innovative, non-production contexts.

Abstract

This article scrutinizes Jérôme Bernard’s unconventional application of DevOps tools—Chef, VirtualBox, and Vagrant—for workstation automation and virtual environment provisioning, diverging from traditional server ops. It dissects strategies for Linux installations, disposable VMs for training, and rapid setup for development. Framed against DevOps’ cultural shift toward automation and collaboration, the analysis reviews configuration recipes, box definitions, and integration pipelines. Through demos and case studies, it evaluates efficiencies in resource allocation, reproducibility, and skill-building. Implications highlight DevOps’ versatility for desktop ecosystems, reducing setup friction and enabling scalable learning infrastructures, with updates reflecting 2025 advancements like enhanced Windows support.

Rethinking DevOps: From Servers to Workstations

DevOps transcends infrastructure; Jérôme posits it as a philosophy automating any repeatable task, here targeting workstation prep for training and dev. Traditional views confine it to CI/CD for servers, but he advocates repurposing for desktops—installing OSes, tools, and configs in minutes versus hours.

Context: StepInfo’s training demands identical environments across sites, combating “it works on my machine” woes. Tools like Chef (configuration management), VirtualBox (virtualization), and Vagrant (VM orchestration) converge: Chef recipes define states idempotently, VirtualBox hosts hypervisors, Vagrant scripts provisioning.

Benefits: Reproducibility ensures consistency; disposability mitigates drift. In 2025, Vagrant’s 2.4 release bolsters multi-provider support (e.g., Hyper-V), while Chef’s 19.x enhances policyfiles for secure, auditable configs—vital for compliance-heavy sectors.

Automating Linux Installations: Recipes for Consistency

Core: Chef Solo for standalone configs. Jérôme demos a base recipe installing JDK, Maven, Git:

package 'openjdk-11-jdk' do
  action :install
end

package 'maven' do
  action :install
end

directory '/opt/tools' do
  owner 'vagrant'
  group 'vagrant'
  mode '0755'
end

Run via chef-solo -r cookbook_url -o "recipe[base]" (quoting the run list so the shell does not expand the brackets). Idempotency means repeated runs apply only the changes still needed, preventing over-provisioning.

Extensions: Roles aggregate recipes (e.g., “java-dev” includes JDK, IDE). Attributes customize (e.g., JAVA_HOME). For training, add user accounts, desktops.

2025 update: Chef’s InSpec integration verifies compliance—e.g., audit JDK version—aligning with zero-trust models. Jérôme’s approach scales to fleets, prepping 50 machines identically.

Harnessing Virtual Machines: Disposable and Pre-Configured Environments

VirtualBox provides isolation; Vagrant abstracts it. A Vagrantfile defines boxes:

Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/jammy64"
  config.vm.provider "virtualbox" do |vb|
    vb.memory = "2048"
  end
  config.vm.provision "chef_solo" do |chef|
    chef.cookbooks_path = "cookbooks"
    chef.add_recipe "base"
  end
end

vagrant up spins up the VMs; vagrant destroy discards them. For training: share Vagrantfiles via Git, and students run vagrant up for instant labs.

Pre-config: Bake golden images with Packer, integrating Chef for baked-in states. Jérôme’s workflow: Nightly builds validate boxes, ensuring JDK 21 compatibility.

In 2025, Vagrant’s cloud integration (e.g., AWS Lightsail) extends to hybrid setups, while VirtualBox 7.1’s Wayland support aids Linux GUIs—crucial for dev tools like IntelliJ.

Integrating Chef, VirtualBox, and Vagrant: A Synergistic Pipeline

Synergy: Vagrant invokes Chef for provisioning, VirtualBox as backend. Jérôme’s pipeline: Git repo holds Vagrantfiles/recipes; Jenkins triggers vagrant up on commits, testing via Vagrant plugins.

Advanced: Multi-VM setups simulate clusters—e.g., one for app server, one for DB. Plugins like vagrant-vbguest auto-install guest additions.

Case: Training VM with Eclipse, Tomcat, sample apps—vagrant ssh accesses, vagrant halt pauses. For dev: Branch-specific boxes via VAGRANT_VAGRANTFILE=dev/Vagrantfile vagrant up.

2025 enhancements: Chef’s push jobs enable real-time orchestration; Vagrant’s 2.5 beta supports WSL2 for Windows devs, blurring host/guest lines.

Case Studies: Training and Development Transformations

StepInfo’s rollout: 100+ VMs for Java courses, cutting prep from days to minutes. Feedback: Trainees focus on coding, not setup; instructors iterate recipes post-session.

Dev extension: Per-branch environments—git checkout feature; vagrant up yields isolated sandboxes. Metrics: 80% setup time reduction, 50% fewer support tickets.

Broader: QA teams provision test beds; sales demos standardized stacks. Challenges: Network bridging for multi-VM comms, resolved via private networks.

Future Directions: Evolving DevOps Horizons

Jérôme envisions “Continuous VM Integration”—Jenkins-orchestrated nightly validations, preempting drifts like JDK incompatibilities. Windows progress: Vagrant 2.4’s WinRM, Chef’s Windows cookbooks for .NET/Java hybrids.

Emerging: Kubernetes minikube for containerized VMs, integrating with GitOps. At StepInfo, pilots blend Vagrant with Terraform for infra-as-code in training clouds.

Implications: DevOps ubiquity fosters agility beyond ops—empowering educators, devs alike. In 2025’s hybrid work, disposable VMs combat device heterogeneity, ensuring equitable access.

Jérôme’s paradigm shift reveals DevOps as universal automation, transforming mundane tasks into streamlined symphonies.


PostHeaderIcon [DevoxxFR2012] Drawing a Language: An Exploration of Xtext for Domain-Specific Languages

Lecturer

Jeff Maury is an experienced product manager at Red Hat, specializing in Java technologies for large-scale systems. Previously, as Java Offer Manager at Syspertec, he architected solutions integrating open systems like Java and .NET. Co-founder of SCORT, a firm focused on enterprise system integration, Jeff has leveraged Xtext to develop advanced development tools, providing hands-on insights into DSL ecosystems. An active contributor to Java communities, he shares expertise through conferences and practical implementations.

Abstract

This article analyzes Jeff Maury’s introduction to Xtext, Eclipse’s framework for crafting domain-specific languages (DSLs), structured across theoretical underpinnings, real-world applications, and hands-on development. It dissects Xtext’s grammar definition, model generation, and editor integration, emphasizing its role in bridging business concepts with executable code. Contextualized within the rise of model-driven engineering, the discussion evaluates Xtext’s components—lexer, parser, and scoping—for enabling concise, domain-tailored notations. Through the IzPack editor example, it assesses methodologies for validation, refactoring, and Java interoperability. Implications span productivity gains in specialized tools, reduced cognitive load for non-programmers, and ecosystem extensions via EMF, positioning Xtext as a versatile asset for modern software engineering.

Theoretical Foundations: Components and DSL Challenges

Domain-specific languages address the gap between abstract business requirements and general-purpose programming, allowing experts to articulate solutions in familiar terms. Jeff frames DSLs as targeted notations that encapsulate business-domain concepts, fostering adoption by broadening accessibility beyond elite coders. Challenges include syntax design for intuitiveness, semantic validation, and tooling for editing—areas where traditional languages falter due to verbosity and rigidity.

Xtext resolves these by generating complete language infrastructures from a declarative grammar. At its core, the grammar file (.xtext) defines rules akin to EBNF, specifying terminals (e.g., keywords, IDs) and non-terminals (e.g., rules for structures). The lexer tokenizes input, while the parser constructs an abstract syntax tree (AST) via ANTLR integration, ensuring robustness against ambiguities.

Model generation leverages Eclipse Modeling Framework (EMF), transforming the grammar into Ecore metamodels—classes representing language elements with attributes, references, and containment hierarchies. Scoping rules dictate name resolution, preventing dangling references, while validation services enforce constraints like type safety. Jeff illustrates with a simple grammar for a configuration DSL:

grammar org.example.ConfigDSL with org.eclipse.xtext.common.Terminals

generate configDSL "http://www.example.org/configdsl"

Config: elements+=Element*;

Element: 'define' name=ID '{'
    properties+=Property*
'}';

Property: key=ID '=' value=STRING;

This yields EMF classes: Config (container for Elements), Element (with name and properties), and Property (key-value pairs). Such modularity enables incremental evolution, where grammar tweaks propagate to editors and validators automatically.
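For concreteness, a document conforming to this grammar might look like the following (the block names and property keys are invented sample content):

```
define server {
    host = "localhost"
    port = "8080"
}

define logging {
    level = "DEBUG"
}
```

Parsing it yields a Config instance containing two Element objects, each holding its Property key/value pairs as EMF model objects.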

Theoretical strengths lie in its declarative paradigm: Developers focus on semantics rather than boilerplate, accelerating prototyping. However, Jeff cautions on over-abstraction—DSLs risk becoming mini-general-purpose languages if scopes broaden, diluting specificity. Integration with Xbase extends expressions with Java-like constructs, blending DSL purity with computational power.

Business Applications: Real-World Deployments and Value Propositions

Beyond academia, Xtext powers production tools, democratizing complex domains. Jeff cites enterprise modeling languages for finance, where DSLs express trading rules sans procedural code, slashing error rates. In automotive, it crafts simulation scripts, aligning engineer notations with executable models.

A compelling case is workflow DSLs in BPM, where Xtext-generated editors visualize processes, integrating with Activiti or jBPM. Business analysts author flows textually, with auto-completion and hyperlinking to assets, enhancing traceability. Healthcare examples include protocol DSLs for patient data flows, ensuring compliance via built-in validators.

Value accrues through reduced onboarding: Non-technical stakeholders contribute via intuitive syntax, while developers embed DSLs in IDEs for seamless handoffs. Jeff notes scalability—Xtext supports incremental parsing for large files, vital in log analysis DSLs processing gigabytes.

Monetization emerges via plugins: Commercial tools like itemis CREATE extend Xtext for automotive standards (e.g., AUTOSAR). Open-source adoptions, such as Sirius for graphical DSLs, amplify reach. Challenges include learning curves for grammar tuning and EMF familiarity, but Jeff advocates starting small—prototype a config DSL before scaling.

In 2025, Xtext remains Eclipse’s cornerstone, with version 2.36 (March 2025) enhancing LSP integration for VS Code, broadening beyond Eclipse. This evolution sustains relevance amid rising polyglot tooling.

Practical Implementation: Building an IzPack Editor with Java Synergies

Hands-on, Jeff demonstrates Xtext’s prowess via an IzPack DSL editor—a packaging tool for Java apps. IzPack traditionally uses XML; the DSL abstracts to human-readable syntax like “install ‘app.jar’ into ‘/opt/app’ with variables {version: ‘1.0’}.”

Grammar evolution: Start with basics (packs, filesets), add cross-references for variables, and validators for conflicts (e.g., duplicate paths). Generated editor features syntax highlighting, outlining, and quick fixes—e.g., auto-importing unresolved types.

EMF integration shines in serialization: Parse DSL to IzPack model, then generate XML or JARs via Java services. Jeff shows a runtime module injecting custom validators:

public class IzPackRuntimeModule extends AbstractIzPackRuntimeModule {
    @Override
    public Class<? extends IValidator> bindIValidator() {
        return IzPackValidator.class;
    }
}

Java linkage via Xtend—Xtext’s concise dialect—simplifies services:

def void updateCategory(Element elem, String newCat) {
    elem.category = newCat
    elem.eAllContents.filter(Element).forEach[ it.category = newCat ]
    // Trigger listeners
    elem.eSet(elem.eClass.getEStructuralFeature('category'), newCat)
}

This propagates changes, demonstrating EMF’s notification system. Refactoring renames propagate via index, while content assist suggests variables.

Deployment: Export as Eclipse plugin or standalone via Eclipse Theia. Jeff’s GitHub repo (github.com/jeffmaury/izpack-dsl) hosts the example, inviting forks.

Implications: Such editors cut packaging time 70%, per Jeff’s Syspertec experience. For Java devs, Xtext lowers DSL barriers, fostering hybrid tools—textual DSLs driving codegen. In 2025, LSP support enables polyglot editors, aligning with microservices’ domain modeling needs.

Xtext’s trifecta—theory, application, practice—empowers tailored languages, enhancing expressiveness without sacrificing toolability.


PostHeaderIcon [DevoxxFR2012] Android Lifecycle Mastery: Advanced Techniques for Services, Providers, and Optimization

Lecturer

Mathias Seguy founded Android2EE, specializing in Android training, expertise, and consulting. Holding a PhD in Fundamental Mathematics and an engineering degree from ENSEEIHT, he transitioned from critical J2EE projects—serving as technical expert, manager, project leader, and technical director—to focus on Android. Mathias authored multiple books on Android development, available via Android2ee.com, and contributes articles to Developpez.com.

Abstract

This article explores Mathias Seguy’s in-depth coverage of Android’s advanced components, focusing on service modes, content provider implementations, and optimization strategies. It examines unbound/bound services, URI-based data operations, and tools like Hierarchy Viewer for performance tuning. Within Android’s multitasking framework, the analysis reviews methodologies for lifecycle alignment, asynchronous execution, and resource handling. Through practical code and debugging insights, it evaluates impacts on battery efficiency, data security, and UI responsiveness. This segment underscores patterns for robust architectures, aiding developers in crafting seamless, power-efficient mobile experiences.

Differentiating Service Modes and Lifecycle Integration

Services bifurcate into unbound (autonomous post-start) and bound (interactive via binding). Mathias illustrates unbound for ongoing tasks like music playback:

startService(new Intent(this, MyService.class));

Bound for client-service dialogue:

private ServiceConnection connection = new ServiceConnection() {
    public void onServiceConnected(ComponentName name, IBinder service) {
        myService = ((MyBinder) service).getService();
    }
    public void onServiceDisconnected(ComponentName name) {
        myService = null;
    }
};
bindService(new Intent(this, MyBoundService.class), connection, BIND_AUTO_CREATE);

Lifecycle syncing uses flags: isRunning/isPaused toggle with onStartCommand()/onDestroy(), ensuring tasks halt on service termination, averting leaks.
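The flag pattern Mathias describes can be sketched framework-free: a volatile flag that the lifecycle callbacks would toggle, checked by the background loop so work halts when the service is destroyed. The class and method names below are illustrative, not Android APIs:

```java
import java.util.concurrent.atomic.AtomicInteger;

final class FlaggedWorker {
    // volatile so the worker thread sees updates made from lifecycle callbacks
    private volatile boolean isRunning = false;
    final AtomicInteger ticks = new AtomicInteger();

    // would be called from onStartCommand()
    void start() {
        isRunning = true;
        new Thread(() -> {
            while (isRunning) {            // halts promptly once stop() flips the flag
                ticks.incrementAndGet();   // stands in for the service's periodic work
                try {
                    Thread.sleep(10);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
        }).start();
    }

    // would be called from onDestroy()
    void stop() {
        isRunning = false;
    }
}
```

The key point is the volatile flag: without it, the worker thread may never observe the change made by the lifecycle callback, and the background task leaks past onDestroy().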

Constructing Efficient Content Providers

Providers facilitate inter-app data exchange via URIs. Define via extension, with UriMatcher for parsing:

private static final UriMatcher matcher = new UriMatcher(UriMatcher.NO_MATCH);
static {
    matcher.addURI(AUTHORITY, TABLE, COLLECTION);
    matcher.addURI(AUTHORITY, TABLE + "/#", ITEM);
}

Implement insert():

@Override
public Uri insert(Uri uri, ContentValues values) {
    long rowId = db.insert(DBHelper.MY_TABLE, null, values);
    if (rowId > 0) {
        Uri result = ContentUris.withAppendedId(CONTENT_URI, rowId);
        getContext().getContentResolver().notifyChange(result, null);
        return result;
    }
    throw new SQLException("Failed to insert row into " + uri);
}

Manifest exposure with authorities/permissions secures access.

Asynchronous Enhancements and Resource Strategies

AsyncTasks and Handlers offload work from the UI thread: extend AsyncTask, do the heavy lifting in doInBackground(), and apply UI updates in onPostExecute().
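AsyncTask itself requires the Android framework, but the shape it encodes—run work on a background thread, then hand the result to a callback—can be sketched framework-free with java.util.concurrent. The names below are illustrative, not Android APIs:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.Consumer;
import java.util.function.Supplier;

final class BackgroundTask {
    private static final ExecutorService BACKGROUND = Executors.newSingleThreadExecutor();

    // Mirrors AsyncTask's split: doInBackground runs off the caller's thread,
    // onPostExecute receives its result when done.
    static <T> void run(Supplier<T> doInBackground, Consumer<T> onPostExecute) {
        BACKGROUND.submit(() -> {
            T result = doInBackground.get();   // long-running work
            onPostExecute.accept(result);      // publish the result
        });
    }
}
```

On Android, onPostExecute is marshalled back to the main thread (via a Handler); in this sketch the callback simply runs on the worker, which is enough to show the pattern.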

Resource qualifiers adapt to locales/densities: values-fr/strings.xml for French.

Databases: SQLiteOpenHelper with onCreate() for schema.

Debugging and Performance Tools

Hierarchy Viewer inspects UI hierarchies, identifying overdraws. DDMS monitors threads, heaps; LogCat filters logs.

Permissions: Declare in manifest for features like internet.

Architectural Patterns for Resilience

Retain threads across rotations; synchronize for integrity.

Implications: These techniques optimize for constraints, enhancing longevity and usability in diverse hardware landscapes.

Mathias’s guidance refines development, promoting sustainable mobile solutions.


PostHeaderIcon (long tweet) “Hello world” with JSF 2.0, Tomcat 7 and IntelliJ IDEA 12 EAP “Leda”

(disclaimer: this “long tweet” has absolutely no interest, except playing with Tomcat 7 and “Leda”)

Friday is technical scouting… Today it will be a “Hello World!”.

  • Get the code and project available on this page: http://www.mkyong.com/jsf2/jsf-2-0-hello-world-example/
  • Get and install Tomcat 7 (very nice new home page)
  • Get and install IntelliJ IDEA 12 EAP “Leda”
  • Launch IDEA
  • Create a new project on existing sources, select the pom.xml of the project above.
  • Build with Maven
  • In IDEA:
    • Run Configuration
    • Tomcat Server
    • Startup Page: http://localhost:8080/
    • Deployment
    • Add
    • Artifact…
    • select the WAR
    • OK
This launches Tomcat 7. You can then connect to http://localhost:8080/

Actually, I’m sure you can deploy with the Tomcat Maven plugin, i.e. without an actual Tomcat install.