
Archive for the ‘General’ Category

[DevoxxFR2013] Clean JavaScript? Challenge Accepted: Strategies for Maintainable Large-Scale Applications

Lecturer

Romain Linsolas is a Java developer with over two decades of experience, passionate about technical innovation. He has worked at the CNRS on an astrophysics project, as a consultant at Valtech, and as a technical leader at Société Générale. Romain is actively involved in the developpez.com community as a writer and moderator, and he focuses on continuous integration principles to automate and improve team processes. Julien Jakubowski is a consultant and lead developer at OCTO Technology, with a decade of experience helping teams deliver high-quality software efficiently. He co-founded the Ch’ti JUG in Lille and has organized the Agile Tour Lille for two years.

Abstract

This article analyzes Romain Linsolas and Julien Jakubowski’s exploration of evolving JavaScript from rudimentary scripting to robust, large-scale application development. By dissecting historical pitfalls and modern solutions, the discussion evaluates architectural patterns, testing frameworks, and automation tools that enable clean, maintainable code. Contextualized within the shift from server-heavy Java applications to client-side dynamism, the analysis assesses methodologies for avoiding common errors, implications for developer productivity, and challenges in integrating diverse ecosystems. Through practical examples, it illustrates how JavaScript can support complex projects without compromising quality.

Historical Pitfalls and the Evolution of JavaScript Practices

JavaScript’s journey from a supplementary tool in the early 2000s to a cornerstone of modern web applications reflects broader shifts in user expectations and technology. Initially, developers like Romain and Julien used JavaScript for minor enhancements, such as form validations or visual effects, within predominantly Java-based server-side architectures. A typical 2003 example involved inline scripts to check input fields, turning them red on errors and preventing form submission. However, this approach harbored flaws: global namespace pollution from duplicated function names across files, implicit type coercions leading to unexpected concatenations instead of additions (e.g., “100” + 0.19 yielding “1000.19”), and public access to supposedly private variables, breaking encapsulation.
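The coercion pitfall is easy to reproduce in a few lines (a minimal sketch; the variable names are illustrative):

```javascript
// A form field always yields a string; `+` then concatenates instead of adding.
var price = "100";                // e.g. read from an <input> element
var withTax = price + 0.19;       // "1000.19" — string concatenation, not addition
var fixed = Number(price) + 0.19; // parse first, then add: ~100.19
```

Parsing explicitly with Number or parseFloat before doing arithmetic avoids the silent concatenation.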

These issues stem from JavaScript’s design quirks, often labeled “dirty” due to surprising behaviors like empty array additions resulting in strings or NaN (Not a Number). Romain’s demonstrations, inspired by Gary Bernhardt’s critiques, highlight arithmetic anomalies where [] + {} equals “[object Object]” but {} + [] yields 0. Such inconsistencies, while entertaining, pose real risks in production code, as seen in scope leakage where loop variables overwrite each other, printing values only 10 times instead of 100.
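The scope-leakage bug is equally reproducible: `var` is function-scoped, so two nested loops that reuse the same name share one counter (the block-scoped `let` fix shown for contrast is, of course, a later ES6 addition):

```javascript
// With `var`, both loops share a single function-scoped `i`:
// the inner loop drives it to 10, so the body runs 10 times, not 100.
function countWithVar() {
  var total = 0;
  for (var i = 0; i < 10; i++) {
    for (var i = 0; i < 10; i++) { // re-declares the SAME i
      total++;
    }
  }
  return total; // 10
}

// Distinct names (or ES6 `let`, giving each loop its own binding) restore 100.
function countFixed() {
  var total = 0;
  for (var i = 0; i < 10; i++) {
    for (var j = 0; j < 10; j++) {
      total++;
    }
  }
  return total; // 100
}
```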

The proliferation of JavaScript-driven applications, fueled by innovations from Gmail and Google Docs, necessitated more code—potentially 100,000 lines—demanding structured approaches. Early reliance on frameworks like Struts for server logic gave way to client-side demands for offline functionality and instant responsiveness, compelling developers to confront JavaScript’s limitations head-on.

Architectural Patterns for Scalable Code

To tame JavaScript’s chaos, modular architectures inspired by Model-View-Controller (MVC) patterns emerge as key. Frameworks like Backbone.js, AngularJS, and Ember.js facilitate separation of concerns: models handle data, views manage UI, and controllers orchestrate logic. For instance, in a beer store application, an MVC setup might use Backbone to define a Beer model with validation, a BeerView for rendering, and a controller to handle additions.

Modularization via patterns like the Module Pattern encapsulates code, preventing global pollution. A counter example encapsulates a private variable:

var Counter = (function() {
    var privateCounter = 0;
    function changeBy(val) {
        privateCounter += val;
    }
    return {
        increment: function() {
            changeBy(1);
        },
        value: function() {
            return privateCounter;
        }
    };
})();

This ensures privacy, unlike direct access in naive implementations. Advanced libraries like RequireJS implement Asynchronous Module Definition (AMD), loading dependencies on demand to avoid conflicts.
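Exercised from the outside, the module exposes only its returned API; the closed-over variable is unreachable (the snippet repeats the Counter definition so it is self-contained):

```javascript
var Counter = (function() {
  var privateCounter = 0;
  function changeBy(val) { privateCounter += val; }
  return {
    increment: function() { changeBy(1); },
    value: function() { return privateCounter; }
  };
})();

Counter.increment();
Counter.increment();
Counter.value();        // 2
Counter.privateCounter; // undefined — the closure keeps it private
```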

Expressivity is boosted by frameworks like CoffeeScript, which compiles to JavaScript with cleaner syntax, or Underscore.js for functional utilities. Julien’s analogy to appreciating pungent cheese after initial aversion captures the learning curve: mastering these tools reveals JavaScript’s elegance.

Testing and Automation for Reliability

Unit testing, absent in early practices, is now feasible with frameworks like Jasmine, adopting Behavior-Driven Development (BDD). Specs describe behaviors clearly:

describe("Beer addition", function() {
    it("should add a beer with valid name", function() {
        var beer = new Beer({name: "IPA"});
        expect(beer.isValid()).toBe(true);
    });
});

Tools like Karma run tests in real browsers, while Istanbul measures coverage. Automation integrates via Maven, Jenkins, or SonarQube, mirroring Java workflows. Violations from JSLint or compilation errors from Google Closure Compiler are flagged, ensuring syntax integrity.

Yeoman, combining Yo (scaffolding), Grunt (task running), and Bower (dependency management), streamlines setup. IDEs like IntelliJ or WebStorm provide seamless support, with Chrome DevTools for debugging.

Ongoing Challenges and Future Implications

Despite advancements, integration remains complex: combining MVC frameworks with testing suites requires careful orchestration, often involving custom recipes. Perennial concerns include framework longevity—Angular vs. Backbone—and team upskilling, demanding substantial training investments.

The implications are profound: clean JavaScript enables scalable, responsive applications, bridging Java developers into full-stack roles. By avoiding pitfalls through patterns and tools, projects achieve maintainability, reducing long-term costs. However, the ecosystem’s youth demands vigilance, as rapid evolutions could obsolete choices.

In conclusion, JavaScript’s transformation empowers developers to tackle ambitious projects confidently, blending familiarity with innovation for superior outcomes.


MultiException[java.lang.RuntimeException: Error scanning file]

Case

I run a project with JSF 2 / PrimeFaces 5 (by the way: it rocks!) / Spring 4 / Jetty 9 / Java 8. On startup, Jetty fails with:
[java]MultiException java.lang.RuntimeException: Error scanning file SummerBean.class, java.lang.RuntimeException: Error scanning entry …/SummerService.class from jar file:/…/spring-tier-1.0-SNAPSHOT.jar, java.lang.RuntimeException: Error scanning entry …/SummerServiceImpl.class from jar file:/…/spring-tier-1.0-SNAPSHOT.jar
at org.eclipse.jetty.annotations.AnnotationConfiguration.scanForAnnotations(AnnotationConfiguration.java:530)[/java]

Explanation

The error is caused by a version conflict between the ASM JARs on the classpath.

Fix

You have to override Jetty’s ASM dependencies.
In the Maven POM, amend the Jetty plugin to force the ASM versions:
[xml]<plugin>
<groupId>org.eclipse.jetty</groupId>
<artifactId>jetty-maven-plugin</artifactId>
<version>${jetty.version}</version>
<dependencies>
<dependency>
<groupId>org.ow2.asm</groupId>
<artifactId>asm</artifactId>
<version>5.0.2</version>
</dependency>
<dependency>
<groupId>org.ow2.asm</groupId>
<artifactId>asm-commons</artifactId>
<version>5.0.2</version>
</dependency>
</dependencies>
<!-- … -->
</plugin>
[/xml]

Then it should work 😉

[DevoxxBE2013] OpenShift Primer: Get Your Applications into the Cloud

Eric D. Schabell, JBoss technology evangelist at Red Hat, demystifies OpenShift, a PaaS revolutionizing cloud deployment for Java EE, PHP, Ruby, and beyond. Author of the OpenShift Primer e-book, Eric—drawing from his integration and BPM expertise—guides attendees through rapid app migration, showcasing portability without code rewrites. His action-packed session deploys a Java project in minutes, contrasting OpenShift’s ease with cumbersome VMs.

OpenShift’s open-source ethos, Eric argues, delivers developer freedom: Git-based workflows, auto-scaling gears, and cartridge-based runtimes. From free tiers to enterprise scalability, it transforms cloud adoption, with European data centers addressing latency and privacy concerns.

Demystifying PaaS and OpenShift Fundamentals

Eric contrasts IaaS’s VM drudgery with PaaS’s streamlined abstraction. OpenShift, atop Red Hat Enterprise Linux, provisions environments via cartridges—pre-configured stacks for languages like Java.

He demos creating an app: rhc app create, Git push, and instant deployment, emphasizing no vendor lock-in.

Rapid Deployment and Portability

Portability reigns: Eric deploys a legacy Java EE app unchanged, leveraging JBoss EAP cartridges. PHP/Ruby examples follow, highlighting multi-language support.

This agnosticism, Eric notes, preserves investments, scaling from localhost to cloud seamlessly.

Scaling, Monitoring, and Security

Auto-scaling gears adjust to loads, Eric illustrates, with hot-deploy for zero-downtime updates. Monitoring via console tracks metrics; security integrates LDAP and SSL.

For Europe, Irish data centers mitigate latency, with GDPR-compliant options addressing data sovereignty.

Why OpenShift? Open-Source Advantages

Eric’s pitch: unmatched ease, no code changes, and open-source values. Free tiers on AWS East Coast suit demos, with paid plans offering local regions like Ireland.

He invites booth chats, contrasting OpenShift’s speed with competitors’ rigidity.


[DevoxxFR2013] From Cloud Experimentation to On-Premises Maturity: Strategic Infrastructure Repatriation at Mappy

Lecturer

Cyril Morcrette serves as Technical Director at Mappy, a pioneering French provider of geographic and local commerce services with thirteen million euros in annual revenue and eighty employees. Under his leadership, Mappy has evolved from a traditional route planning service into a comprehensive platform integrating immersive street-level imagery, local business discovery, and personalized recommendations. His infrastructure strategy reflects deep experience with both cloud and on-premises environments, informed by multiple large-scale projects that pushed technological boundaries.

Abstract

Cloud computing excels at enabling rapid prototyping and handling uncertain demand, but its cost structure can become prohibitive as projects mature and usage patterns stabilize. This presentation chronicles Mappy’s journey with immersive geographic visualization — a direct competitor to Google Street View — from initial cloud deployment to eventual repatriation to on-premises infrastructure. Cyril Morcrette examines the economic, operational, and technical factors that drove this decision, providing a framework for evaluating infrastructure choices throughout the application lifecycle. Through detailed cost analysis, performance metrics, and migration case studies, he demonstrates that cloud is an ideal launch platform but often not the optimal long-term home for predictable, high-volume workloads. The session concludes with practical guidance for smooth repatriation and the broader implications for technology strategy in established organizations.

The Immersive Visualization Imperative

Mappy’s strategic pivot toward immersive geographic experiences required capabilities beyond traditional mapping: panoramic street-level imagery, 3D reconstruction, and real-time interaction. The project demanded massive storage (terabytes of high-resolution photos), significant compute for image processing, and low-latency delivery to users.

Initial estimates suggested explosive, unpredictable traffic growth. Marketing teams envisioned viral adoption, while technical teams worried about infrastructure bottlenecks. Procuring sufficient on-premises hardware would require months of lead time and capital approval — unacceptable for a market-moving initiative.

Amazon Web Services offered an immediate solution: spin up instances, store petabytes in S3, process imagery with EC2 spot instances. The cloud’s pay-as-you-go model eliminated upfront investment and provided virtually unlimited capacity.

Cloud-First Development: Speed and Agility

The project launched entirely in AWS. Development teams used EC2 for processing pipelines, S3 for raw and processed imagery, CloudFront for content delivery, and Elastic Load Balancing for web servers. Auto-scaling handled traffic spikes during marketing campaigns.

This environment enabled rapid iteration:
– Photographers uploaded imagery directly to S3 buckets
– Lambda functions triggered processing workflows
– Machine learning models (running on GPU instances) detected business facades and extracted metadata
– Processed panoramas were cached in CloudFront edge locations

Within months, Mappy delivered a functional immersive experience covering major French cities. The cloud’s flexibility absorbed the uncertainty of early adoption while development teams refined algorithms and user interfaces.

The Economics of Maturity

As the product stabilized, usage patterns crystallized. Daily active users grew steadily but predictably. Storage requirements, while large, increased linearly. Processing workloads became batch-oriented rather than real-time.

Cost analysis revealed a stark reality: cloud expenses were dominated by data egress, storage, and compute hours — all now predictable and substantial. Mappy’s existing data center, built for core mapping services, had significant spare capacity with fully amortized hardware.

Cyril presents the tipping point calculation:
Cloud monthly cost: €45,000 (storage, compute, bandwidth)
On-premises equivalent: €12,000 (electricity, maintenance, depreciation)
Break-even: four months
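The arithmetic behind the tipping point can be made explicit (a sketch; the one-time migration cost is a hypothetical figure chosen to match the stated four-month break-even, not a number from the talk):

```javascript
var cloudMonthly  = 45000;  // EUR/month on AWS
var onPremMonthly = 12000;  // EUR/month in Mappy's own data center
var migrationCost = 132000; // EUR, hypothetical one-time migration effort

var monthlySavings  = cloudMonthly - onPremMonthly;              // 33,000 EUR
var breakEvenMonths = Math.ceil(migrationCost / monthlySavings); // 4 months
```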

The decision to repatriate was driven by simple arithmetic, but execution required careful planning.

Repatriation Strategy and Execution

The migration followed a phased approach:

  1. Data Transfer: Used AWS Snowball devices to move petabytes of imagery back to on-premises storage. Parallel uploads leveraged Mappy’s high-bandwidth connectivity.

  2. Processing Pipeline: Reimplemented image processing workflows on internal GPU clusters. Custom scripts replaced Lambda functions, achieving equivalent throughput at lower cost.

  3. Web Tier: Deployed Nginx and Varnish caches on existing web servers. CDN integration with Akamai preserved low-latency delivery.

  4. Monitoring and Automation: Migrated CloudWatch metrics to Prometheus/Grafana. Ansible playbooks replaced CloudFormation templates.

Performance remained comparable: page load times stayed under two seconds, and system availability exceeded 99.95%. The primary difference was cost — reduced by seventy-five percent.

Operational Benefits of On-Premises Control

Beyond economics, repatriation delivered strategic advantages:
  • Data Sovereignty: Full control over sensitive geographic imagery
  • Performance Predictability: Eliminated cloud provider throttling risks
  • Integration Synergies: Shared infrastructure with core mapping services reduced operational complexity
  • Skill Leverage: Existing systems administration expertise applied directly

Cyril notes that while cloud elasticity was lost, the workload’s maturity rendered it unnecessary. Capacity planning became straightforward, with hardware refresh cycles aligned to multi-year budgets.

Lessons for Infrastructure Strategy

Mappy’s experience yields a generalizable framework:
1. Use cloud for uncertainty: Prototyping, viral growth potential, or seasonal spikes
2. Monitor cost drivers: Storage, egress, compute hours
3. Model total cost of ownership: Include migration effort and operational overhead
4. Plan repatriation paths: Design applications with infrastructure abstraction
5. Maintain hybrid capability: Keep cloud skills current for future needs

The cloud is not a destination but a tool — powerful for certain phases, less optimal for others.

Conclusion: Right-Sizing Infrastructure for Business Reality

Mappy’s journey from cloud experimentation to on-premises efficiency demonstrates that infrastructure decisions must evolve with product maturity. The cloud enabled rapid innovation and market entry, but long-term economics favored internal hosting for stable, high-volume workloads. Cyril’s analysis provides a blueprint for technology leaders to align infrastructure with business lifecycle stages, avoiding the trap of cloud religion or on-premises dogma. The optimal stack combines both environments strategically, using each where it delivers maximum value.


[DevoxxFR2013] Developing Modern Web Apps with Backbone.js: A Live-Coded Journey from Empty Directory to Production-Ready SPA

Lecturer

Sylvain Zimmer represents the rare fusion of hacker spirit and entrepreneurial vision. In 2004, he launched Jamendo, which grew into the world’s largest platform for Creative Commons-licensed music, proving that open content could sustain a viable business model and empower artists globally. He co-founded Joshfire, a Paris-based agency specializing in connected devices and IoT solutions, and TEDxParis, democratizing access to transformative ideas. His competitive prowess shone in 2011 when his team won the Node Knockout competition in the Completeness category with Chess@home — a fully distributed chess AI implemented entirely in JavaScript, showcasing the language’s maturity for complex, real-time systems. Recognized as one of the first Google Developer Experts for HTML5, Sylvain recently solved a cryptographically hidden equation embedded in a Chromebook advertisement, demonstrating his blend of technical depth and puzzle-solving acumen. His latest venture, Pressing, continues his pattern of building elegant, user-centric solutions that bridge technology and human needs.

Abstract

In this intensely practical, code-only presentation, Sylvain Zimmer constructs a fully functional single-page application using Backbone.js from an empty directory to a polished, interactive demo in under thirty minutes. He orchestrates a modern frontend toolchain including Yeoman for project scaffolding, Grunt for task automation, LiveReload for instantaneous feedback, RequireJS for modular dependency management, and a curated selection of Backbone extensions to address real-world complexity. The session is a masterclass in architectural decision-making, demonstrating how to structure code for maintainability, scalability, and testability while avoiding the pitfalls of framework bloat. Attendees witness the evolution of a simple task manager into a sophisticated, real-time collaborative application, learning not just Backbone’s core MVC patterns but the entire ecosystem of best practices that define professional frontend engineering in the modern web era.

The Modern Frontend Development Loop: Zero Friction from Code to Browser

Sylvain initiates the journey with yo backbone, instantly materializing a complete project structure:

app/
  scripts/
    models/
    collections/
    views/
    routers/
  styles/
  index.html
  Gruntfile.js

This scaffold is powered by Yeoman, which embeds Grunt as the task runner and LiveReload for automatic browser refresh. Every file save triggers a cascade of actions — CoffeeScript compilation, Sass preprocessing, JavaScript minification, and live injection into the browser — creating a development feedback loop with near-zero latency. This environment is not a convenience; it is a fundamental requirement for maintaining flow state and rapid iteration in modern web development.

Backbone Core Concepts: Models, Collections, Views, and Routers in Harmony

The application begins with a Task model that encapsulates state and behavior:

var Task = Backbone.Model.extend({
  defaults: {
    title: '',
    completed: false,
    priority: 'medium'
  },
  toggle: function() {
    this.save({ completed: !this.get('completed') });
  },
  validate: function(attrs) {
    if (!attrs.title.trim()) return "Title required";
  }
});

A TaskList collection manages persistence and business logic:

var TaskList = Backbone.Collection.extend({
  model: Task,
  localStorage: new Backbone.LocalStorage('tasks-backbone'),
  completed: function() { return this.where({completed: true}); },
  remaining: function() { return this.where({completed: false}); },
  comparator: 'priority'
});

The TaskView handles rendering and interaction using Underscore templates:

var TaskView = Backbone.View.extend({
  tagName: 'li',
  template: _.template($('#task-template').html()),
  events: {
    'click .toggle': 'toggleCompleted',
    'dblclick label': 'edit',
    'blur .edit': 'close',
    'keypress .edit': 'updateOnEnter'
  },
  initialize: function() {
    this.listenTo(this.model, 'change', this.render);
    this.listenTo(this.model, 'destroy', this.remove);
  },
  render: function() {
    this.$el.html(this.template(this.model.toJSON()));
    this.$el.toggleClass('completed', this.model.get('completed'));
    return this;
  }
});

An AppRouter enables clean URLs and state management:

var AppRouter = Backbone.Router.extend({
  routes: {
    '': 'index',
    'tasks/:id': 'show',
    'filter/:status': 'filter'
  },
  index: function() { /* render all tasks */ },
  filter: function(status) { /* update collection filter */ }
});

RequireJS: Enforcing Modularity and Asynchronous Loading Discipline

Global scope pollution is eradicated through RequireJS, configured in main.js:

require.config({
  paths: {
    'jquery': 'libs/jquery',
    'underscore': 'libs/underscore',
    'backbone': 'libs/backbone',
    'localstorage': 'libs/backbone.localStorage'
  },
  shim: {
    'underscore': { exports: '_' },
    'backbone': { deps: ['underscore', 'jquery'], exports: 'Backbone' }
  }
});

Modules are defined with explicit dependencies:

define(['views/task', 'collections/tasks'], function(TaskView, taskList) {
  return new TaskView({ collection: taskList });
});

This pattern ensures lazy loading, parallel downloads, and clear dependency graphs, critical for performance in large applications.

Backbone Extensions: Scaling from Prototype to Enterprise with Targeted Plugins

Backbone’s minimalism is a feature, not a limitation. Sylvain integrates extensions judiciously:

  • Backbone.LayoutManager: Manages nested views and layout templates, preventing memory leaks
  • Backbone.Paginator: Implements infinite scrolling with server or client pagination
  • Backbone.Relational: Handles one-to-many and many-to-many relationships with cascading saves
  • Backbone.Validation: Enforces model constraints with customizable error messages
  • Backbone.Stickit: Provides declarative two-way data binding for forms
  • Backbone.IOBind: Synchronizes models in real-time via Socket.IO

He demonstrates a live collaboration feature: when one user completes a task, a WebSocket event triggers an immediate UI update for all connected clients, showcasing real-time capabilities without server polling.

Architectural Best Practices: Building for the Long Term

The final application adheres to rigorous principles:

  • Single responsibility principle: Each view manages exactly one DOM element
  • Event-driven architecture: No direct DOM manipulation outside views
  • Separation of concerns: Models handle business logic, views handle presentation
  • Testability: Components are framework-agnostic and unit-testable with Jasmine or Mocha
  • Progressive enhancement: Core functionality works without JavaScript

Sylvain stresses that Backbone is a foundation, not a monolith — choose extensions based on specific needs, not trends.

Ecosystem and Learning Resources

He recommends Addy Osmani’s Backbone Fundamentals as the definitive free guide, the official Backbone.js documentation for reference, and GitHub for discovering community plugins. Tools like Marionette.js (application framework) and Thorax (Handlebars integration) are highlighted for larger projects.

The Broader Implications: Backbone in the Modern Frontend Landscape

While newer frameworks like Angular and React dominate headlines, Backbone remains relevant for its predictability, flexibility, and small footprint. It teaches fundamental MVC patterns that translate to any framework. Sylvain positions it as ideal for teams needing fine-grained control, gradual adoption, or integration with legacy systems.

Conclusion: From Demo to Deployable Reality

In under thirty minutes, Sylvain has built a production-ready SPA with real-time collaboration, offline storage, and modular architecture. He challenges attendees to fork the code, extend it, and ship something real. The tools are accessible, the patterns are proven, and the only barrier is action.


[DevoxxBE2013] Introducing Vert.x 2.0: Taking Polyglot Application Development to the Next Level

Tim Fox, the visionary project lead for Vert.x at Red Hat, charts the course of this lightweight, high-performance application platform for the JVM. With a storied tenure at JBoss and VMware—where he spearheaded HornetQ messaging and RabbitMQ integrations—Tim unveils Vert.x 2.0’s maturation into an independent powerhouse. His session delves into the revamped module system, Maven/Bintray reusability, and enhanced build tool/IDE synergy, alongside previews of Scala, Clojure support, and Node.js compatibility.

Vert.x 2.0 empowers polyglot, reactive applications, blending asynchronous eventing with synchronous legacy APIs via worker verticles. Tim’s live demos illustrate deploying modules dynamically, underscoring Vert.x’s ecosystem for mobile, web, and enterprise scalability.

Core API Refinements and Asynchronous Foundations

Tim highlights Vert.x’s event-driven core, refined in 2.0 with intuitive APIs for non-JVM languages. He demonstrates verticles—lightweight actors—for handling requests asynchronously, avoiding blocking calls.

This reactive model, Tim explains, scales to thousands of connections, ideal for real-time web apps, contrasting traditional thread-per-request pitfalls.

Module System and Ecosystem Expansion

The new module system, Tim showcases, leverages Maven repositories for seamless dependency management. He deploys a web server via module names, pulling artifacts from Bintray—eliminating manual installations.

This reusability fosters a vibrant ecosystem, with core modules for HTTP, MySQL (via a reverse-engineered asynchronous driver), and more, enabling rapid composition.

Build Tool and IDE Integration

Vert.x 2.0’s Maven/Gradle plugins streamline development, as Tim demos: configure a pom.xml, run mvn vertx:run, and launch a cluster. IDE support, via plugins, offers hot-reloading and debugging.

These integrations, Tim notes, lower barriers, allowing developers to iterate swiftly without Vert.x-specific tooling.

Polyglot Horizons: Scala, Clojure, and Node.js

Tim previews Scala/Clojure bindings, enabling functional paradigms on Vert.x’s event bus. Node.js compatibility, via drop-in modules, bridges JavaScript ecosystems, allowing polyglot teams to collaborate seamlessly.

This inclusivity, Tim asserts, broadens Vert.x’s appeal, supporting diverse languages without sacrificing performance.

Worker Verticles for Legacy Compatibility

For synchronous APIs like JDBC, Tim introduces worker verticles—executing on thread pools to prevent blocking. He contrasts with pure async MySQL drivers, offering flexibility for hybrid applications.

This pragmatic bridge, Tim emphasizes, integrates existing Java libraries effortlessly.


SizeLimitExceededException: the request was rejected because its size (…) exceeds the configured maximum

Stacktrace

On deploying a WAR in Tomcat:
[java]org.apache.tomcat.util.http.fileupload.FileUploadBase$SizeLimitExceededException: the request was rejected because its size (128938160) exceeds the configured maximum (52428800)[/java]

Quick fix

Edit the file $CATALINA_HOME/webapps/manager/WEB-INF/web.xml
Replace the block
[xml] <multipart-config>
<!-- 50 MB max -->
<max-file-size>52428800</max-file-size>
<max-request-size>52428800</max-request-size>
<file-size-threshold>0</file-size-threshold>
</multipart-config> [/xml]

with:

[xml] <multipart-config>
<!-- 200 MB max -->
<max-file-size>209715200</max-file-size>
<max-request-size>209715200</max-request-size>
<file-size-threshold>0</file-size-threshold>
</multipart-config>
[/xml]

Mount a shared drive with VirtualBox

Case

You need to share some content between the host (e.g., Windows 7) and the guest (e.g., Linux Mint) systems with VirtualBox.

Solution

  • On the host, open VirtualBox, then: Machine > Settings > Shared Folders > on the right: Add; check Auto-mount and Make Permanent, and browse to the folder, let’s say D:\sharedFolder
  • Launch the VM.
  • Open a terminal in the guest system.
  • Add the user mint to the group vboxsf:

sudo gpasswd -a mint vboxsf

  • Create a “local” folder:

mkdir ~/sharedFolder

  • Mount the shared folder sharedFolder at ~/sharedFolder:

sudo mount -t vboxsf -o uid=1000,gid=1000 sharedFolder ~/sharedFolder

Devoxx Talk: “42 IntelliJ IDEA tips in 45 minutes”


JetBrains, the Russian company behind our favorite IDE, IntelliJ IDEA, was out in force at Devoxx this year. Besides a booth where you could chat with two of the IDE’s developers, JetBrains hosted two talks.

To be honest, I was somewhat reluctant to attend Hadi Hariri’s talk (@hhariri, blog): a similarly titled talk last year had left me wanting more, teaching me little. A colleague of Hadi’s convinced me to make the effort, and I was not disappointed.

Hadi presented several dozen tips and keyboard shortcuts. Here are a few that I learned during the talk:

  • Everyone knows the classic Ctrl+N to open a class. The trick, with Ctrl+N (and similarly with Ctrl+Shift+N), is to append “:<line number>” to the class name: it will open at that line. For example, Ctrl+N > CHM:12 opens ConcurrentHashMap at line 12.
  • Disable the navigation bar (View > Navigation Bar). You can then pop it up at any time with Ctrl+Home without leaving the editor window.
  • Ctrl+Shift+E restricts Ctrl+E (recent files) to the files recently edited.
  • Ctrl+Shift+F7 highlights, in the general case, the occurrences of a field, variable, or method. Applied to a return or a throw, it highlights all of the method’s exit points or all the places where the exception can be raised, respectively.
  • Shift+F4 detaches a window, making it float independently of the rest of the IDE.
  • Symmetrically to Ctrl+W, which extends the selection, Ctrl+Shift+W shrinks it.
  • Ctrl+Alt+Shift+J switches the editor to multi-caret mode, which lets you edit several lines at once, not necessarily in the same column. Column mode (Alt+Shift+Insert) thus turns out to be a restricted form of multi-caret mode.
  • In an HTML file, typing, say, table>tr>td makes IDEA expand it into <table><tr><td></td></tr></table> (this feature seems to come from the Zen Coding plugin, though).
  • Shift, Shift: the “ultimate shortcut”, searching across everything that exists in IDEA.

In conclusion, IntelliJ IDEA confirms its status as the Rolls-Royce of Java development tools, whose complete mastery takes years of practice and exploration.

Devoxx Talk: Introduction to Google Glass


I had the chance to attend the 2014 edition of Devoxx FR.
The first talk I attended was “Introduction to Google Glass” by Alain Régnier (@AltoLabs, +AlainRegnier); here is a summary:

Alain is part of the Google Glass Explorers program, i.e., the lucky few who were able to get their hands on a pair of Google Glass. In theory, only North American residents can join the program; nevertheless, the number of Google Glass circulating in France is estimated at between 30 and 50 pairs.

Halfway between Star Trek glasses and Dragon Ball scouters, Google Glass looks like a classic pair of glasses with one thicker temple and one lens topped with a prism and a mini webcam. Under the hood, this little marvel of technology packs numerous sensors and connectors: visual, audio, Bluetooth, Wi-Fi, and even infrared.

By default, Google Glass displays four types of content: text, images, video, and a limited version of HTML. It can be controlled in several ways: by voice (with the magic phrase “OK Glass!”), via a trackpad, via a web application (MyGlass Web), or via an Android app (MyGlass Mobile).
As a development aid, an Android Screen Monitor (a simple ADB client) can display on a PC screen what the Glass wearer sees: in other words, the webcam feed with the prism display overlaid on it.
As for development itself, three approaches are available:

  • Mirror API: the Glass talks to a Google-hosted server, which relays requests to your own server
  • GDK: a development kit similar to Android’s
  • WearScript: an unofficial library for programming the Glass in JavaScript

Alain gave a live demonstration of the Glass. Let’s admit it: it is stunning. As a developer, the prospects opened up by such a connected device are thrilling! The hardest part will be waiting for the Glass to become officially available over here in Europe…