Archive for the ‘General’ Category
Jonathan LALOU recommends… Sidney COHEN
I wrote the following recommendation on Sidney COHEN‘s profile on LinkedIn:
Sidney has a deep knowledge of the technologies he uses, such as Tibco systems. On several occasions, he was able to liaise with the key people on the right teams, take the right actions, and deliver the best solutions to various issues. Moreover, he has learnt complex Java technologies in a short time.
I recommend Sidney as a good worker and a friendly teammate.
Jonathan LALOU recommends… Yann BLAZART
I wrote the following recommendation on Yann BLAZART‘s profile on LinkedIn:
Yann has a wide knowledge of JEE technologies, specifications and implementations. He has shown his ability to propose and implement technical solutions to a variety of technical and functional issues. Besides, by helping his colleagues, he succeeded in raising the general level of the team. This is why I recommend Yann as a Java developer and architect.
[DevoxxFR 2016] The Blockchain in Detail
The blockchain emerged as a revolutionary technology, capturing significant attention with its potential to reshape industries and redefine trust. At Devoxx France 2016, Benoît Lafontaine and Yann Rouillard delivered a comprehensive university session delving into the intricacies of this much-hyped technology, moving beyond the buzzwords to explore its technical underpinnings, evolutions, and practical implications. Their detailed exposition covered the foundational principles of Bitcoin, the expanded capabilities introduced by platforms like Ethereum, the concept and implementation of smart contracts, various use cases, and the broader societal questions raised by distributed ledger technologies.
Demystifying the Blockchain: The Foundation of Bitcoin
To truly grasp the essence of blockchain, one must first understand its initial and most prominent implementation: Bitcoin. More than just a digital currency, Bitcoin introduced a novel distributed ledger technology that enables secure, transparent, and tamper-resistant record-keeping without relying on a central authority. The core of Bitcoin’s technical functioning lies in its chain of blocks. Each block contains a list of verified transactions, a timestamp, and a reference (cryptographic hash) to the preceding block, creating an immutable historical record.
The lifecycle of a Bitcoin transaction begins when a user initiates a transfer of value. This transaction is broadcast to the Bitcoin network. Nodes on the network validate the transaction based on a set of rules, ensuring the sender has sufficient funds and the transaction is correctly formatted. Once validated, the transaction is added to a pool of unconfirmed transactions.
The process of adding new blocks to the chain is handled by miners through a mechanism called Proof-of-Work (PoW). Miners compete to solve a complex computational puzzle, which essentially involves finding a number (a “nonce”) such that when added to the block data and hashed, the resulting hash meets certain criteria (e.g., starts with a specific number of zeros). This hashing process is computationally intensive but easy to verify. The first miner to find a valid nonce and create a new block broadcasts it to the network. Other nodes verify the block’s validity, including the PoW, and if correct, add it to their copy of the blockchain. This block then becomes the latest link in the chain.
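The nonce search described above can be sketched in a few lines of Java. This is a deliberately simplified toy (Bitcoin actually applies a double SHA-256 to a binary block header and compares it against a 256-bit target), and the block data below is invented, but the principle is the same: a valid nonce is expensive to find and trivial to verify.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

// Toy proof-of-work: find a nonce such that SHA-256(blockData + nonce)
// starts with a given number of hexadecimal zeros.
public class ToyProofOfWork {

    static String sha256Hex(String input) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            byte[] digest = md.digest(input.getBytes(StandardCharsets.UTF_8));
            StringBuilder sb = new StringBuilder();
            for (byte b : digest) sb.append(String.format("%02x", b));
            return sb.toString();
        } catch (Exception e) { throw new RuntimeException(e); }
    }

    // Brute-force nonce search; difficulty = number of leading hex zeros.
    static long mine(String blockData, int difficulty) {
        String prefix = "0".repeat(difficulty);
        for (long nonce = 0; ; nonce++) {
            if (sha256Hex(blockData + nonce).startsWith(prefix)) return nonce;
        }
    }

    public static void main(String[] args) {
        // The "block" links to its predecessor through the previous hash.
        String blockData = "prevHash=00ab41;tx=alice->bob:5;";
        long nonce = mine(blockData, 4); // ~65,000 hashes on average
        System.out.println("nonce = " + nonce);
        System.out.println("hash  = " + sha256Hex(blockData + nonce)); // starts with "0000"
    }
}
```

Raising the difficulty by one hex digit multiplies the expected work by 16, which is how the real network tunes block time, while verification always remains a single hash.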
Cryptography plays a vital role in ensuring the security and integrity of Bitcoin. Hashing algorithms (like SHA-256 used in Bitcoin) produce a unique fixed-size string (the hash) from input data. Even a minor change in the input data results in a completely different hash. This property is used to link blocks and verify data integrity. Digital signatures, based on public-key cryptography, are used to authorize transactions. Each user has a pair of keys: a private key (kept secret) used to sign a transaction and a public key (shared freely) used by others to verify the signature, ensuring that only the owner of the funds can authorize their transfer.
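The sign-with-private-key, verify-with-public-key flow can be demonstrated with the JDK's standard crypto APIs. Note the hedge: this uses the JDK's stock 256-bit EC curve, whereas Bitcoin uses secp256k1 (not exposed by the default JDK provider), so treat it as an illustration of the principle rather than Bitcoin's exact scheme; the transaction strings are made up.

```java
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.PrivateKey;
import java.security.PublicKey;
import java.security.Signature;

// Digital signatures: the private key signs, anyone with the public key verifies.
public class SignatureDemo {

    static KeyPair newKeyPair() {
        try {
            KeyPairGenerator kpg = KeyPairGenerator.getInstance("EC");
            kpg.initialize(256);
            return kpg.generateKeyPair();
        } catch (Exception e) { throw new RuntimeException(e); }
    }

    static byte[] sign(PrivateKey key, byte[] message) {
        try {
            Signature s = Signature.getInstance("SHA256withECDSA");
            s.initSign(key);
            s.update(message);
            return s.sign();
        } catch (Exception e) { throw new RuntimeException(e); }
    }

    static boolean verify(PublicKey key, byte[] message, byte[] signature) {
        try {
            Signature s = Signature.getInstance("SHA256withECDSA");
            s.initVerify(key);
            s.update(message);
            return s.verify(signature);
        } catch (Exception e) { throw new RuntimeException(e); }
    }

    public static void main(String[] args) {
        KeyPair keys = newKeyPair();
        byte[] tx = "alice pays bob 5".getBytes(StandardCharsets.UTF_8);
        byte[] sig = sign(keys.getPrivate(), tx);
        System.out.println("valid:    " + verify(keys.getPublic(), tx, sig)); // true

        // Any tampering with the transaction invalidates the signature:
        byte[] tampered = "alice pays bob 500".getBytes(StandardCharsets.UTF_8);
        System.out.println("tampered: " + verify(keys.getPublic(), tampered, sig)); // false
    }
}
```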
One of the fundamental challenges in a distributed system like Bitcoin is reaching consensus among participants on the correct state of the ledger, especially in the presence of potentially malicious actors. Proof-of-Work is Bitcoin’s consensus mechanism. By requiring significant computational effort to create a new block, it makes it economically infeasible for a malicious party to alter past transactions. To successfully tamper with a block, an attacker would need to redo the PoW for that block and all subsequent blocks faster than the rest of the network combined. This leads to the concept of the 51% attack: if an entity controls a majority of the network’s total mining power (more than 50%), it could potentially rewrite recent transaction history. However, the sheer scale of the Bitcoin network’s hashing power makes mounting such an attack incredibly difficult and prohibitively expensive.
Beyond Bitcoin: Exploring Ethereum and Smart Contracts
While Bitcoin demonstrated the power of a decentralized ledger for peer-to-peer currency transactions, its scripting language is intentionally limited, primarily designed for simple payment logic. The next wave of blockchain innovation arrived with platforms like Ethereum, which expanded the potential of blockchain technology far beyond just cryptocurrencies. Ethereum introduced the concept of a decentralized world computer capable of executing code, paving the way for a wide range of decentralized applications.
Ethereum’s core innovation is the Ethereum Virtual Machine (EVM), a Turing-complete virtual machine that can execute code deployed on the Ethereum blockchain. This code comes in the form of smart contracts. A smart contract is essentially a program stored on the blockchain that automatically executes predefined actions when specific conditions are met. Unlike traditional contracts, which are interpreted and enforced by legal systems, smart contracts are self-executing and enforced by the code itself, running on the decentralized and immutable ledger.
The primary language for writing smart contracts on Ethereum is Solidity, a high-level, contract-oriented language. Solidity’s syntax is influenced by languages like C++, Python, and JavaScript. Benoît and Yann provided examples of Solidity code, illustrating how to define state variables, functions, and events within a contract. These contracts can represent anything from simple tokens and voting mechanisms to complex financial agreements and decentralized autonomous organizations.
Deploying a smart contract involves compiling the Solidity code into EVM bytecode and then sending a transaction to the Ethereum network containing this bytecode. Once deployed, the contract resides at a specific address on the blockchain and its functions can be invoked by other users or contracts through transactions. The execution of smart contract functions requires gas, a unit of computation on the Ethereum network, paid for in Ether (ETH), Ethereum’s native cryptocurrency. This gas mechanism incentivizes efficient code and prevents denial-of-service attacks. The ability to write and deploy arbitrary code on a decentralized and immutable ledger opened up a vast landscape of possibilities for creating decentralized applications (DApps).
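At its core, a smart contract is just code plus state executing deterministic rules. As a language-neutral sketch of the idea (plain Java, not Solidity, and without the EVM, gas, or replication across nodes; the escrow scenario and account names are invented for illustration):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the "self-executing agreement" idea behind smart contracts:
// funds are locked in the contract and move only when a condition is met.
public class EscrowSketch {
    private final Map<String, Long> balances = new HashMap<>();
    private boolean goodsDelivered = false;

    public EscrowSketch(long buyerDeposit) {
        balances.put("escrow", buyerDeposit); // value locked in the contract
        balances.put("seller", 0L);
    }

    // An oracle or counterparty signals that the condition is met.
    public void confirmDelivery() { goodsDelivered = true; }

    // The rule enforces itself: no delivery, no payout.
    public void release() {
        if (!goodsDelivered) throw new IllegalStateException("condition not met");
        balances.put("seller", balances.get("seller") + balances.get("escrow"));
        balances.put("escrow", 0L);
    }

    public long balanceOf(String account) { return balances.get(account); }
}
```

The crucial difference on a blockchain is that this logic runs identically on every node and its state transitions are recorded on the immutable ledger, so no single party can skip or rewrite the rule.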
Alternative Consensus Mechanisms and Decentralized Autonomous Organizations
While Proof-of-Work (PoW) has proven effective for Bitcoin’s security, its energy consumption is a significant concern. This has led to research and development into alternative consensus mechanisms. Proof-of-Stake (PoS) is a prominent alternative where the right to create new blocks and validate transactions is determined by the amount of cryptocurrency a validator holds and is willing to “stake” as collateral. In PoS, validators are chosen to create blocks based on their stake size and other factors, rather than computational power. If a validator attempts to validate fraudulent transactions, they risk losing their staked amount. PoS is generally considered more energy-efficient than PoW. At the time of the talk, Ethereum was planning a transition from PoW to a PoS consensus mechanism called Casper, a transition that has since been completed.
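The stake-weighting principle behind PoS can be illustrated with a toy selection routine. Real protocols (including Ethereum's Casper lineage) add randomness beacons, committees, and slashing penalties; this sketch, with invented validator names and stakes, only shows how selection probability can be made proportional to stake:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Random;

// Toy Proof-of-Stake selection: a validator's chance of forging the next
// block is proportional to the amount it has staked.
public class StakeSelection {

    static String pickValidator(Map<String, Long> stakes, Random rng) {
        long total = stakes.values().stream().mapToLong(Long::longValue).sum();
        long ticket = (long) (rng.nextDouble() * total); // uniform in [0, total)
        long cumulative = 0;
        for (Map.Entry<String, Long> e : stakes.entrySet()) {
            cumulative += e.getValue();
            if (ticket < cumulative) return e.getKey();
        }
        throw new AssertionError("unreachable: ticket < total by construction");
    }

    public static void main(String[] args) {
        Map<String, Long> stakes = new LinkedHashMap<>();
        stakes.put("alice", 60L); // 60% of total stake
        stakes.put("bob", 30L);
        stakes.put("carol", 10L);
        Random rng = new Random(42);
        Map<String, Integer> wins = new LinkedHashMap<>();
        for (int i = 0; i < 10_000; i++) {
            wins.merge(pickValidator(stakes, rng), 1, Integer::sum);
        }
        System.out.println(wins); // roughly proportional to the stakes
    }
}
```

No hash race is needed, which is why PoS consumes a negligible amount of energy compared to PoW; honesty is instead enforced by the threat of losing the staked collateral.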
The capabilities of smart contracts extend to enabling entirely new forms of organization. A Decentralized Autonomous Organization (DAO) is an organization whose rules and decision-making processes are encoded directly into smart contracts on a blockchain. Once deployed, a DAO operates autonomously according to its pre-programmed rules, without the need for central management. Funding, governance, and operations are all managed through the smart contracts and the participation of token holders who vote on proposals. The talk touched upon the concept of DAOs and their potential to create more transparent and democratic organizational structures. However, the early history of DAOs also includes cautionary tales, such as “The DAO” hack in 2016, which highlighted the critical importance of rigorous security auditing for smart contracts managing significant assets.
Real-World Applications and Societal Impact
The potential applications of blockchain technology span numerous industries beyond finance. Benoît and Yann explored various use cases that were being studied or already implemented in 2016. In finance, beyond cryptocurrencies, blockchain can streamline cross-border payments, facilitate peer-to-peer lending, and improve trade finance. In supply chain management, it can provide transparent and verifiable tracking of goods from origin to destination. For identity management, blockchain could enable self-sovereign identity solutions, giving individuals more control over their personal data. Other potential applications discussed included decentralized marketplaces, intellectual property management, voting systems, and even decentralized energy grids.
The advantages offered by blockchain technology, such as transparency (for public blockchains), immutability, security through cryptography, and disintermediation (removing the need for central authorities), make it attractive for scenarios where trust and verification are paramount. However, challenges remain, including scalability limitations of some blockchains, regulatory uncertainty, the difficulty of correcting errors on an immutable ledger, and the complexity of developing and securing smart contracts.
Beyond the technical and practical applications, blockchain introduces profound social implications. It challenges existing power structures by enabling decentralization and disintermediation. It raises questions about governance in decentralized networks, the legal status of smart contracts, and the impact on privacy in a transparent ledger. The technology empowers individuals with greater control over their assets and data but also requires a higher degree of individual responsibility. The discussion during the session underscored that blockchain is not just a technical innovation but also a socio-technical one with the potential to reshape how we organize and interact in the digital age.
In conclusion, the Devoxx France 2016 session on the blockchain provided a timely and detailed exploration of this burgeoning technology. By dissecting the mechanics of Bitcoin, presenting the advancements brought by Ethereum and smart contracts, discussing alternative consensus models and DAOs, and examining a variety of use cases and societal impacts, Benoît Lafontaine and Yann Rouillard offered attendees a clearer understanding of what lay beyond the hype and why this technology warranted serious attention from developers and businesses alike. The session emphasized that while challenges existed, the potential for blockchain to drive innovation across numerous sectors was undeniable.
Links:
- Benoît Lafontaine’s Blog Post on OCTO Talks!
- OCTO Technologies official website
- Bitcoin official website
- Ethereum official website
- Solidity official website
- The DAO hack explanation
Hashtags: #Blockchain #Bitcoin #Ethereum #SmartContracts #ProofOfWork #ProofOfStake #DAO #Cryptocurrency #Decentralization #DistributedLedger #FinTech #Web3 #Solidity #EVM #OCTOTechnologies #BenoitLafontaine #YannRouillard #DevoxxFR
[DevoxxFR2015] Reactive Applications on Raspberry Pi: A Microservices Adventure
Alexandre Delègue and Mathieu Ancelin, both engineers at SERLI, captivated attendees at Devoxx France 2015 with a deep dive into building reactive applications on a Raspberry Pi cluster. Leveraging their expertise in Java, Java EE, and open-source projects, they demonstrated a microservices-based system using Play, Akka, Cassandra, and Elasticsearch, testing the Reactive Manifesto’s promises on constrained hardware.
Embracing the Reactive Manifesto
Alexandre opened by contrasting monolithic enterprise stacks with the modular, scalable approach of the Reactive Manifesto. He introduced their application, built with microservices and event sourcing, designed to be responsive, resilient, and elastic. Running this on Raspberry Pi’s limited resources tested the architecture’s ability to deliver under constraints, proving its adaptability.
This philosophy, Alexandre noted, prioritizes agility and resilience.
Microservices and Event Sourcing
Mathieu detailed the application’s architecture, using Play for the web framework and Akka for actor-based concurrency. Cassandra handled data persistence, while Elasticsearch enabled fast search capabilities. Event sourcing ensured a reliable audit trail, capturing state changes as events. The duo’s live demo showcased these components interacting seamlessly, even on low-powered Raspberry Pi hardware.
This setup, Mathieu emphasized, ensures robust performance.
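Event sourcing, as used in their stack, can be sketched in a few lines of Java. The bank-account domain below is invented for illustration, and the Cassandra persistence and Akka plumbing from the talk are omitted; the point is the pattern itself:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Event sourcing in miniature: instead of storing the current state, store
// the events that produced it, and rebuild state by replaying them.
public class EventSourcedAccount {
    // The append-only event log is the source of truth (and the audit trail).
    private final List<Integer> events = new ArrayList<>(); // +deposit / -withdrawal

    public void deposit(int amount)  { events.add(amount); }
    public void withdraw(int amount) { events.add(-amount); }

    // Current state is a fold over the history.
    public int balance() {
        return events.stream().mapToInt(Integer::intValue).sum();
    }

    // The full history remains queryable: who did what, in what order.
    public List<Integer> history() {
        return Collections.unmodifiableList(new ArrayList<>(events));
    }
}
```

Because events are never mutated, the log doubles as the reliable audit trail the speakers mentioned, and replaying it onto a fresh node is how a crashed replica catches up.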
Challenges of Clustering on Raspberry Pi
The session highlighted configuration pitfalls encountered during clustering. Alexandre shared how initial deployments overwhelmed the Raspberry Pi’s CPU, causing nodes to disconnect and form sub-clusters. Proper configuration, tested pre-production, resolved these issues, ensuring stable heartbeats across the cluster. Their experience underscored the importance of thorough setup validation.
These lessons, Alexandre noted, are critical for constrained environments.
Alternative Reactive Approaches
Mathieu explored other reactive libraries, such as Spring Boot with reactive Java 8 features and async servlets, demonstrating versatility beyond Akka. Their demo included Gatling for load testing, though an outdated plugin caused some hiccups that have since been resolved upstream. The session concluded with a nod to the fun of building such systems, encouraging experimentation.
This flexibility, Mathieu concluded, broadens reactive development options.
[DevoxxFR2015] Evolving Infrastructure Without Downtime: CloudBees’ Journey
Nicolas De Loof, an Apache Maven committer and founder of BreizhJUG, delivered an engaging session at Devoxx France 2015, stepping in for his colleague Michael Neale. Representing CloudBees, Nicolas shared the company’s evolution from a fragmented startup to a robust, globally available system, focusing on seamless infrastructure migrations without interrupting service. His narrative, infused with humor and practical insights, highlighted transitions to multi-tenant architectures and Docker-based deployments.
From Startup Chaos to Structured Systems
Nicolas began by outlining CloudBees’ early days, marked by ad-hoc technical decisions that later demanded refinement. Initial choices, such as a custom LXC-based solution, became obsolete as the company scaled. He described the challenge of maintaining zero downtime across a global user base, necessitating careful planning to evolve infrastructure while keeping services operational.
This journey, Nicolas emphasized, required strategic foresight.
Migrating to Multi-Tenant Architecture
The shift to a multi-tenant build-on-demand system was a cornerstone of CloudBees’ transformation. Nicolas detailed how this migration, spanning months, consolidated resources to improve efficiency without impacting users. By gradually phasing in the new architecture, the team ensured continuity, addressing regrets from earlier single-tenant designs that strained scalability.
This transition, he noted, enhanced resource utilization.
Adopting Docker for Containerization
Replacing LXC with Docker marked another pivotal change. Nicolas explained how Docker’s containerization simplified deployment and management, offering greater flexibility than the bespoke LXC setup. The migration, executed incrementally, maintained service uptime, with Docker’s lightweight containers streamlining operations across CloudBees’ infrastructure.
This adoption, Nicolas highlighted, modernized their platform.
Operational Best Practices
Drawing from CloudBees’ experience, Nicolas stressed the importance of health checks, monitoring, and termination strategies to prevent service disruptions. His lighthearted “Salut les Geeks” conclusion, inspired by a YouTube series, underscored practical advice: robust monitoring prevents “blonde” moments where systems fail silently. He urged teams to integrate these practices early to avoid production chaos.
These strategies, he concluded, ensure resilient operations.
[DevoxxFR2015] Advanced Streaming with Apache Kafka
Jonathan Winandy and Alexis Guéganno, co-founder and operations director at Valwin, respectively, presented a deep dive into advanced Apache Kafka streaming techniques at Devoxx France 2015. With expertise in distributed systems and data warehousing, they explored how Kafka enables flexible, high-performance real-time streaming beyond basic JSON payloads.
Foundations of Streaming
Jonathan opened with a concise overview of streaming, emphasizing Kafka’s role in real-time distributed systems. He explained how Kafka’s topic-based architecture supports high-throughput data pipelines. Their session moved beyond introductory concepts, focusing on advanced writing, modeling, and querying techniques to ensure robust, future-proof streaming solutions.
This foundation, Jonathan noted, sets the stage for scalability.
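One property of Kafka's topic/partition model worth spelling out is per-key ordering: all records with the same key land on the same partition, and ordering is guaranteed within a partition. Kafka's default partitioner applies murmur2 to the key bytes; the sketch below substitutes `String.hashCode()` as a simplification, with invented keys, to show the principle:

```java
// Why keyed records stay ordered in Kafka: the partition is a pure function
// of the key, so every event for a given key goes to the same partition.
// (Kafka's real default partitioner hashes key bytes with murmur2;
// hashCode() here is a stand-in.)
public class PartitionSketch {

    static int partitionFor(String key, int numPartitions) {
        // floorMod keeps the result non-negative even for negative hash codes.
        return Math.floorMod(key.hashCode(), numPartitions);
    }

    public static void main(String[] args) {
        int partitions = 5; // mirrors the five-node setup mentioned in the talk
        for (String key : new String[] {"order-42", "order-42", "order-7"}) {
            System.out.println(key + " -> partition " + partitionFor(key, partitions));
        }
        // "order-42" always maps to the same partition, so its events stay ordered.
    }
}
```

This is also why repartitioning (changing the partition count) must be done with care: it changes the key-to-partition mapping for new records.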
Advanced Modeling and Querying
Alexis detailed Kafka’s ability to handle structured data, moving past schemaless JSON. They showcased techniques for defining schemas and optimizing queries, improving performance and maintainability. Q&A revealed their use of a five-node cluster for fault tolerance, sufficient for basic journaling but scalable to hundreds for larger workloads.
These methods, Alexis highlighted, enhance data reliability.
Managing Kafka Clusters
Jonathan addressed cluster management, noting that five nodes ensure fault tolerance, while larger clusters handle extensive partitioning. They discussed load balancing and lag management, critical for high-volume environments. The session also covered Kafka’s integration with databases, enabling real-time data synchronization.
This scalability, Jonathan concluded, supports diverse use cases.
Community Engagement and Resources
The duo encouraged engagement through Scala.IO, where Jonathan organizes, and shared Valwin’s expertise in data solutions. Their insights into cluster sizing and health monitoring, particularly in regulated sectors like healthcare, underscored Kafka’s versatility.
This session equips developers for advanced streaming challenges.
[DevoxxFR2015] Harnessing Java 8: Building Real-Time Applications
Trisha Gee, a Java Champion and Developer Advocate at JetBrains, showcased the power of Java 8 at Devoxx France 2015 by live-coding a real-time dashboard application. With extensive experience in high-performance Java systems, Trisha demonstrated how streams, lambdas, and the new date/time API can create robust, end-to-end applications using core Java libraries.
Crafting a Real-Time Dashboard
Trisha kicked off by building a JavaFX-based dashboard that consumed a high-velocity data feed, simulating Twitter sentiment analysis. She leveraged Java 8 streams to process collections efficiently, transforming raw data into meaningful insights. Lambdas simplified code, replacing verbose loops with concise expressions. Her demo highlighted real-time updates, with the dashboard dynamically rendering mood data.
This approach, Trisha emphasized, showcases Java 8’s expressiveness.
Streamlining Data Manipulation
Using streams, Trisha demonstrated filtering and aggregating data to display sentiment trends. The joining collector automatically formatted outputs, eliminating manual string manipulation. She also touched on the new date/time API, ensuring precise temporal handling. Despite a glitch requiring a restart, the dashboard successfully visualized real-time Twitter data, proving Java 8’s suitability for dynamic applications.
Her live coding, Trisha noted, demystifies complex features.
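The shape of the pipeline described above (filter with a lambda, extract a field, and let the joining collector do the formatting instead of manual string building) can be reconstructed in Java 8. The tweet data and class names below are made up; the real demo consumed a live feed:

```java
import java.time.LocalDateTime;
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

// A stream pipeline in the spirit of the session's dashboard demo.
public class SentimentPipeline {

    static class Tweet {
        final String text;
        final String mood;
        Tweet(String text, String mood) { this.text = text; this.mood = mood; }
    }

    // Collect the texts of all "happy" tweets into one bracketed string.
    static String happySummary(List<Tweet> feed) {
        return feed.stream()
                   .filter(t -> "happy".equals(t.mood))   // lambda instead of a loop
                   .map(t -> t.text)
                   .collect(Collectors.joining(", ", "[", "]")); // no manual StringBuilder
    }

    public static void main(String[] args) {
        List<Tweet> feed = Arrays.asList(
                new Tweet("great talk!", "happy"),
                new Tweet("long coffee queue", "grumpy"),
                new Tweet("streams are neat", "happy"));
        // The java.time API provides immutable, precise timestamps:
        System.out.println(LocalDateTime.now() + " " + happySummary(feed));
    }
}
```

The three-argument `joining(delimiter, prefix, suffix)` collector is what removes the usual fencepost logic of manual concatenation.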
JavaFX for Modern UIs
Trisha integrated JavaFX to create a responsive UI, binding data to visual components. She contrasted fake and real mood services, showing how streams handle both seamlessly. Q&A was limited due to time, but she shared a comprehensive resource page, including WebSocket and JavaFX references, encouraging further exploration.
This session positions Java 8 as a versatile tool for modern development.
Devoxx France 2016 Recap (3): "Retours sur Java 8" by JM Doudoux
The blue amphitheater (the largest and most comfortable one) was reserved for a talk that seems unassuming at first glance: "Retours sur Java 8". Why such enthusiasm? The likely reason is the speaker: Jean-Michel Doudoux, known throughout the French-speaking Java world as "jmdoudoux".
JM Doudoux
If any French-speaking Java developer has spent the last 15 years in isolation in a North Korean gulag, here is a reminder: JM Doudoux is a BIG name, best known for the Java tutorials he has been publishing continuously since 1999. 17 years, that is quite a run… Jean-Michel is one of the French Java Champions.
Personally, I am indebted to Jean-Michel for the dozens, even hundreds, of pages of his tutorials I read over the years, when I started using Java professionally and no longer only academically. I cannot count the hours spent on them, and I still think of SGF's unfortunate printer on the 5th floor, which had the heavy task of printing its daily batch of paper.
Back to the man himself. First observation: Jean-Michel is popular. The amphitheater is full, with people outside pushing hard to get in. Among the audience, a third of the room does not use Java 8, and some heavyweights of the community can be spotted…
Jean-Michel is modest. He is surprised to have been assigned the big amphitheater, and even more so that it is full.
Jean-Michel is approachable. He does not refuse to chat and take selfies after his talk and in the hallways.
Finally, Jean-Michel is Québécois, or at least he talks like the inhabitants of la Belle Province. He says "fabriques", "didacticiels" and "bonnes pratiques" where others give in to the mainstream "factories", "tutorials" and "best practices".
(Note: for my part, I remain skeptical about some of these translations, and prefer to stick to the original terms.)
Retours sur Java 8
Jean-Michel first revisits the notion of best practice, defined as a way of solving a problem that is better than the others. Best practices shift over time (in his words, they are "empirical, contextual and changing") because methodologies, hardware and software evolve, and they therefore need to be periodically reassessed.
Jean-Michel then goes over Java 8's main additions: Optional, streams, default methods, the new Date and Time API, and so on.
If I summarize Jean-Michel's talk crudely: use Optional with care, and the same goes for streams.
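To put that advice in code: a minimal, hypothetical example of using Optional carefully (the nickname lookup is invented for illustration, not from the talk):

```java
import java.util.Optional;

// Optional used the careful way: as a return type that makes absence
// explicit, consumed with orElse rather than isPresent()/get() pairs.
public class OptionalDemo {

    static Optional<String> findNickname(String user) {
        // Good: the API signals "maybe absent" instead of returning null.
        return "jmdoudoux".equals(user) ? Optional.of("JM") : Optional.empty();
    }

    static String displayName(String user) {
        // Good: a default via orElse, no unchecked get().
        return findNickname(user).orElse(user);
    }

    public static void main(String[] args) {
        System.out.println(displayName("jmdoudoux")); // JM
        System.out.println(displayName("someone"));   // someone
        // The cautions usually given (and in the spirit of the talk):
        // avoid Optional for fields and method parameters, and never
        // call get() without first checking for presence.
    }
}
```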

With the master
Devoxx France 2016 Recap (2): Codenvy

I was able to congratulate the Codenvy team in person; they were running a booth promoting Eclipse Che.
A personal history of IDEs
I am not much of an Eclipse fan. My opinion is that the IDE has grown in an anarchic way and lacks many of IDEA's features, although the gap is closing over time. The plethora of plugins means incompatibilities are numerous, and the tool ends up being slow. By trying to be as general-purpose as possible and to please everyone, it ends up no longer satisfying even its core audience.
I started coding Java under XEmacs; my first IDE was Sun Forte, in 2002 in the United States. I went on with NetBeans at Philips in 2003, and switched to Eclipse in 2005 when I joined Sungard-Finance (alias GP3). At the time, the ease of updating and integrating plugins (Maven 1, Tomcat, Subversion, SOAP and Python, in my case) won me over. Yet even then I regularly had to spend time resolving conflicts and cleaning the workspace.
My first attempts with IDEA in 2007 were laborious, and without my team's insistence I must admit I would have given up. I still had to use Eclipse occasionally in 2010, when IDEA's GWT support was not up to par, and again in 2013 while writing my first book.
As for Codenvy.com, I discovered it in April 2015. I use it regularly for small tasks, when my main development machine is not available.
So, to say the least, I did not approach the Eclipse Foundation's booth with a favorable eye. But I quickly changed my mind when an exhibitor approached me and I told him that, without meaning to troll, apart from Codenvy I was not planning to devote any time to Eclipse in the coming months.
Codenvy
Codenvy's philosophy is this: our source code is hosted on GitHub, our database is in the cloud, and our application is deployed on Amazon Web Services. So why not code directly in the cloud?
Codenvy provides an online IDE that is, frankly, quite handy. Let's not kid ourselves: in its current state Codenvy cannot replace IntelliJ IDEA, unless you are ready to sacrifice 50% of your productivity, which few people can afford, especially in a professional setting.
Technically, Codenvy presents itself as a web application running in the browser. Under the hood runs the engine of Eclipse Che, the future of the Eclipse Foundation's IDE.

Codenvy offers more than a file editor. Git and SVN clients let you fetch code from, or push changes to, a version control system. It also provides Docker-based runtime spaces: you can code a web application, spin up a MongoDB database in one Docker container, and deploy the WAR into another.

Use cases
Codenvy's use cases are quite varied. I see two main ones:
Classic programming
Codenvy's editor makes it possible to develop in a decentralized, distributed environment. The online IDE has the same appeal as a document editor on Google Drive: it is accessible from anywhere, with no software to install; moreover, it is always up to date, with no action required from the user.
An underlying benefit is that development happens in a thin client, so the host computer's power is barely taxed: resource-hungry operations (such as compilation or the various IO calls) are performed in the cloud. It thus becomes possible, as happens to me from time to time, to pull out a 10 or 11″ netbook, with a mere Atom and 2GB of RAM, to fix a bug, commit and push to GitHub, then redeploy a production instance. Yes indeed, from a PC that is nothing like a Core i7/SSD/16GB machine.
Sharing a development workspace
The second use case is sharing bugs and test cases via online platforms. When a problem arises, there is no need to upload a zip, download it, and set up the project, its libraries, and so on, on a local machine. Instead, the person reporting the bug simply creates a workspace on Codenvy, with the right configuration and the bug reproduction case, opens an issue on GitHub and posts a link on Stack Overflow. The rest of the developer planet can access the workspace, possibly clone it, and solve the problem. Icing on the cake: the GitHub ticket gets updated with the commits.
Limitations
Obviously, Codenvy as a thin client comes with its share of limitations. First of all, plugins cannot be installed, which makes exotic languages difficult to handle.
My main criticisms of Codenvy are the following:
- no auto-completion. You had better know your APIs! 😀
- rather limited Git support: merges are a pain.
- keyboard shortcuts cannot be configured. A real torture when everything must be done with the mouse.
- limited support for multi-module Maven projects. You have to resort to tricks to deploy a WAR.
- no support for mobile phones and tablets. The experience is really painful.
Stevan Le Meur, who works on (EDIT: leads?) the development team, gave me a demo of the version currently in beta. It fixes many issues. I hope to get access to the beta soon to form a more precise opinion, but what I saw was exciting.

The beta version's interface
Links:
- Codenvy:
- website,
- Twitter account,
- hashtag #Codenvy
- Eclipse Che
- Stevan Le Meur:
Devoxx France 2016 Recap (1): general remarks
I am starting a series of articles on Devoxx France 2016.
I could not attend DevoxxFR in 2015, so this is my first time at the Palais des Congrès at Porte Maillot. Since I know the venue from other trade shows and conferences, I am not completely lost.
A few off-the-cuff remarks, in no particular order:
General
The bags are not handed out at the entrance but inside, at a "goodies" booth whose opening hours during the breaks seemed random… As a result, I could not get my bag and notebook until late, and took no notes on this first day.
Besides, it was hot in every room, especially in the Maillot amphitheater… I sympathize with the speakers who had to talk under the spotlights :-D.
Choice of topics
Of course, most of the talks were genuinely interesting; nevertheless, it must be said, the range was rather limited in diversity: the topics mainly revolve around microservices, Android, devops, BigData, and so on.
I do not fear that anyone got bored. However, while these topics are "trendy" (and aimed at "hipsters", in Matt Raible's words), in practice they do not necessarily speak to every developer. Indeed, the structure of the French job market means that a large share of developers work for "key accounts" such as banks, insurance companies, telcos, etc. I doubt that at those companies, many people in 2016 work in agile mode on microservices deployed in Docker containers hosted on a Kubernetes-orchestrated cloud, processing BigData in real time with the results pushed to an Android tablet through a REST API secured over HTTP/2…
For my part, I am lucky enough to work with a wide variety of clients (finance and insurance, but also SMEs and startups), so not too much frustration on my side. Still, while I approve of opening up to other technologies and topics, I regret that Devoxx is gradually drifting away from its fundamentals: the Java ecosystem.
I am well aware that it is impossible to please everyone. But perhaps several "levels" should be provided, so as to offer at the same time glimpses of the future, beginner-level introductions to certain technologies, and feedback sessions on how to push to its limits an "old" framework used on legacy code (keeping in mind that in our business, "old" means "more than 2 years of age").
Booths and goodies
They ranged from the minimal (a small booth, no technical contact person) with a few pens, to the most original, by way of the classic tee-shirt giveaways. This year, the dominant color was black.
Personal opinions:
- the biggest booths are not necessarily the best.
- in general, geeks prefer talking to IT people rather than to administrative staff, recruiters or sales people who are convinced that AJAX is a Dutch football club.
Sponsors
Notably absent: Google and Oracle among the "big" players, and Octo and Xebia among the French consulting firms (SSII). Of course, companies communicate a lot when they are sponsors, and much less when they are not. So we are left to speculate about these companies' "disinterest" in Devoxx: a genuine strategic withdrawal, disagreements over what Devoxx should be, or a simple business/marketing matter.
Among those present, Zenika and TalanLabs managed to secure spots in the platinum sponsors' square, alongside IBM and Amazon, boosting their aura within the community. I liked the "Startup Village" idea: it gives an overview of our young French startups.
Beyond that, the other consulting firms that call themselves Java "specialists" (in my humble opinion, "specialized in Java" would be more accurate: no consulting firm employs only specialists, let alone Java aces) were there: Ippon, Mirakl, 42, Arolla, Aneo, etc.
Finally, many vendors and players from the Java galaxy were present: Jahia, IntelliJ, Sonar, GitHub, StackOverflow, among others. I took the opportunity to thank their teams directly for their work and their contributions to the community.
PS
Thanks to TalanLabs for making sure its consultants could attend Devoxx 😉