
[DevoxxUS2017] Continuous Optimization of Microservices Using Machine Learning by Ramki Ramakrishna

At DevoxxUS2017, Ramki Ramakrishna, a Staff Engineer at Twitter, delivered a compelling session on optimizing microservices performance using machine learning. Collaborating with colleagues, Ramki shared insights from Twitter’s platform engineering efforts, focusing on Bayesian optimization to tune microservices in data centers. His talk addressed the challenges of managing complex workloads and offered a vision for automated optimization. This post explores the key themes of Ramki’s presentation, highlighting innovative approaches to performance tuning.

Challenges of Microservices Performance

Ramki Ramakrishna opened by outlining the difficulties of tuning microservices in data centers, where numerous parameters and workload variations create combinatorial complexity. Drawing from his work with Twitter’s JVM team, he explained how continuous software and hardware upgrades exacerbate performance issues, often leaving resources underutilized. Ramki’s insights set the stage for exploring machine learning as a solution to these challenges.

Bayesian Optimization in Action

Delving into technical details, Ramki introduced Bayesian optimization, a machine learning approach to automate performance tuning. He described its application in Twitter’s microservices, using tools derived from open-source projects like Spearmint. Ramki shared practical examples demonstrating how Bayesian methods explore large parameter spaces efficiently, outperforming manual tuning when many variables interact and making better use of available resources.

Lessons and Pitfalls

Ramki discussed pitfalls encountered during Twitter’s optimization projects, such as the need for expert-defined parameter ranges to guide machine learning algorithms. He highlighted the importance of collaboration between service owners and engineers to specify tuning constraints. His lessons, drawn from real-world implementations, emphasized balancing automation with human expertise to achieve reliable performance improvements.

Vision for Continuous Optimization

Concluding, Ramki outlined a vision for a continuous optimization service, integrating machine learning into DevOps pipelines. He noted plans to open-source parts of Twitter’s solution, building on frameworks like Spearmint. Ramki’s forward-thinking approach inspired developers to adopt data-driven optimization, ensuring microservices remain efficient amidst evolving data center demands.

[DevoxxUS2017] Java Puzzlers NG S02: Down the Rabbit Hole by Baruch Sadogursky and Viktor Gamov

At DevoxxUS2017, Baruch Sadogursky and Viktor Gamov, from JFrog and Hazelcast respectively, entertained attendees with a lively exploration of Java 8 and 9 puzzlers. Known for their engaging style, Baruch, a Developer Advocate, and Viktor, a Senior Solution Architect, presented complex coding challenges involving streams, lambdas, and Optionals. Their session combined humor, technical depth, and audience interaction, offering valuable lessons for Java developers. This post examines the key themes of their presentation, highlighting strategies to navigate Java’s intricacies.

Decoding Java 8 Complexities

Baruch Sadogursky and Viktor Gamov kicked off with a series of Java 8 puzzlers, focusing on streams and lambdas. They presented scenarios where seemingly simple code led to unexpected outcomes, such as subtle bugs in stream operations. Baruch emphasized the importance of understanding functional programming nuances, using examples to illustrate common pitfalls. Their interactive approach, with audience participation, made complex concepts accessible and engaging.
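
To give a flavour of the kind of trap involved, here is a small illustrative example in the spirit of their puzzlers (not taken from their slides), showing how stream laziness can surprise:

import java.util.Arrays;
import java.util.List;

public class StreamPuzzler {
    public static void main(String[] args) {
        List<Integer> numbers = Arrays.asList(1, 2, 3);

        // peek() is an intermediate operation; without a terminal operation
        // the pipeline never runs, so this line prints nothing at all.
        numbers.stream().peek(System.out::println);

        // A terminal operation triggers execution... although count() on a
        // sized stream may skip peek() entirely on JDK 9+, another classic surprise.
        long n = numbers.stream().peek(System.out::println).count();
        System.out.println("count = " + n);
    }
}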

Navigating Java 9 Features

Transitioning to Java 9, Viktor explored new puzzlers involving modules and CompletableFutures, highlighting how these features introduce fresh challenges. He demonstrated how the module system can lead to compilation errors if misconfigured, urging developers to read documentation carefully. Their examples, drawn from real-world experiences at JFrog and Hazelcast, underscored the need for precision in adopting Java’s evolving features.

Tools for Avoiding Pitfalls

Baruch and Viktor stressed the role of tools like IntelliJ IDEA in catching errors early, noting how its inspections highlight potential issues in lambda and stream usage. They advised against overusing complex constructs, advocating for simplicity to avoid “WTF” moments. Their practical tips, grounded in their extensive conference-speaking experience, encouraged developers to leverage IDEs and documentation to write robust code.

Community Engagement and Resources

Concluding with a call to action, Baruch and Viktor invited developers to contribute puzzlers to JFrog’s puzzlers initiative, fostering community-driven learning. They shared resources, including their slide deck and blog posts, encouraging feedback via Twitter. Their enthusiasm for Java’s challenges inspired attendees to dive deeper into the language’s intricacies, embracing both its power and pitfalls.

[DevoxxFR] Kill Your Branches, Do Feature Toggles

For many software development teams, managing feature branches in version control can be a source of significant pain and delays, particularly when branches diverge over long periods, leading to complex and time-consuming merge conflicts. Morgan Leroi proposed an alternative strategy: minimize or eliminate long-lived feature branches in favor of using Feature Toggles. His presentation explored the concepts behind feature toggles, their benefits, and shared practical experience on how this approach can streamline development workflows and enable new capabilities like activating features on demand.

Morgan opened by illustrating the common frustration associated with merging branches that have diverged significantly, describing it as a “traumatic experience”. This pain point underscores the need for development practices that reduce the time code spends in isolation before being integrated.

Embracing Feature Toggles

Feature Toggles, also known as Feature Flags, are a technique that allows developers to enable or disable specific features in an application at runtime, without deploying new code. The core idea is to merge code frequently into the main development branch (e.g., main or master), even if features are not yet complete or ready for production release. The incomplete or experimental features are wrapped in toggles that can be controlled externally.

Morgan explained that this approach addresses the merge hell problem by ensuring code is integrated continuously in small increments, minimizing divergence. It also decouples deployment from release; code containing new features can be deployed to production disabled, and the feature can be “released” or activated later via the toggle when ready.

Practical Benefits and Use Cases

Beyond simplifying merging, Feature Toggles offer several tangible benefits. Morgan highlighted their use by major industry players, including Amazon, demonstrating their effectiveness at scale. A key advantage is the ability to activate new features on demand, for specific user groups, or even for individual users. This enables phased rollouts, A/B testing, and easier rollback if a feature proves problematic.
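
As a rough illustration of how such a toggle can be wired up (the class and flag names below are hypothetical, not from Morgan’s talk), a flag read from configuration and checked around the new code path is often enough to get started:

import java.util.Map;

// Minimal feature-toggle sketch: flags are read from the environment
// (e.g. FEATURE_NEW_CHECKOUT=true), so a feature can be shipped disabled
// and activated later without redeploying any code.
public class FeatureToggles {
    private final Map<String, String> env;

    public FeatureToggles(Map<String, String> env) {
        this.env = env;
    }

    public boolean isEnabled(String featureName) {
        return Boolean.parseBoolean(env.getOrDefault("FEATURE_" + featureName, "false"));
    }

    public static void main(String[] args) {
        FeatureToggles toggles = new FeatureToggles(System.getenv());
        if (toggles.isEnabled("NEW_CHECKOUT")) {
            System.out.println("New checkout flow");   // incomplete feature, merged but dark
        } else {
            System.out.println("Legacy checkout flow"); // current behaviour for everyone else
        }
    }
}

Real toggle frameworks typically add per-user or percentage-based targeting on top of this, which is what enables the phased rollouts and A/B testing mentioned above.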

Morgan detailed the application of feature toggles in A/B testing scenarios. By showing different versions of a feature (or the presence or absence of a feature) to different user segments, teams can collect metrics on user behavior and make data-driven decisions about which version is more effective. This allows for continuous experimentation and optimization based on real-world usage. He suggested that even a simple boolean configuration toggle (if (featureIsEnabled) { ... }) can be a starting point.

Morgan encouraged developers to consider feature toggles as a powerful tool for improving development flow, reducing merge pain, and gaining flexibility in releasing new functionality. He challenged attendees to reflect on whether their current branching strategy is serving them well and to consider experimenting with feature toggles. Morgan Leroi is a Staff Software Engineer at Algolia.

Hashtags: #FeatureToggles #BranchingStrategy #ContinuousDelivery #DevOps #SoftwareDevelopment #Agile #MorganLEROI #DevoxxFR2017

[DevoxxFR] An Ultrasonic Adventure!

In the quest for novel ways to enable communication between web pages running on different machines without relying on a central server, Hubert Sablonnière embarked on a truly unique and fascinating experiment: using ultrasonic sound emitted and received through web browsers. This adventurous project leveraged modern web audio capabilities to explore an unconventional method of initiating a peer-to-peer connection, pushing the boundaries of what’s possible with web technologies.

Hubert’s journey began with a seemingly simple question that led down an unexpected path. The idea was to use sound (in this case, inaudible sound) as a signaling mechanism to bootstrap a WebRTC connection, a technology that allows direct browser-to-browser communication.

Signaling with Ultrasound

The core concept involved using the Web Audio API to generate audio signals at frequencies beyond the range of human hearing – ultrasounds. These signals would carry encoded information, acting as a handshake or discovery mechanism. A web page on one machine would emit these ultrasonic signals through the computer’s speakers, and a web page on another nearby machine would attempt to detect and decode them using the Web Audio API and the computer’s microphone.

Once the two pages successfully exchanged the necessary information via ultrasound (such as network addresses or session descriptions), they could then establish a direct WebRTC connection for more robust and higher-bandwidth communication. Hubert’s experiment demonstrated the technical feasibility of this imaginative approach, turning computers into acoustic modems for web pages.

Experimentation and Learning

Hubert emphasized that the project was primarily an “adventure” and an excellent vehicle for learning, rather than necessarily a production-ready solution. Building this ultrasonic communication system provided invaluable hands-on experience with several cutting-edge web technologies, specifically the Web Audio API for generating and analyzing audio and the WebRTC API for peer-to-peer networking.

Personal projects like this, free from the constraints and requirements of production environments, offer a unique opportunity to explore unconventional ideas and deepen understanding of underlying technologies. Hubert shared that the experiment, developed over several nights, allowed him to rapidly learn and experiment with WebRTC and Web Audio in a practical context.

While the real-world applicability of using ultrasound for web page communication might be limited by factors like ambient noise, distance, and device microphone/speaker capabilities, the project served as a powerful illustration of creative problem-solving and the potential for unexpected uses of web APIs. Hubert made the project code available on GitHub, encouraging others to explore this ultrasonic frontier and potentially build upon his adventurous experimentation.

Hashtags: #WebAudio #WebRTC #Ultrasound #Experimentation #JavaScript #HubertSablonniere #DevoxxFR2017

[DevoxxUS2017] JavaScript: The New Parts by Joshua Wilson

At DevoxxUS2017, Joshua Wilson, a lead UI developer at Red Hat, delivered an illuminating session on ECMAScript 2015 (ES6) and its transformative features for JavaScript development. With a background transitioning from Java to front-end engineering, Joshua guided attendees through ES6’s modern syntax, including arrow functions, modules, and classes. His presentation offered practical insights for developers seeking to enhance their web development skills. This post explores the core themes of Joshua’s talk, focusing on ES6’s impact on coding elegance and productivity.

Embracing ES6 Features

Joshua Wilson began by introducing key ES6 features, such as arrow functions, which simplify syntax for function expressions, and let and const for block-scoped variables, enhancing code clarity. He demonstrated transforming legacy JavaScript into concise ES6 code, emphasizing improved readability. Drawing from his work at Red Hat, Joshua highlighted how these features streamline UI development, making JavaScript more approachable for developers accustomed to Java’s structure.

Modules and Classes for Modern Development

Delving deeper, Joshua explored ES6 modules, which enable better code organization and dependency management, contrasting them with older CommonJS approaches. He showcased ES6 classes, offering a familiar syntax for Java developers to create object-oriented structures. Joshua addressed challenges like handling null and undefined, noting ES6’s limited improvements but suggesting tools like TypeScript for stricter type safety, aligning with his focus on robust front-end solutions.

Practical Applications and Tooling

Joshua emphasized practical applications, demonstrating how ES6 integrates with build tools like Webpack for seamless module handling. He also highlighted the Array.prototype.includes method from ES2016 (ES7) for simpler array searching, which handles edge cases such as NaN correctly. His insights, grounded in real-world UI projects at Red Hat, encouraged developers to adopt modern JavaScript practices, leveraging transpilers to ensure compatibility while embracing ES6’s expressive power.

Navigating the JavaScript Ecosystem

Concluding, Joshua encouraged developers to explore the evolving JavaScript ecosystem, recommending resources like Mozilla Developer Network (MDN) for learning ES6. His passion for front-end development inspired attendees to experiment with new syntax, enhancing their ability to craft dynamic, user-friendly interfaces with confidence.

[DevoxxUS2017] Mobycraft: Manage Docker Containers Using Minecraft by Arun Gupta

At DevoxxUS2017, Arun Gupta, Vice President of Developer Advocacy at Couchbase, presented an innovative project called Mobycraft, a Minecraft client-side mod designed to manage Docker containers. Collaborating with his son, Aditya Gupta, Arun showcased how this mod transforms container management into an engaging, game-based experience, particularly for younger audiences learning Java and Docker fundamentals. This post delves into the key aspects of Arun’s session, highlighting how Mobycraft bridges gaming and technology education.

Engaging Young Minds with Docker

Arun Gupta introduced Mobycraft as a creative fusion of Minecraft’s interactive environment and Docker’s container management capabilities. Developed as a father-son project, the mod allows users to execute Docker commands like /docker ps and /docker run within Minecraft. Arun explained how containers are visualized as color-coded boxes, with interactive elements like start/stop buttons and status indicators. This approach, rooted in Aditya’s passion for Minecraft modding, makes complex Docker concepts accessible and fun, fostering early exposure to DevOps principles.

Technical Implementation and Community Contribution

Arun detailed Mobycraft’s technical foundation, built on Minecraft Forge for Minecraft 1.8, using the docker-java library to interface with Docker hosts. The mod supports multiple providers, including local Docker hosts, Docker for Mac, and Netflix’s Titus, leveraging Guice for dependency injection to ensure flexibility. Arun encouraged community contributions through code reviews and pull requests on the GitHub repository, emphasizing its educational potential and inviting developers to enhance features like Swarm visualization.
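
To give a sense of what the mod does under the hood, here is a minimal, hypothetical sketch of listing containers with the docker-java library; it is illustrative only, not Mobycraft’s actual code:

import com.github.dockerjava.api.DockerClient;
import com.github.dockerjava.api.model.Container;
import com.github.dockerjava.core.DockerClientBuilder;

import java.util.List;

public class DockerPs {
    public static void main(String[] args) {
        // Connects to the Docker host configured for the environment
        // (local daemon, Docker for Mac, etc.).
        DockerClient docker = DockerClientBuilder.getInstance().build();

        // Roughly the information a "/docker ps" command needs: id, image and
        // status for each container, which Mobycraft renders as colored blocks.
        List<Container> containers = docker.listContainersCmd().withShowAll(true).exec();
        for (Container c : containers) {
            System.out.printf("%s  %s  %s%n",
                    c.getId().substring(0, 12), c.getImage(), c.getStatus());
        }
    }
}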

[DevoxxFR 2017] Terraform 101: Infrastructure as Code Made Simple

Manually provisioning and managing infrastructure – whether virtual machines, networks, or databases – can be a time-consuming and error-prone process. As applications and their underlying infrastructure become more complex, automating these tasks is essential for efficiency, repeatability, and scalability. Infrastructure as Code (IaC) addresses this by allowing developers and operations teams to define and manage infrastructure using configuration files, applying software development practices like version control, testing, and continuous integration. Terraform, an open-source IaC tool from HashiCorp, has gained significant popularity for its ability to provision infrastructure across various cloud providers and on-premises environments using a declarative language. At Devoxx France 2017, Yannick Lorenzati presented “Terraform 101”, introducing the fundamentals of Terraform and demonstrating how developers can use it to quickly and easily set up the infrastructure they need for development, testing, or demos. His talk provided a practical introduction to IaC with Terraform.

Traditional infrastructure management often involved manual configuration through web consoles or imperative scripts. This approach is prone to inconsistencies, difficult to scale, and lacks transparency and version control. IaC tools like Terraform allow users to define their infrastructure in configuration files using a declarative syntax, specifying the desired state of the environment. Terraform then figures out the necessary steps to achieve that state, automating the provisioning and management process.

Declarative Infrastructure with HashiCorp Configuration Language (HCL)

Yannick Lorenzati introduced the core concept of declarative IaC with Terraform. He explained that instead of writing scripts that describe how to set up infrastructure step by step (the imperative approach), users define what the infrastructure should look like (the declarative approach) using HashiCorp Configuration Language (HCL). HCL is a human-readable language designed for creating structured configuration files.

The presentation covered the basic building blocks of Terraform configurations written in HCL:

  • Providers: Terraform interacts with various cloud providers (AWS, Azure, Google Cloud, etc.) and other services through providers. Yannick showed how to configure a provider to interact with a specific cloud environment.
  • Resources: Resources are the fundamental units of infrastructure managed by Terraform, such as virtual machines, networks, storage buckets, or databases. He demonstrated how to define resources in HCL, specifying their type and desired properties.
  • Variables: Variables allow for parameterizing configurations, making them reusable and adaptable to different environments (development, staging, production). Yannick showed how to define and use variables to avoid hardcoding values in the configuration files.
  • Outputs: Outputs are used to expose important information about the provisioned infrastructure, such as IP addresses or connection strings, which can be used by other parts of an application or by other Terraform configurations.

Yannick Lorenzati emphasized how the declarative nature of HCL simplifies infrastructure management by focusing on the desired end state rather than the steps to get there. He showed how Terraform automatically determines the dependencies between resources and provisions them in the correct order.

Practical Demonstration: From Code to Cloud Infrastructure

The core of the “Terraform 101” talk was a live demonstration showing how a developer can use Terraform to provision infrastructure. Yannick Lorenzati guided the audience through writing a simple Terraform configuration file to create a basic infrastructure setup, such as a virtual machine and its network configuration, on a cloud provider like AWS.

He demonstrated the key Terraform commands:

  • terraform init: Initializes the Terraform working directory and downloads the necessary provider plugins.
  • terraform plan: Generates an execution plan, showing what actions Terraform will take to achieve the desired state without actually making any changes. This step is crucial for reviewing the planned changes before applying them.
  • terraform apply: Executes the plan, provisioning or updating the infrastructure according to the configuration.
  • terraform destroy: Tears down all the infrastructure defined in the configuration, which is particularly useful for cleaning up environments after testing or demos (and saving costs).

Yannick showed how Terraform outputs provide useful information after the infrastructure is provisioned. He also showed how data sources, such as the AWS Route 53 data source, can be used to pull information about existing infrastructure into a configuration.

The presentation highlighted how integrating Terraform with configuration management tools like Ansible enables a complete IaC workflow, where Terraform provisions the infrastructure and Ansible configures the software on it.

Yannick Lorenzati’s “Terraform 101” at Devoxx France 2017 provided a clear and practical introduction to Infrastructure as Code using Terraform. By explaining the fundamental concepts, introducing the HCL syntax, and demonstrating the core workflow with live coding, he empowered developers to start automating their infrastructure provisioning. His talk effectively conveyed how Terraform can save time, improve consistency, and enable developers to quickly set up the environments they need, ultimately making them more productive.

Hashtags: #Terraform #IaC #InfrastructureAsCode #HashiCorp #DevOps #CloudComputing #Automation #YannickLorenzati

[DevoxxUS2017] Lessons Learned from Building Hyper-Scale Cloud Services Using Docker by Boris Scholl

At DevoxxUS2017, Boris Scholl, Vice President of Development for Microservices at Oracle, shared valuable lessons from building hyper-scale cloud services using Docker. With a background in Microsoft’s Service Fabric and Container Service, Boris discussed Oracle’s adoption of Docker, Mesos/Marathon, and Kubernetes for resource-efficient, multi-tenant services. His session offered insights into architecture choices and DevOps best practices, providing a roadmap for scalable cloud development. This post examines the key themes of Boris’s presentation, highlighting practical strategies for modern cloud services.

Adopting Docker for Scalability

Boris Scholl began by outlining Oracle’s shift toward cloud services, leveraging Docker to build scalable, multi-tenant applications. He explained how Docker containers optimize resource consumption, enabling rapid service deployment. Drawing from his experience at Oracle, Boris highlighted the pros of containerization, such as portability, and cons, like the need for robust orchestration, setting the stage for discussing advanced DevOps practices.

Orchestration with Mesos and Kubernetes

Delving into orchestration, Boris discussed Oracle’s use of Mesos/Marathon and Kubernetes to manage containerized services. He shared lessons learned, such as the importance of abstracting container management to avoid platform lock-in. Boris’s examples illustrated how orchestration tools ensure resilience and scalability, enabling Oracle to handle hyper-scale workloads while maintaining service reliability.

DevOps Best Practices for Resilience

Boris emphasized the critical role of DevOps in running “always-on” services. He advocated for governance to manage diverse team contributions, preventing architectural chaos. His insights included automating CI/CD pipelines and prioritizing diagnostics for monitoring. Boris shared a lesson on avoiding over-reliance on specific orchestrators, suggesting abstraction layers to ease transitions between platforms like Mesos and Kubernetes.

Governance and Future-Proofing

Concluding, Boris stressed the importance of governance in distributed systems, drawing from Oracle’s experience in maintaining component versioning and compatibility. He recommended blogging as a way to share microservices insights, referencing his own posts. His practical advice inspired developers to adopt disciplined DevOps practices, ensuring cloud services remain scalable, resilient, and adaptable to future needs.

[DevoxxUS2017] Running a Successful Open Source Project by Wayne Beaton and Gunnar Wagenknecht

At DevoxxUS2017, Wayne Beaton and Gunnar Wagenknecht, key figures in the Eclipse Foundation and Salesforce respectively, shared their expertise on nurturing successful open-source projects. Wayne, Director of Open Source Projects at Eclipse, and Gunnar, a prolific Eclipse contributor, discussed strategies for building vibrant communities around code. Their session covered licensing, contributor engagement, and intellectual property management, offering actionable advice for open-source leaders. This post explores the core themes of their presentation, emphasizing community-driven success.

Building a Community Around Code

Wayne Beaton opened by emphasizing that an open-source project thrives on its community, not just its code. He discussed the importance of selecting an appropriate license to encourage adoption and contributions. Wayne shared Eclipse Foundation’s practices, such as electronic contributor agreements, to streamline participation. His insights, drawn from decades of open-source involvement, highlighted the need for clear communication to attract users, adopters, and developers.

Engaging Contributors and Managing Contributions

Gunnar Wagenknecht focused on fostering contributor engagement, drawing from his experience at Salesforce and Eclipse. He advocated for tools like GitHub to monitor contributions and track project health. Gunnar emphasized creating welcoming environments for new contributors, sharing examples of Eclipse’s infrastructure for managing intellectual property and community feedback. His practical tips encouraged project leaders to prioritize inclusivity and transparency.

Navigating Intellectual Property and Foundations

Wayne and Gunnar explored the complexities of intellectual property management, including trademarks and contributor agreements. They discussed the benefits of affiliating with a foundation like Eclipse, which provides governance and infrastructure support. Comparing Eclipse’s processes with those of Apache and Oracle, they highlighted how foundations simplify legal and operational challenges, enabling projects to focus on innovation.

Tools and Practices for Sustainability

Concluding, Wayne and Gunnar recommended tools for monitoring contributions, such as dashboards used by companies like Microsoft. They emphasized the importance of governance to prevent “anarchy” in multi-team projects. Their insights, grounded in real-world experiences, inspired attendees to adopt structured yet flexible approaches to sustain open-source projects, leveraging community-driven innovation for long-term success.

[DevoxxUS2017] 55 New Features in JDK 9: A Comprehensive Overview

At DevoxxUS2017, Simon Ritter, Deputy CTO at Azul Systems, delivered a detailed exploration of the 55 new features in JDK 9, with a particular focus on modularity through Project Jigsaw. Simon, a veteran Java evangelist, provided a whirlwind tour of the enhancements, categorizing them into features, standards, JVM internals, specialized updates, and housekeeping changes. His presentation equipped developers with the knowledge to leverage JDK 9’s advancements effectively. This post examines the key themes of Simon’s talk, highlighting how these features enhance Java’s flexibility, performance, and maintainability.

Modularity and Project Jigsaw

The cornerstone of JDK 9 is Project Jigsaw, which introduces modularity to the Java platform. Simon explained that the traditional rt.jar file, containing over 4,500 classes, has been replaced with 94 modular components in the jmods directory. This restructuring encapsulates private APIs, such as sun.misc.Unsafe, to improve security and maintainability, though it poses compatibility challenges for libraries relying on these APIs. To mitigate this, Simon highlighted options like the --add-exports and --add-opens flags, as well as a “big kill switch” (--permit-illegal-access) to disable modularity for legacy applications. The jlink tool further enhances modularity by creating custom runtimes with only the necessary modules, optimizing deployment for specific applications.
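
As a minimal sketch of what this looks like in practice (the module and package names below are hypothetical), an application describes its dependencies and exported packages in a module-info.java descriptor:

// module-info.java for a hypothetical application module
module com.example.app {
    requires java.sql;           // explicit dependency on a platform module
    exports com.example.app.api; // only this package is visible to other modules
}

A trimmed runtime can then be assembled with a command along the lines of jlink --module-path $JAVA_HOME/jmods:mods --add-modules com.example.app --output app-runtime, which bundles only the platform modules the application actually requires.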

Enhanced APIs and Developer Productivity

JDK 9 introduces several API improvements to streamline development. Simon showcased factory methods for collections, allowing developers to create immutable collections with concise syntax, such as List.of() or Set.of(). The Streams API has been enhanced with methods like takeWhile, dropWhile, and ofNullable, improving expressiveness in data processing. Additionally, the introduction of jshell, an interactive REPL, enables rapid prototyping and experimentation. These enhancements reduce boilerplate code and enhance developer productivity, making Java more intuitive and efficient for modern application development.
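
A small, self-contained sketch of these additions (the values are arbitrary, chosen only to illustrate the behaviour):

import java.util.List;
import java.util.Set;
import java.util.stream.Stream;

public class Jdk9ApiDemo {
    public static void main(String[] args) {
        // Compact, immutable collections via the new factory methods
        List<String> names = List.of("Ada", "Grace", "Linus");
        Set<Integer> primes = Set.of(2, 3, 5, 7);
        System.out.println(names + " / " + primes);

        // takeWhile stops at the first element that fails the predicate: 1 2 3
        Stream.of(1, 2, 3, 7, 4).takeWhile(n -> n < 5).forEach(System.out::println);

        // dropWhile discards that same leading run and keeps the rest: 7 4
        Stream.of(1, 2, 3, 7, 4).dropWhile(n -> n < 5).forEach(System.out::println);

        // ofNullable turns a possibly-null reference into a zero- or one-element stream
        String maybe = null;
        System.out.println(Stream.ofNullable(maybe).count()); // 0
    }
}

Snippets like these can also be pasted straight into jshell for quick, compile-free experimentation.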

JVM Internals and Performance

Simon delved into JVM enhancements, including improvements to the G1 garbage collector, which is now the default in JDK 9. The G1 collector offers better performance for large heaps, addressing limitations of the Concurrent Mark Sweep collector. Other internal improvements include a new process API for accessing operating system process details and a directive file for controlling JIT compiler behavior. These changes enhance runtime efficiency and provide developers with greater control over JVM performance, ensuring Java remains competitive for high-performance applications.
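
The new process API, for instance, is easy to explore directly; a brief sketch of the kind of information it exposes:

public class ProcessInfoDemo {
    public static void main(String[] args) {
        ProcessHandle self = ProcessHandle.current();
        System.out.println("pid: " + self.pid());

        // Process metadata is exposed as Optionals, since not every platform provides it
        self.info().command().ifPresent(cmd -> System.out.println("command: " + cmd));
        self.info().startInstant().ifPresent(t -> System.out.println("started: " + t));

        // All processes visible to the current user, as a Stream<ProcessHandle>
        System.out.println("visible processes: " + ProcessHandle.allProcesses().count());
    }
}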

Housekeeping and Deprecations

JDK 9 includes significant housekeeping changes to streamline the platform. Simon highlighted the new version string format, adopting semantic versioning (major.minor.security.patch) for clearer identification. The directory structure has been flattened, eliminating the JRE subdirectory and tools.jar, with configuration files centralized in the conf directory. The Applet API has been deprecated and several obsolete garbage collection options removed, reducing maintenance overhead. These changes simplify the JDK’s structure, improving maintainability while requiring developers to test applications for compatibility.
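
The new scheme is also queryable at runtime through the Runtime.Version API added in JDK 9, for example:

public class VersionInfo {
    public static void main(String[] args) {
        Runtime.Version v = Runtime.version();
        System.out.println("full version : " + v);           // e.g. 9.0.4+11, depending on the JDK in use
        System.out.println("major        : " + v.major());
        System.out.println("minor        : " + v.minor());
        System.out.println("security     : " + v.security());
    }
}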

Standards and Specialized Features

Simon also covered updates to standards and specialized features. The HTTP/2 client, introduced as an incubator module, allows developers to test and provide feedback before it becomes standard. Other standards updates include support for Unicode 8.0 and the deprecation of SHA-1 certificates for enhanced security. Specialized features, such as the annotations pipeline and parser API, improve the handling of complex annotations and programmatic interactions with the compiler. These updates ensure Java aligns with modern standards while offering flexibility for specialized use cases.
