[DevoxxUS2017] JavaScript: The New Parts by Joshua Wilson
At DevoxxUS2017, Joshua Wilson, a lead UI developer at Red Hat, delivered an illuminating session on ECMAScript 2015 (ES6) and its transformative features for JavaScript development. With a background transitioning from Java to front-end engineering, Joshua guided attendees through ES6’s modern syntax, including arrow functions, modules, and classes. His presentation offered practical insights for developers seeking to enhance their web development skills. This post explores the core themes of Joshua’s talk, focusing on ES6’s impact on coding elegance and productivity.
Embracing ES6 Features
Joshua Wilson began by introducing key ES6 features, such as arrow functions, which simplify syntax for function expressions, and let and const for block-scoped variables, enhancing code clarity. He demonstrated transforming legacy JavaScript into concise ES6 code, emphasizing improved readability. Drawing from his work at Red Hat, Joshua highlighted how these features streamline UI development, making JavaScript more approachable for developers accustomed to Java’s structure.
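The kinds of before-and-after transformations Joshua demonstrated can be sketched as follows (representative examples, not reproduced from his slides):

```javascript
// Legacy ES5: function expression with var
var doubledOld = [1, 2, 3].map(function (n) {
  return n * 2;
});

// ES6: an arrow function, and const for a binding that never changes
const doubled = [1, 2, 3].map(n => n * 2);

// let is block-scoped, unlike var, which is function-scoped
for (let i = 0; i < 3; i++) {
  // i exists only inside this loop body
}
// console.log(i); // ReferenceError: i is not defined
```

The arrow form removes the `function`/`return` ceremony, and block scoping eliminates a whole class of accidental-variable-reuse bugs.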
Modules and Classes for Modern Development
Delving deeper, Joshua explored ES6 modules, which enable better code organization and dependency management, contrasting them with older CommonJS approaches. He showcased ES6 classes, offering a familiar syntax for Java developers to create object-oriented structures. Joshua addressed challenges like handling null and undefined, noting ES6’s limited improvements but suggesting tools like TypeScript for stricter type safety, aligning with his focus on robust front-end solutions.
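A brief sketch of the class syntax discussed, with the module syntax shown in comments (the Shape/Circle names are illustrative, not from the talk):

```javascript
// ES6 class syntax: familiar territory for Java developers
class Shape {
  constructor(name) {
    this.name = name;
  }
  describe() {
    return `I am a ${this.name}`;
  }
}

// Inheritance with extends/super replaces manual prototype wiring
class Circle extends Shape {
  constructor(radius) {
    super("circle");
    this.radius = radius;
  }
  area() {
    return Math.PI * this.radius * this.radius;
  }
}

// ES6 module syntax (one export per file, resolved at build time):
// export default Circle;            // in circle.js
// import Circle from "./circle.js"; // in a consumer

const c = new Circle(2);
```

Unlike CommonJS `require`, ES6 imports are static, which lets tooling analyze the dependency graph ahead of time.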
Practical Applications and Tooling
Joshua emphasized practical applications, demonstrating how ES6 integrates with build tools like Webpack for seamless module handling. He highlighted ES7’s Array.includes method for improved array searching, addressing edge cases like NaN. His insights, grounded in real-world UI projects at Red Hat, encouraged developers to adopt modern JavaScript practices, leveraging transpilers to ensure compatibility while embracing ES6’s expressive power.
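The NaN edge case Joshua mentioned is easy to demonstrate (illustrative snippet, not from his slides):

```javascript
const values = [1, NaN, 3];

// indexOf uses strict equality, and NaN !== NaN, so it can never find NaN
const foundWithIndexOf = values.indexOf(NaN) !== -1; // false

// includes (ES2016) uses the SameValueZero comparison,
// which treats NaN as equal to itself
const foundWithIncludes = values.includes(NaN); // true
```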
Navigating the JavaScript Ecosystem
Concluding, Joshua encouraged developers to explore the evolving JavaScript ecosystem, recommending resources like Mozilla Developer Network (MDN) for learning ES6. His passion for front-end development inspired attendees to experiment with new syntax, enhancing their ability to craft dynamic, user-friendly interfaces with confidence.
Links:
[DevoxxUS2017] Mobycraft: Manage Docker Containers Using Minecraft by Arun Gupta
At DevoxxUS2017, Arun Gupta, Vice President of Developer Advocacy at Couchbase, presented an innovative project called Mobycraft, a Minecraft client-side mod designed to manage Docker containers. Collaborating with his son, Aditya Gupta, Arun showcased how this mod transforms container management into an engaging, game-based experience, particularly for younger audiences learning Java and Docker fundamentals. This post delves into the key aspects of Arun’s session, highlighting how Mobycraft bridges gaming and technology education.
Engaging Young Minds with Docker
Arun Gupta introduced Mobycraft as a creative fusion of Minecraft’s interactive environment and Docker’s container management capabilities. Developed as a father-son project, the mod allows users to execute Docker commands like /docker ps and /docker run within Minecraft. Arun explained how containers are visualized as color-coded boxes, with interactive elements like start/stop buttons and status indicators. This approach, rooted in Aditya’s passion for Minecraft modding, makes complex Docker concepts accessible and fun, fostering early exposure to DevOps principles.
Technical Implementation and Community Contribution
Arun detailed Mobycraft’s technical foundation, built on Minecraft Forge for Minecraft 1.8, using the docker-java library to interface with Docker hosts. The mod supports multiple providers, including local Docker hosts, Docker for Mac, and Netflix’s Titus, leveraging Guice for dependency injection to ensure flexibility. Arun encouraged community contributions through code reviews and pull requests on the GitHub repository, emphasizing its educational potential and inviting developers to enhance features like Swarm visualization.
Links:
[KotlinConf2017] Highlights
Lecturer
The KotlinConf 2017 Highlights presentation features contributions from multiple speakers, including Maxim Shafirov, Andrey Breslav, Dmitry Jemerov, and Stephanie Cuthbertson. Maxim Shafirov serves as the CEO of JetBrains, the company behind Kotlin’s development, with an extensive background in software tools and IDEs. Andrey Breslav, the lead designer of Kotlin, has been instrumental in shaping the language’s pragmatic approach to JVM-based development. Dmitry Jemerov, a senior developer at JetBrains, contributes to Kotlin’s technical advancements. Stephanie Cuthbertson, associated with Android’s adoption of Kotlin, brings expertise in mobile development ecosystems. Their collective efforts underscore JetBrains’ commitment to fostering innovative programming solutions.
Abstract
The inaugural KotlinConf 2017, held in San Francisco from November 1–3, 2017, marked a significant milestone for the Kotlin programming language, celebrating its rapid adoption and community growth. This article analyzes the key themes presented in the conference highlights, emphasizing Kotlin’s rise as a modern, production-ready language for Android and beyond. It explores the context of Kotlin’s adoption, the community’s enthusiasm, and the strategic vision for its future, driven by JetBrains and supported by industry partners. The implications of Kotlin’s growing ecosystem, from startups to Fortune 500 companies, are examined, highlighting its role in enhancing developer productivity and code quality.
Context of KotlinConf 2017
KotlinConf 2017 emerged as the first dedicated conference for Kotlin, a language developed by JetBrains to address Java’s limitations while maintaining strong interoperability with the JVM. The event, which sold out with 1,200 attendees, reflected Kotlin’s surging popularity, particularly after Google’s announcement of first-class support for Kotlin on Android earlier that year. The conference featured over 150 talk submissions from 110 speakers, necessitating an additional track to accommodate the demand. This context underscores Kotlin’s appeal as a concise, readable, and modern language, attractive to developers across mobile, server-side, and functional programming domains.
The enthusiasm at KotlinConf was palpable, with Maxim noting the vibrant community discussions and the colorful atmosphere of the event’s social gatherings. The involvement of partners like Trifork and the presence of a program committee ensured a high-quality selection of talks, fostering a collaborative environment. Kotlin’s adoption by 17% of Android projects at the time, coupled with its use in both startups and Fortune 500 companies, highlighted its versatility and production-readiness, setting the stage for the conference’s focus on innovation and community-driven growth.
Community and Ecosystem Growth
A key theme of KotlinConf 2017 was the rapid expansion of Kotlin’s community and ecosystem. The conference showcased the language’s appeal to developers seeking a modern alternative to Java. Speakers emphasized Kotlin’s readability and ease of onboarding, which allowed teams to adopt it swiftly. The compiler’s ability to handle complex type inference and error checking was highlighted as a significant advantage, enabling developers to focus on business logic rather than boilerplate code. This focus on developer experience resonated with attendees, many of whom were already coding in Kotlin or exploring its potential for Android and server-side applications.
The event also highlighted the community’s role in driving Kotlin’s evolution. Discussions with contributors from Gradle, Spring, and other technologies underscored collaborative efforts to enhance Kotlin’s interoperability and tooling. The conference’s success, with its diverse speaker lineup and vibrant social events, fostered a sense of shared purpose, encouraging developers to contribute to Kotlin’s open-source ecosystem. This community-driven approach was pivotal in positioning Kotlin as a language that balances innovation with practicality, appealing to both individual developers and large organizations.
Strategic Vision for Kotlin
The keynote speakers outlined a forward-looking vision for Kotlin, emphasizing its potential to unify development across platforms. Maxim and Andrey highlighted plans to expand Kotlin’s multiplatform capabilities, particularly for native and iOS development, through initiatives like common native technology previews. These efforts aimed to provide shared libraries for I/O, networking, and serialization, enabling developers to write platform-agnostic code. The focus on backward compatibility, even for experimental features, reassured developers of Kotlin’s stability, encouraging adoption in production environments.
The conference also addressed practical challenges, such as bug reporting and session accessibility. The provision of office hours and voting mechanisms ensured attendee feedback could shape Kotlin’s future. The acknowledgment of minor issues, like an iOS app bug, demonstrated JetBrains’ commitment to transparency and iterative improvement. This strategic vision, combining technical innovation with community engagement, positioned Kotlin as a language poised for long-term growth and influence in the software development landscape.
Implications for Developers and Industry
KotlinConf 2017 underscored Kotlin’s transformative impact on software development. Its adoption by major companies and startups alike highlighted its ability to deliver high-quality, maintainable code. The conference’s emphasis on Android integration reflected Kotlin’s role in simplifying mobile development, reducing complexity in areas like UI design and asynchronous programming. Beyond Android, Kotlin’s applicability to server-side and functional programming broadened its appeal, offering a versatile tool for diverse use cases.
For developers, KotlinConf provided a platform to learn from industry leaders and share best practices, fostering a collaborative ecosystem. The promise of recorded sessions ensured accessibility, extending the conference’s reach to a global audience. For the industry, Kotlin’s growth signaled a shift toward modern, developer-friendly languages, challenging Java’s dominance while leveraging its ecosystem. The conference’s success set a precedent for future events, reinforcing Kotlin’s role as a catalyst for innovation in software engineering.
Conclusion
KotlinConf 2017 marked a pivotal moment for Kotlin, celebrating its rapid adoption and vibrant community. By showcasing its technical strengths, community-driven growth, and strategic vision, the conference positioned Kotlin as a leading language for modern development. The emphasis on readability, interoperability, and multiplatform potential highlighted Kotlin’s ability to address diverse programming needs. As JetBrains and its community continue to innovate, KotlinConf 2017 remains a landmark event, demonstrating the language’s transformative potential and setting the stage for its enduring impact.
Links
[DevoxxFR 2017] Terraform 101: Infrastructure as Code Made Simple
Manually provisioning and managing infrastructure – whether virtual machines, networks, or databases – can be a time-consuming and error-prone process. As applications and their underlying infrastructure become more complex, automating these tasks is essential for efficiency, repeatability, and scalability. Infrastructure as Code (IaC) addresses this by allowing developers and operations teams to define and manage infrastructure using configuration files, applying software development practices like version control, testing, and continuous integration. Terraform, an open-source IaC tool from HashiCorp, has gained significant popularity for its ability to provision infrastructure across various cloud providers and on-premises environments using a declarative language. At Devoxx France 2017, Yannick Lorenzati presented “Terraform 101”, introducing the fundamentals of Terraform and demonstrating how developers can use it to quickly and easily set up the infrastructure they need for development, testing, or demos. His talk provided a practical introduction to IaC with Terraform.
Traditional infrastructure management often involved manual configuration through web consoles or imperative scripts. This approach is prone to inconsistencies, difficult to scale, and lacks transparency and version control. IaC tools like Terraform allow users to define their infrastructure in configuration files using a declarative syntax, specifying the desired state of the environment. Terraform then figures out the necessary steps to achieve that state, automating the provisioning and management process.
Declarative Infrastructure with HashiCorp Configuration Language (HCL)
Yannick Lorenzati introduced the core concept of declarative IaC with Terraform. He explained that instead of writing scripts that describe how to set up infrastructure step-by-step (the imperative approach), users define what the infrastructure should look like (the declarative approach) using HashiCorp Configuration Language (HCL). HCL is a human-readable language designed for creating structured configuration files.
The presentation covered the basic building blocks of Terraform configurations written in HCL:
- Providers: Terraform interacts with various cloud providers (AWS, Azure, Google Cloud, etc.) and other services through providers. Yannick showed how to configure a provider to interact with a specific cloud environment.
- Resources: Resources are the fundamental units of infrastructure managed by Terraform, such as virtual machines, networks, storage buckets, or databases. He demonstrated how to define resources in HCL, specifying their type and desired properties.
- Variables: Variables allow for parameterizing configurations, making them reusable and adaptable to different environments (development, staging, production). Yannick showed how to define and use variables to avoid hardcoding values in the configuration files.
- Outputs: Outputs are used to expose important information about the provisioned infrastructure, such as IP addresses or connection strings, which can be used by other parts of an application or by other Terraform configurations.
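These building blocks can be sketched together in a single configuration (a hedged illustration using Terraform’s 2017-era interpolation syntax; the region, AMI ID, and resource names are placeholders, not taken from the talk):

```hcl
# Provider: which platform Terraform talks to
provider "aws" {
  region = "${var.region}"
}

# Variable: parameterizes the configuration instead of hardcoding values
variable "region" {
  default = "us-east-1"
}

# Resource: a unit of infrastructure (the AMI ID here is a placeholder)
resource "aws_instance" "demo" {
  ami           = "ami-12345678"
  instance_type = "t2.micro"
}

# Output: exposes information about the provisioned infrastructure
output "demo_ip" {
  value = "${aws_instance.demo.public_ip}"
}
```

Because the output references the resource, Terraform infers the dependency and provisions the instance before computing the output.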
Yannick Lorenzati emphasized how the declarative nature of HCL simplifies infrastructure management by focusing on the desired end state rather than the steps to get there. He showed how Terraform automatically determines the dependencies between resources and provisions them in the correct order.
Practical Demonstration: From Code to Cloud Infrastructure
The core of the “Terraform 101” talk was a live demonstration showing how a developer can use Terraform to provision infrastructure. Yannick Lorenzati guided the audience through writing a simple Terraform configuration file to create a basic infrastructure setup, such as a virtual machine and a network configuration on a cloud provider like AWS.
He demonstrated the key Terraform commands:
- terraform init: Initializes the Terraform working directory and downloads the necessary provider plugins.
- terraform plan: Generates an execution plan, showing what actions Terraform will take to achieve the desired state without actually making any changes. This step is crucial for reviewing the planned changes before applying them.
- terraform apply: Executes the plan, provisioning or updating the infrastructure according to the configuration.
- terraform destroy: Tears down all the infrastructure defined in the configuration, which is particularly useful for cleaning up environments after testing or demos (and for saving costs).
Yannick showed how Terraform outputs provide useful information after the infrastructure is provisioned. He also touched upon using data sources (like the AWS Route 53 data source) to fetch information about existing infrastructure for use in the configuration.
The presentation highlighted how integrating Terraform with configuration management tools like Ansible (also mentioned in the description) allows for a complete IaC workflow, where Terraform provisions the infrastructure and Ansible configures the software on it.
Yannick Lorenzati’s “Terraform 101” at Devoxx France 2017 provided a clear and practical introduction to Infrastructure as Code using Terraform. By explaining the fundamental concepts, introducing the HCL syntax, and demonstrating the core workflow with live coding, he empowered developers to start automating their infrastructure provisioning. His talk effectively conveyed how Terraform can save time, improve consistency, and enable developers to quickly set up the environments they need, ultimately making them more productive.
Links:
- HashiCorp: https://www.hashicorp.com/
Hashtags: #Terraform #IaC #InfrastructureAsCode #HashiCorp #DevOps #CloudComputing #Automation #YannickLorenzati
[DevoxxUS2017] Lessons Learned from Building Hyper-Scale Cloud Services Using Docker by Boris Scholl
At DevoxxUS2017, Boris Scholl, Vice President of Development for Microservices at Oracle, shared valuable lessons from building hyper-scale cloud services using Docker. With a background in Microsoft’s Service Fabric and Container Service, Boris discussed Oracle’s adoption of Docker, Mesos/Marathon, and Kubernetes for resource-efficient, multi-tenant services. His session offered insights into architecture choices and DevOps best practices, providing a roadmap for scalable cloud development. This post examines the key themes of Boris’s presentation, highlighting practical strategies for modern cloud services.
Adopting Docker for Scalability
Boris Scholl began by outlining Oracle’s shift toward cloud services, leveraging Docker to build scalable, multi-tenant applications. He explained how Docker containers optimize resource consumption, enabling rapid service deployment. Drawing from his experience at Oracle, Boris highlighted the pros of containerization, such as portability, and cons, like the need for robust orchestration, setting the stage for discussing advanced DevOps practices.
Orchestration with Mesos and Kubernetes
Delving into orchestration, Boris discussed Oracle’s use of Mesos/Marathon and Kubernetes to manage containerized services. He shared lessons learned, such as the importance of abstracting container management to avoid platform lock-in. Boris’s examples illustrated how orchestration tools ensure resilience and scalability, enabling Oracle to handle hyper-scale workloads while maintaining service reliability.
DevOps Best Practices for Resilience
Boris emphasized the critical role of DevOps in running “always-on” services. He advocated for governance to manage diverse team contributions, preventing architectural chaos. His insights included automating CI/CD pipelines and prioritizing diagnostics for monitoring. Boris shared a lesson on avoiding over-reliance on specific orchestrators, suggesting abstraction layers to ease transitions between platforms like Mesos and Kubernetes.
Governance and Future-Proofing
Concluding, Boris stressed the importance of governance in distributed systems, drawing from Oracle’s experience in maintaining component versioning and compatibility. He recommended blogging as a way to share microservices insights, referencing his own posts. His practical advice inspired developers to adopt disciplined DevOps practices, ensuring cloud services remain scalable, resilient, and adaptable to future needs.
Links:
[DevoxxUS2017] Running a Successful Open Source Project by Wayne Beaton and Gunnar Wagenknecht
At DevoxxUS2017, Wayne Beaton and Gunnar Wagenknecht, key figures in the Eclipse Foundation and Salesforce respectively, shared their expertise on nurturing successful open-source projects. Wayne, Director of Open Source Projects at Eclipse, and Gunnar, a prolific Eclipse contributor, discussed strategies for building vibrant communities around code. Their session covered licensing, contributor engagement, and intellectual property management, offering actionable advice for open-source leaders. This post explores the core themes of their presentation, emphasizing community-driven success.
Building a Community Around Code
Wayne Beaton opened by emphasizing that an open-source project thrives on its community, not just its code. He discussed the importance of selecting an appropriate license to encourage adoption and contributions. Wayne shared Eclipse Foundation’s practices, such as electronic contributor agreements, to streamline participation. His insights, drawn from decades of open-source involvement, highlighted the need for clear communication to attract users, adopters, and developers.
Engaging Contributors and Managing Contributions
Gunnar Wagenknecht focused on fostering contributor engagement, drawing from his experience at Salesforce and Eclipse. He advocated for tools like GitHub to monitor contributions and track project health. Gunnar emphasized creating welcoming environments for new contributors, sharing examples of Eclipse’s infrastructure for managing intellectual property and community feedback. His practical tips encouraged project leaders to prioritize inclusivity and transparency.
Navigating Intellectual Property and Foundations
Wayne and Gunnar explored the complexities of intellectual property management, including trademarks and contributor agreements. They discussed the benefits of affiliating with a foundation like Eclipse, which provides governance and infrastructure support. Comparing Eclipse’s processes with those of Apache and Oracle, they highlighted how foundations simplify legal and operational challenges, enabling projects to focus on innovation.
Tools and Practices for Sustainability
Concluding, Wayne and Gunnar recommended tools for monitoring contributions, such as dashboards used by companies like Microsoft. They emphasized the importance of governance to prevent “anarchy” in multi-team projects. Their insights, grounded in real-world experiences, inspired attendees to adopt structured yet flexible approaches to sustain open-source projects, leveraging community-driven innovation for long-term success.
Links:
[DotSecurity2017] Secure Software Development Lifecycle
Embedding security throughout development without sacrificing delivery speed was the theme of Jim Manico’s talk at dotSecurity 2017. Jim, founder of Manicode Security and a long-time OWASP leader, presented a framework for securing the software development lifecycle (SDLC) from inception to iteration, arguing that early engagement is what keeps security affordable.
Jim walked through the SDLC stage by stage: requirements analysis (rigorous security requirements and a threat taxonomy), design (architecture reviews and data flow diagrams), coding (secure-coding checklists and vetted libraries), testing (static and dynamic analysis), and operations (monitoring and incident response). These phases persist whether a team runs agile or waterfall; only their duration changes, from a month of analysis down to a minute. He also cautioned against process for its own sake: practical checklists beat exhaustive compendiums, and triaging findings beats drowning in them.
Requirements come first: a security requirements catalog such as OWASP’s, covering access control, injection, and more, gives teams a blueprint and can even seed bug-bounty scopes. At design time, threat modeling with STRIDE (spoofing, tampering, repudiation, information disclosure, denial of service, elevation of privilege) and data flow diagrams expose weak points before code is written. During coding, the essentials are input validation and output encoding, plus auditing third-party dependencies with tools like npm audit or Snyk. Testing combines static analysis (SonarQube or Coverity, with rule sets trimmed for relevance) and dynamic analysis (DAST and IAST). In operations, logging and alerting surface anomalies, while a patching process keeps known vulnerabilities in check.
Jim’s central warning: fixing security flaws late is expensive, while removing them early is cheap, and disciplined triage keeps the workload manageable. Static analysis works best when integrated into the build pipeline, with rules refined so developers are helped rather than buried in noise.
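As a minimal illustration of the output-encoding practice Jim advocates for the coding phase, here is a hand-rolled HTML encoder (illustrative only; the class name is hypothetical, and production code should rely on a vetted library such as the OWASP Java Encoder rather than a home-grown version):

```java
public class HtmlEncode {
    // Replace the characters HTML treats specially so untrusted input
    // is rendered as text rather than interpreted as markup.
    public static String forHtml(String input) {
        StringBuilder out = new StringBuilder(input.length());
        for (char c : input.toCharArray()) {
            switch (c) {
                case '<':  out.append("&lt;");   break;
                case '>':  out.append("&gt;");   break;
                case '&':  out.append("&amp;");  break;
                case '"':  out.append("&quot;"); break;
                case '\'': out.append("&#39;");  break;
                default:   out.append(c);
            }
        }
        return out.toString();
    }
}
```

The point of the exercise: encoding at output time neutralizes injection regardless of where the data came from, which is why it sits on the coding-phase checklist rather than being bolted on at testing time.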
SDLC Stages and the Security Scaffold
Jim mapped security activities onto each stage of the lifecycle: security requirements during analysis, data flow diagrams during design, checklists during coding, layered testing, and monitoring plus incident response in operations.
Principles and Tooling
OWASP guidance and threat taxonomies frame the work; static and dynamic analysis tools enforce it. Jim’s core message held throughout: fixing early is economical, triage keeps tooling useful, and a short checklist beats an unread compendium.
Links:
[ScalaDaysNewYork2016] The Zen of Akka: Mastering Asynchronous Design
At Scala Days New York 2016, Konrad Malawski, a key member of the Akka team at Lightbend, delivered a profound exploration of the principles guiding the effective use of Akka, a toolkit for building concurrent and distributed systems. Konrad’s presentation, inspired by the philosophical lens of “The Tao of Programming,” offered practical insights into designing applications with Akka, emphasizing the shift from synchronous to asynchronous paradigms to achieve robust, scalable architectures.
Embracing the Messaging Paradigm
Konrad Malawski began by underscoring the centrality of messaging in Akka’s actor model. Drawing from Alan Kay’s vision of object-oriented programming, Konrad explained that actors encapsulate state and communicate solely through messages, mirroring real-world computing interactions. This approach fosters loose coupling, both spatially and temporally, allowing components to operate independently. A single actor, Konrad noted, is limited in utility, but when multiple actors collaborate—such as delegating tasks to specialized actors like a “yellow specialist”—powerful patterns like worker pools and sharding emerge. These patterns enable efficient workload distribution, aligning perfectly with the distributed nature of modern systems.
Structuring Actor Systems for Clarity
A common pitfall for newcomers to Akka, Konrad observed, is creating unstructured systems with actors communicating chaotically. To counter this, he advocated for hierarchical actor systems using context.actorOf to spawn child actors, ensuring a clear supervisory structure. This hierarchy not only organizes actors but also enhances fault tolerance through supervision, where parent actors manage failures of their children. Konrad cautioned against actor selection—directly addressing actors by path—as it leads to brittle designs akin to “stealing a TV from a stranger’s house.” Instead, actors should be introduced through proper references, fostering maintainable and predictable interactions.
Balancing Power and Constraints
Konrad emphasized the philosophy of “constraints liberate, liberties constrain,” a principle echoed across Scala conferences. Akka actors, being highly flexible, can perform a wide range of tasks, but this power can overwhelm developers. He contrasted actors with more constrained abstractions like futures, which handle single values, and Akka Streams, which enforce a static data flow. These constraints enable optimizations, such as transparent backpressure in streams, which are harder to implement in the dynamic actor model. However, actors excel in distributed settings, where messaging simplifies scaling across nodes, making Akka a versatile choice for complex systems.
Community and Future Directions
Konrad highlighted the vibrant Akka community, encouraging contributions through platforms like GitHub and Gitter. He noted ongoing developments, such as Akka Typed, an experimental API that enhances type safety in actor interactions. By sharing resources like the Reactive Streams TCK and community-driven initiatives, Konrad underscored Lightbend’s commitment to evolving Akka collaboratively. His call to action was clear: engage with the community, experiment with new features, and contribute to shaping Akka’s future, ensuring it remains a cornerstone of reactive programming.
Links:
[DevoxxUS2017] 55 New Features in JDK 9: A Comprehensive Overview
At DevoxxUS2017, Simon Ritter, Deputy CTO at Azul Systems, delivered a detailed exploration of the 55 new features in JDK 9, with a particular focus on modularity through Project Jigsaw. Simon, a veteran Java evangelist, provided a whirlwind tour of the enhancements, categorizing them into features, standards, JVM internals, specialized updates, and housekeeping changes. His presentation equipped developers with the knowledge to leverage JDK 9’s advancements effectively. This post examines the key themes of Simon’s talk, highlighting how these features enhance Java’s flexibility, performance, and maintainability.
Modularity and Project Jigsaw
The cornerstone of JDK 9 is Project Jigsaw, which introduces modularity to the Java platform. Simon explained that the traditional rt.jar file, containing over 4,500 classes, has been replaced with 94 modular components in the jmods directory. This restructuring encapsulates private APIs, such as sun.misc.Unsafe, to improve security and maintainability, though it poses compatibility challenges for libraries relying on these APIs. To mitigate this, Simon highlighted options like the --add-exports and --add-opens flags, as well as a “big kill switch” (--permit-illegal-access) to disable modularity for legacy applications. The jlink tool further enhances modularity by creating custom runtimes with only the necessary modules, optimizing deployment for specific applications.
Enhanced APIs and Developer Productivity
JDK 9 introduces several API improvements to streamline development. Simon showcased factory methods for collections, allowing developers to create immutable collections with concise syntax, such as List.of() or Set.of(). The Streams API has been enhanced with methods like takeWhile, dropWhile, and ofNullable, improving expressiveness in data processing. Additionally, the introduction of jshell, an interactive REPL, enables rapid prototyping and experimentation. These enhancements reduce boilerplate code and enhance developer productivity, making Java more intuitive and efficient for modern application development.
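A short sketch of these JDK 9 APIs in action (the class and method names below are illustrative):

```java
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class Jdk9Apis {
    // Factory methods build compact, immutable collections
    static final List<Integer> NUMS = List.of(1, 2, 3, 4, 5);
    static final Set<String> COLORS = Set.of("red", "green", "blue");

    // takeWhile keeps elements until the predicate first fails
    static List<Integer> prefixUnder(int limit) {
        return NUMS.stream().takeWhile(n -> n < limit).collect(Collectors.toList());
    }

    // dropWhile discards that same prefix and keeps the rest
    static List<Integer> suffixFrom(int limit) {
        return NUMS.stream().dropWhile(n -> n < limit).collect(Collectors.toList());
    }

    // ofNullable yields an empty stream for null instead of throwing
    static long countOfNullable(Integer maybe) {
        return Stream.ofNullable(maybe).count();
    }

    public static void main(String[] args) {
        System.out.println(prefixUnder(4));        // [1, 2, 3]
        System.out.println(suffixFrom(4));         // [4, 5]
        System.out.println(countOfNullable(null)); // 0
    }
}
```

Note that the `List.of`/`Set.of` collections are immutable; calling `add` on them throws `UnsupportedOperationException`, which is part of the appeal for defensive APIs.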
JVM Internals and Performance
Simon delved into JVM enhancements, including improvements to the G1 garbage collector, which is now the default in JDK 9. The G1 collector offers better performance for large heaps, addressing limitations of the Concurrent Mark Sweep collector. Other internal improvements include a new process API for accessing operating system process details and a directive file for controlling JIT compiler behavior. These changes enhance runtime efficiency and provide developers with greater control over JVM performance, ensuring Java remains competitive for high-performance applications.
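The new process API can be sketched briefly (illustrative class name; requires JDK 9 or later, and the available process details vary by operating system):

```java
public class ProcessInfo {
    public static void main(String[] args) {
        // JDK 9's ProcessHandle exposes OS process details without native code
        ProcessHandle self = ProcessHandle.current();
        long pid = self.pid();
        // Details are returned as Optionals because the OS may withhold them
        String cmd = self.info().command().orElse("unknown");
        System.out.println("pid=" + pid + " command=" + cmd);
    }
}
```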
Housekeeping and Deprecations
JDK 9 includes significant housekeeping changes to streamline the platform. Simon highlighted the new version string format, adopting semantic versioning (major.minor.security.patch) for clearer identification. The directory structure has been flattened, eliminating the JRE subdirectory and tools.jar, with configuration files centralized in the conf directory. Deprecated APIs, such as the applet API and certain garbage collection options, have been removed to reduce maintenance overhead. These changes simplify the JDK’s structure, improving maintainability while requiring developers to test applications for compatibility.
Standards and Specialized Features
Simon also covered updates to standards and specialized features. The HTTP/2 client, introduced as an incubator module, allows developers to test and provide feedback before it becomes standard. Other standards updates include support for Unicode 8.0 and the deprecation of SHA-1 certificates for enhanced security. Specialized features, such as the annotations pipeline and parser API, improve the handling of complex annotations and programmatic interactions with the compiler. These updates ensure Java aligns with modern standards while offering flexibility for specialized use cases.
Links:
[ScalaDaysNewYork2016] Monitoring Reactive Applications: New Approaches for a New Paradigm
Reactive applications, built on event-driven and asynchronous foundations, require innovative monitoring strategies. At Scala Days New York 2016, Duncan DeVore and Henrik Engström, both from Lightbend, explored the challenges and solutions for monitoring such systems. They discussed how traditional monitoring falls short for reactive architectures and introduced Lightbend’s approach to addressing these challenges, emphasizing adaptability and precision in observing distributed systems.
The Shift from Traditional Monitoring
Duncan and Henrik began by outlining the limitations of traditional monitoring, which relies on stack traces in synchronous systems to diagnose issues. In reactive applications, built with frameworks like Akka and Play, the asynchronous, message-driven nature disrupts this model. Stack traces lose relevance, as actors communicate without a direct call stack. The speakers categorized monitoring into business process, functional, and technical types, highlighting the need to track metrics like actor counts, message flows, and system performance in distributed environments.
The Impact of Distributed Systems
The rise of the internet and cloud computing has transformed system design, as Duncan explained. Distributed computing, pioneered by initiatives like ARPANET, and the economic advantages of cloud platforms have enabled businesses to scale rapidly. However, this shift introduces complexities, such as network partitions and variable workloads, necessitating new monitoring approaches. Henrik noted that reactive systems, designed for scalability and resilience, require tools that can handle dynamic data flows and provide insights into system behavior without relying on traditional metrics.
Challenges in Monitoring Reactive Systems
Henrik detailed the difficulties of monitoring asynchronous systems, where data flows through push or pull models. In push-based systems, monitoring tools must handle high data volumes, risking overload, while pull-based systems allow selective querying for efficiency. The speakers emphasized anomaly detection over static thresholds, as thresholds are hard to calibrate and may miss nuanced issues. Anomaly detection, exemplified by tools like Prometheus, identifies unusual patterns by correlating metrics, reducing false alerts and enhancing system understanding.
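The contrast between a static threshold and adaptive anomaly detection can be illustrated with a toy rolling z-score detector (a hedged sketch in Python, not Lightbend's implementation; the function name, window size, and data are invented for illustration):

```python
import statistics

def zscore_anomalies(series, window=10, threshold=3.0):
    """Flag points that deviate sharply from the recent rolling baseline.

    Unlike a static threshold, the baseline adapts to the metric's own
    recent history, so normal day-to-day variation is not flagged.
    """
    anomalies = []
    for i in range(window, len(series)):
        recent = series[i - window:i]
        mean = statistics.mean(recent)
        stdev = statistics.pstdev(recent) or 1e-9  # avoid division by zero
        z = abs(series[i] - mean) / stdev
        if z > threshold:
            anomalies.append(i)
    return anomalies

# A steady latency metric with one sudden spike at index 15
latencies = [100, 102, 99, 101, 100, 98, 103, 100, 101, 99,
             100, 102, 101, 99, 100, 450, 101, 100, 99, 102]
print(zscore_anomalies(latencies))  # -> [15]
```

A fixed threshold of, say, 200 ms would catch this spike too, but would either mis-fire or stay silent as the service's normal baseline drifts, which is the calibration problem the speakers described.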
Lightbend’s Monitoring Solution
Duncan and Henrik introduced Lightbend Monitoring, a subscription-based tool tailored for reactive applications. It integrates with Akka actors and Lagom circuit breakers, generating metrics and traces for backends like StatsD and Telegraf. The solution supports pull-based monitoring, allowing selective data collection to manage high data volumes. Future enhancements include support for distributed tracing, Prometheus integration, and improved Lagom compatibility, aiming to provide a comprehensive view of system health and performance.