
[KotlinConf2017] Introduction to Coroutines

Lecturer

Roman Elizarov is a distinguished software developer with over 16 years of experience, currently serving as a senior engineer at JetBrains, where he has been a key contributor to Kotlin’s development. Previously, Roman worked at Devexperts, designing high-performance trading software and market data delivery systems capable of processing millions of events per second. His expertise in Java, JVM, and real-time data processing has shaped Kotlin’s coroutine framework, making him a leading authority on asynchronous programming. Roman’s contributions to Kotlin’s open-source ecosystem and his focus on performance optimizations underscore his impact on modern software development.

Abstract

Kotlin’s introduction of coroutines as a first-class language feature addresses the challenges of asynchronous programming in modern applications. This article analyzes Roman Elizarov’s presentation at KotlinConf 2017, which provides a comprehensive introduction to Kotlin coroutines, distinguishing them from thread-based concurrency and other coroutine implementations like C#’s async/await. It explores the context of asynchronous programming, the methodology behind coroutines, their practical applications, and their implications for scalable software development. The analysis highlights how coroutines simplify asynchronous code, enhance scalability, and integrate with existing Java libraries, offering a robust solution for handling concurrent tasks.

Context of Asynchronous Programming

The rise of asynchronous programming reflects the demands of modern applications, from real-time mobile interfaces to server-side systems handling thousands of users. Roman Elizarov, speaking at KotlinConf 2017 in San Francisco, addressed this shift, noting the limitations of traditional thread-based concurrency in monolithic applications. Threads, while effective for certain tasks, introduce complexity and resource overhead, particularly in high-concurrency scenarios like microservices or real-time data processing. Kotlin, designed by JetBrains for JVM interoperability, offers a pragmatic alternative through coroutines, a first-class language feature distinct from other implementations like Quasar or JavaFlow.

Roman contextualized coroutines within Kotlin’s ecosystem, emphasizing their role in simplifying asynchronous programming. Unlike callback-based approaches, which lead to “callback hell,” or reactive streams, which require complex chaining, coroutines enable synchronous-like code that is both readable and scalable. The presentation’s focus on live examples demonstrated Kotlin’s ability to handle concurrent actions—such as user connections, animations, or server requests—while maintaining performance and developer productivity, setting the stage for a deeper exploration of their mechanics.

Methodology of Kotlin Coroutines

Roman’s presentation detailed the mechanics of Kotlin coroutines, focusing on their core components: suspending functions and coroutine builders. Suspending functions allow developers to write asynchronous code that appears synchronous, pausing execution without blocking threads. This is achieved through Kotlin’s compiler, which transforms suspending functions into state machines, preserving execution state without the overhead of thread context switching. Roman demonstrated launching coroutines using builders like launch and async, which initiate concurrent tasks and allow waiting for their completion, streamlining complex workflows.
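Roman's live-coded examples are not reproduced in this summary, but a minimal sketch of suspending functions and the launch/async builders, using the kotlinx.coroutines library, might look like this (fetchValue is an illustrative function, not taken from the talk):

```kotlin
import kotlinx.coroutines.*

// A suspending function: reads sequentially, but suspends without blocking a thread.
suspend fun fetchValue(id: Int): Int {
    delay(100)  // suspends the coroutine; the underlying thread is freed
    return id * 2
}

fun main() = runBlocking {
    // `async` starts a concurrent task and returns a Deferred we can await.
    val a = async { fetchValue(1) }
    val b = async { fetchValue(2) }
    println(a.await() + b.await())  // prints 6; both fetches ran concurrently
}
```

The compiler turns `fetchValue` into a state machine behind the scenes, which is why the code can pause at `delay` and resume later without any visible callback.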

A key aspect of the methodology is wrapping existing Java asynchronous libraries into suspending functions. Roman showcased how developers can encapsulate callback-based APIs, such as those for network requests or database queries, into coroutines, transforming convoluted code into clear, sequential logic. The open-source Kotlinx Coroutines library, actively developed on GitHub, provides these tools, with experimental status indicating ongoing refinement. Roman emphasized backward compatibility, ensuring that even experimental features remain production-ready, encouraging developers to adopt coroutines with confidence.
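A hedged sketch of this wrapping pattern, using the standard library's suspendCoroutine; LegacyClient and its callback API are hypothetical stand-ins for a callback-based Java library:

```kotlin
import kotlin.coroutines.resume
import kotlin.coroutines.suspendCoroutine
import kotlinx.coroutines.runBlocking

// A hypothetical callback-based, Java-style API.
fun interface Callback { fun onResult(value: String) }

class LegacyClient {
    fun request(url: String, callback: Callback) {
        // Simulated response; a real client would invoke this asynchronously.
        callback.onResult("response from $url")
    }
}

// Wrapping the callback API into a suspending function: the coroutine suspends
// until the callback fires, then resumes with the result.
suspend fun LegacyClient.requestSuspending(url: String): String =
    suspendCoroutine { cont ->
        request(url) { value -> cont.resume(value) }
    }

fun main() = runBlocking {
    val client = LegacyClient()
    println(client.requestSuspending("example.com"))  // sequential-looking call
}
```

The same wrapper shape applies to any one-shot callback API; error paths would use `resumeWithException` in the failure callback.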

Applications and Scalability

The practical applications of coroutines, as demonstrated by Roman, span mobile, server-side, and real-time systems. In mobile applications, coroutines simplify UI updates and background tasks, ensuring responsive interfaces without blocking the main thread. On the server side, coroutines enable handling thousands of concurrent connections, critical for microservices and high-throughput systems like trading platforms. Roman’s live examples illustrated how coroutines manage multiple tasks—such as animations or user sessions—efficiently, leveraging lightweight state management to scale beyond traditional threading models.

The scalability of coroutines stems from their thread-agnostic design. Unlike threads, which consume significant resources, coroutines operate within a single thread, resuming execution as needed. Roman explained that garbage collection handles coroutine state naturally, maintaining references to suspended computations without additional overhead. This approach makes coroutines ideal for high-concurrency scenarios, where traditional threads would lead to performance bottlenecks. The ability to integrate with Java libraries further enhances their applicability, allowing developers to modernize legacy systems without extensive refactoring.
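The lightweight nature of coroutines can be sketched as follows: launching 100,000 coroutines is routine, whereas the same number of OS threads would exhaust memory (assuming kotlinx.coroutines; the count is illustrative):

```kotlin
import kotlinx.coroutines.*

fun main() = runBlocking {
    // 100,000 coroutines: each is a small heap object, not an OS thread.
    val jobs = List(100_000) {
        launch {
            delay(100)  // suspends; no thread is blocked while waiting
        }
    }
    jobs.forEach { it.join() }
    println("done: ${jobs.size}")  // prints "done: 100000"
}
```

While a coroutine is suspended, its state is just an object reachable by the garbage collector, which is why holding huge numbers of them in flight costs so little.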

Implications for Software Development

Kotlin coroutines represent a paradigm shift in asynchronous programming, offering a balance of simplicity and power. By eliminating callback complexity, they enhance code readability, reducing maintenance costs and improving developer productivity. Roman’s emphasis on production-readiness and backward compatibility reassures enterprises adopting Kotlin for critical systems. The experimental status of coroutines, coupled with JetBrains’ commitment to incorporating community feedback, fosters a collaborative development model, ensuring that coroutines evolve to meet real-world needs.

The implications extend beyond individual projects to the broader software ecosystem. Coroutines enable developers to build scalable, responsive applications, from mobile apps to high-performance servers, without sacrificing code clarity. Their integration with Java libraries bridges the gap between legacy and modern systems, making Kotlin a versatile choice for diverse use cases. Roman’s invitation for community contributions via GitHub underscores the potential for coroutines to shape the future of asynchronous programming, influencing both Kotlin’s development and the JVM ecosystem at large.

Conclusion

Roman Elizarov’s introduction to Kotlin coroutines at KotlinConf 2017 provided a compelling vision for asynchronous programming. By combining suspending functions, coroutine builders, and seamless Java interoperability, coroutines address the challenges of modern concurrency with elegance and efficiency. The methodology’s focus on simplicity and scalability empowers developers to create robust, high-performance applications. As Kotlin continues to evolve, coroutines remain a cornerstone of its innovation, offering a transformative approach to handling concurrent tasks and reinforcing Kotlin’s position as a leading programming language.

Links

[DevoxxUS2017] New Computer Architectures: Explore Quantum Computers & SyNAPSE Neuromorphic Chips by Peter Waggett

At DevoxxUS2017, Dr. Peter Waggett, Director of IBM’s Emerging Technology group at the Hursley Laboratory, delivered a thought-provoking session on next-generation computer architectures, focusing on quantum computers and IBM’s TrueNorth neuromorphic chip. With a background in radio astronomy and extensive research in cognitive computing, Peter explored how these technologies address the growing demand for processing power in a smarter, interconnected world. This post delves into the core themes of Peter’s presentation, highlighting the potential of these innovative architectures.

Quantum Computing: A New Frontier

Peter Waggett introduced quantum computing, explaining its potential to solve complex problems beyond the reach of classical systems. He described how quantum computers manipulate atomic spins using MRI-like systems, leveraging quantum entanglement and superposition. Drawing from his work at IBM, Peter highlighted ongoing research to make quantum computing accessible, emphasizing its role in advancing fields like cryptography and material science, despite challenges like helium shortages impacting hardware.

TrueNorth: Brain-Inspired Computing

Delving into neuromorphic computing, Peter showcased IBM’s TrueNorth chip, a brain-inspired architecture with 1 million neurons and 256 million synapses, consuming just 73mW. Unlike traditional processors, TrueNorth challenges conventions like exact data representation and synchronicity, enabling low-power sensory perception for IoT and mobile applications. Peter’s examples illustrated TrueNorth’s scalability, positioning it as a cornerstone of IBM’s cognitive hardware ecosystem for transformative applications.

Addressing Scalability and Efficiency

Peter discussed the scalability of new architectures, comparing TrueNorth’s energy efficiency to traditional compute fabrics. He highlighted how neuromorphic chips optimize for error tolerance and energy-frequency trade-offs, ideal for IoT’s sensory demands. His insights, grounded in IBM’s client-focused projects, underscored the need for innovative designs to meet the computational needs of a connected planet, from smart cities to autonomous devices.

Building a Developer Community

Concluding, Peter emphasized the importance of fostering a developer community to advance these technologies. He encouraged collaboration through IBM’s research initiatives, noting the need for skilled engineers to tackle challenges like helium scarcity and system design. Peter’s vision for accessible platforms, inspired by his radio astronomy background, invited developers to explore quantum and neuromorphic computing, driving innovation in cognitive systems.

Links:

[DevoxxUS2017] Creating a Connected Home by Kevin and Andy Nilson

At DevoxxUS2017, Kevin Nilson, a Java Champion and lead of the Chromecast Technical Solutions Engineer team at Google, joined forces with his 12-year-old son, Andy Nilson, to present a captivating live coding demo on building a connected home. Their session showcased how voice and mobile controls can interact with smart devices, leveraging platforms like Google Home. Kevin and Andy’s collaborative approach highlighted the accessibility of IoT development, blending technical expertise with educational outreach. This post examines the key themes of their presentation, emphasizing the fusion of innovation and learning.

Building a Smart Home Ecosystem

Kevin Nilson and Andy Nilson began by demonstrating a connected home setup, where lights, fans, and music systems respond to voice commands via Google Home. Kevin explained the architecture, integrating devices like Philips Hue and Nest thermostats through APIs. Andy, showcasing his coding skills, contributed to the demo by writing scripts to control devices, illustrating how accessible IoT programming can be, even for young developers. Their work reflected Google’s commitment to seamless smart home integration.

Voice Control and Device Integration

The duo delved into voice-activated controls, showing how Google Home processes commands like “turn on the lights.” Kevin highlighted the use of OAuth for secure device linking, ensuring commands are tied to user accounts. Andy demonstrated triggering actions, such as activating a fan, by coding simple integrations. Their live demo, despite network challenges, showcased practical IoT applications, emphasizing ease of use and real-time interaction with smart devices.

Inspiring the Next Generation

Kevin and Andy emphasized the educational potential of their project, drawing from their involvement in Devoxx4Kids and JavaOne Kids Day. Andy’s participation, rooted in his experience coding since childhood, inspired attendees to engage young learners in technology. Kevin shared resources for learning IoT, recommending starting with specific problems and exploring community solutions, such as hackathon projects like the Febreze air freshener integration, to spark creativity.

Fostering Community and Collaboration

Concluding, Kevin encouraged developers to explore IoT through open-source communities and hackathons, sharing his experience as a Silicon Valley JUG leader. Andy’s enthusiasm for coding underscored the session’s goal of making technology accessible. Their call to action invited attendees to contribute to smart home projects, leveraging platforms like Google Home to build innovative, user-friendly solutions for connected living.

Links:

[DevoxxFR 2017] Why Your Company Should Store All Its Code in a Single Repo

How an organization structures its source code repositories is more than an implementation detail; it is an architectural choice with broad consequences for workflow efficiency, team collaboration, code sharing and reuse, and the entire delivery pipeline, including Continuous Integration and deployment. In recent years, driven by microservices architectures and distributed teams, the prevailing trend has been to split code across many independent repositories (the multi-repo approach), typically one per application, service, or even library. Yet, as Thierry Abaléa highlighted in his concise and provocative talk at DevoxxFR 2017, some of the most innovative and productive technology companies in the world, including Google, Facebook, and Twitter, keep their vast codebases in a single, unified repository, a practice known as a monorepo. This divergence prompted the central question of his presentation: what advantages drive these organizations to maintain a monorepo despite its perceived complexity, and are those benefits transferable to other organizations, whatever their size, industry, or current architecture?

Thierry began by acknowledging the intuitive appeal of the multi-repo model, in which the code layout naturally mirrors team structure or the decomposition of applications into services, and he conceded that for small projects or young organizations it can seem straightforward. He contrasted this with the monorepo approach of the tech giants, arguing that the initial simplicity of many small repositories erodes quickly as the number of services, libraries, and teams grows. Managing dependencies across hundreds or thousands of repositories, coordinating changes that span service boundaries, and keeping tooling, builds, and deployment pipelines consistent across a fragmented codebase become increasingly time-consuming and error-prone at scale.

Unpacking the Compelling and Often Underestimated Advantages of the Monorepo

Thierry articulated several compelling, often underestimated benefits of a well-managed monorepo. The foremost is the ease of code sharing and reuse across projects and teams: with all code in one place, developers can readily discover and incorporate libraries, components, or utilities built elsewhere in the company, without adding external dependencies or hunting through multiple repositories. This discoverability fosters consistent tooling and practices, reduces duplicated effort, and promotes a shared internal ecosystem of reusable, high-quality code.

A monorepo also eases cross-team collaboration and large-scale refactorings that span multiple components or services. Changes affecting several parts of the system can often be made atomically in a single commit, simplifying coordination and avoiding much of the version-compatibility "dependency hell" that plagues multi-repo setups.

Version management is likewise simpler: a monorepo has a single version of the entire codebase at any point in time, eliminating the work of tracking and synchronizing versions across many repositories and preventing conflicts between incompatible library versions.

Finally, Thierry argued that a monorepo enables a more effective cross-application Continuous Integration pipeline. A commit can automatically trigger builds and tests for every affected downstream component, application, and service, so interactions between different parts of the system are exercised before changes merge into the main development line, giving higher confidence in the stability and correctness of the whole.

Addressing Practical Considerations, Challenges, and Potential Drawbacks

While making a strong case for monorepos, Thierry also presented a balanced view of their challenges. Scaling the underlying tooling (version control systems such as Git or Mercurial, build systems such as Bazel or Pants, and CI infrastructure) to a repository with millions of lines of code and thousands of developers demands significant investment in infrastructure and specialized expertise. Google, Facebook, and Microsoft have built sophisticated custom solutions or heavily extended existing open-source tools to keep their enormous repositories performant. Thierry noted that contributions from these companies back to open-source projects like Git and Mercurial are gradually making monorepo tooling more accessible to other organizations.

He also pointed out that a monorepo demands a mature engineering culture of transparency, trust, and cross-team communication; if teams work in silos with little awareness of work happening elsewhere in the codebase, a monorepo can amplify breaking changes and conflicts rather than prevent them. A full, immediate "big bang" switch is rarely advisable: a phased approach, perhaps starting with new projects, consolidating code within one department or domain, or gradually merging related services, lets an organization build the necessary tooling, processes, and cultural practices over time with lower risk. The talk offered a nuanced perspective, encouraging organizations to weigh the real benefits for collaboration, code sharing, and CI efficiency against the required investment in tooling and infrastructure and, critically, the importance of a collaborative, transparent engineering culture.

Hashtags: #Monorepo #CodeOrganization #EngineeringPractices #ThierryAbalea #SoftwareArchitecture #VersionControl #ContinuousIntegration #Collaboration #Google #Facebook #Twitter #DeveloperProductivity

[DevoxxUS2017] Next Level Spring Boot Tooling by Martin Lippert

At DevoxxUS2017, Martin Lippert, a pivotal figure at Pivotal and co-lead of the Spring Tool Suite, delivered an engaging presentation on advanced tooling for Spring Boot development within the Eclipse IDE. With a rich background in crafting developer tools, Martin showcased how recent updates to Spring IDE and Spring Tool Suite streamline microservice development, particularly for Spring Boot and Cloud Foundry. His live demos and coding sessions highlighted features that enhance productivity and transform the IDE into a hub for cloud-native development. This post explores the key themes of Martin’s presentation, offering insights into optimizing Spring Boot workflows.

Streamlining Spring Boot Development

Martin Lippert opened by demonstrating the ease of initiating Spring Boot projects within Eclipse, leveraging the Spring Tool Suite. He showcased how developers can quickly scaffold applications using Spring Initializr integration, simplifying setup for microservices. Martin’s live demo illustrated generating a project with minimal configuration, emphasizing how these tools reduce boilerplate code and accelerate development cycles, aligning with Pivotal’s mission to empower developers with efficient workflows.

Advanced Configuration Management

Delving into configuration, Martin highlighted enhanced support for Spring Boot properties in YAML and property files. Features like content-assist, validation, and hover help simplify managing complex configurations, crucial for microservices. He demonstrated real-time synchronization between local projects and Cloud Foundry manifests, showcasing how the Spring Boot dashboard detects and merges configuration changes. These capabilities, Martin noted, ensure consistency across development and deployment environments, enhancing reliability in cloud-native applications.

Spring Boot Dashboard and Cloud Integration

A centerpiece of Martin’s talk was the Spring Boot dashboard, a powerful tool for managing multiple microservice projects. He showcased its ability to monitor, start, and stop services within the IDE, streamlining workflows for developers handling distributed systems. Martin also explored advanced editing of Cloud Foundry manifest files, illustrating seamless integration with cloud runtimes. His insights, drawn from Pivotal’s expertise, underscored the dashboard’s role in transforming Eclipse into a microservice development powerhouse.

Links:

[ScalaDaysNewYork2016] Perfect Scalability: Architecting Limitless Systems

Michael Nash, co-author of Applied Akka Patterns, delivered an insightful exploration of scalability at Scala Days New York 2016, distinguishing it from performance and outlining strategies to achieve near-linear scalability using the Lightbend ecosystem. Michael’s presentation delved into architectural principles, real-world patterns, and tools that enable systems to handle increasing loads without failure.

Scalability vs. Performance

Michael Nash clarified that scalability is the ability to handle greater loads without breaking, distinct from performance, which focuses on processing the same load faster. Using a simple graph, Michael illustrated how performance improvements shift response times downward, while scalability extends the system’s capacity to handle more requests. He cautioned that poorly designed systems hit scalability limits, leading to errors or degraded performance, emphasizing the need for architectures that avoid these bottlenecks.

Avoiding Scalability Pitfalls

Michael identified key enemies of scalability, such as shared databases, synchronous communication, and sequential IDs. He advocated for denormalized, isolated data stores per microservice, using event sourcing and CQRS to decouple systems. For instance, an inventory service can update based on events from a customer service without direct database access, enhancing scalability. Michael also warned against overusing Akka cluster sharding, which introduces overhead, recommending it only when consistency is critical.
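As a rough illustration of this decoupling (sketched here in Kotlin rather than the Scala of the talk, with illustrative names), an inventory read model can be updated purely from events published by another service, never touching that service's database directly:

```kotlin
// Hedged sketch of an event-sourced read model; Event and ItemReserved
// are hypothetical names, not from Michael's talk.
sealed interface Event
data class ItemReserved(val sku: String, val qty: Int) : Event

class InventoryProjection {
    private val reserved = mutableMapOf<String, Int>()

    // Applying events is the only input path: the projection is fully
    // decoupled from the producing service's data store.
    fun apply(event: Event) {
        when (event) {
            is ItemReserved -> reserved.merge(event.sku, event.qty, Int::plus)
        }
    }

    fun reservedFor(sku: String): Int = reserved[sku] ?: 0
}

fun main() {
    val projection = InventoryProjection()
    projection.apply(ItemReserved("widget", 2))
    projection.apply(ItemReserved("widget", 3))
    println(projection.reservedFor("widget"))  // prints 5
}
```

Because the projection owns its own denormalized state, it can be rebuilt at any time by replaying the event stream, which is what makes this style scale independently of the event producer.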

Leveraging the Lightbend Ecosystem

The Lightbend ecosystem, including Scala, Akka, and Spark, provides robust tools for scalability, Michael explained. Akka’s actor model supports asynchronous messaging, ideal for distributed systems, while Spark handles large-scale data processing. Tools like Docker, Mesos, and Lightbend’s ConductR streamline deployment and orchestration, enabling rolling upgrades without downtime. Michael emphasized integrating these tools with continuous delivery and deep monitoring to maintain system health under high loads.

Real-World Applications and DevOps

Michael shared case studies from IoT wearables to high-finance systems, highlighting common patterns like event-driven architectures and microservices. He stressed the importance of DevOps in scalable systems, advocating for automated deployment pipelines and monitoring to detect issues early. By embracing failure as inevitable and designing for resilience, systems can scale across data centers, as seen in continent-spanning applications. Michael’s practical advice included starting deployment planning early to avoid scalability bottlenecks.

Links:

[KotlinConf2017] Opening Keynote

Lecturer

The Opening Keynote of KotlinConf 2017 features Maxim Shafirov, Andrey Breslav, Dmitry Jemerov, and Stephanie Cuthbertson. Maxim Shafirov, CEO of JetBrains, has led the company’s efforts in developing innovative developer tools, including IntelliJ IDEA and Kotlin. Andrey Breslav, Kotlin’s lead designer, brings a deep understanding of language design, focusing on pragmatic solutions for JVM developers. Dmitry Jemerov, a senior developer at JetBrains, contributes technical expertise to Kotlin’s development. Stephanie Cuthbertson, involved with Android’s adoption of Kotlin, offers insights into its mobile ecosystem impact. Their collective leadership has driven Kotlin’s growth as a modern programming language.

Abstract

The Opening Keynote of KotlinConf 2017, delivered in San Francisco from November 1–3, 2017, set the tone for the inaugural Kotlin conference. This article examines the keynote’s exploration of Kotlin’s rapid rise, its strategic vision, and its impact on the developer community. Led by JetBrains’ leadership, the keynote highlighted Kotlin’s adoption in Android, its multiplatform ambitions, and the collaborative efforts driving its ecosystem. The analysis delves into the context of Kotlin’s emergence, the technical and community-driven advancements presented, and the implications for its future in software development.

Context of Kotlin’s Emergence

KotlinConf 2017 marked a significant milestone as the first conference dedicated to Kotlin, a language developed by JetBrains to enhance Java’s capabilities while ensuring seamless JVM interoperability. Held in San Francisco, the event attracted 1,200 attendees and sold out, reflecting Kotlin’s growing popularity. The keynote, led by Maxim Shafirov, emphasized the language’s recent endorsement by Google as a first-class language for Android development, a pivotal moment that accelerated its adoption. With 150 talk submissions from 110 speakers, the conference required an additional track, underscoring the community’s enthusiasm and the language’s broad appeal.

The keynote contextualized Kotlin’s rise within the evolving landscape of software development, where developers sought modern, concise languages to address Java’s verbosity and complexity. Maxim and Andrey highlighted Kotlin’s design philosophy, focusing on readability, type safety, and ease of adoption. The event’s organization, supported by Trifork and a program committee, ensured a diverse range of topics, from Android development to server-side applications, reflecting Kotlin’s versatility and the community’s collaborative spirit.

Technical Advancements and Multiplatform Vision

Andrey Breslav’s segment of the keynote outlined Kotlin’s technical strengths and future directions, particularly its multiplatform capabilities. Kotlin’s ability to simplify functional programming and reduce boilerplate code was a key focus, with the compiler handling complex type inference to enhance developer productivity. The keynote introduced plans for common native libraries, enabling shared code for I/O, networking, and serialization across platforms like iOS and Android. This multiplatform vision aimed to unify development workflows, reducing fragmentation and enabling developers to write platform-agnostic code.
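Kotlin's multiplatform model rests on expect/actual declarations split across common and platform-specific source sets; since those cannot live in a single file, the following runnable sketch uses an interface to stand in for the platform boundary (all names are illustrative):

```kotlin
// In real Kotlin multiplatform code, the common source set would declare
// `expect fun platformName(): String` and each platform would supply an
// `actual` implementation. Here an interface plays that role so the
// sketch compiles as one file.
interface Platform { val name: String }

// Shared, platform-agnostic logic (the "common" code).
fun greeting(platform: Platform): String = "Hello from ${platform.name}"

// A platform-specific implementation (what an `actual` declaration provides).
object JvmPlatform : Platform { override val name = "JVM" }

fun main() {
    println(greeting(JvmPlatform))  // prints "Hello from JVM"
}
```

The shared function never mentions a concrete platform, which is the essence of the keynote's vision: common code for I/O, networking, or serialization, with thin platform-specific layers underneath.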

The keynote also addressed Kotlin’s experimental features, such as coroutines, which were in active development. Andrey emphasized backward compatibility, ensuring that even experimental features would remain stable in production environments. This commitment to reliability, coupled with tools to facilitate migration to finalized designs, reassured developers of Kotlin’s suitability for enterprise applications. The technical advancements presented underscored Kotlin’s potential to bridge diverse development ecosystems, from mobile to native platforms.

Community Engagement and Ecosystem Growth

The keynote highlighted the pivotal role of the Kotlin community in driving the language’s success. Maxim and Dmitry acknowledged the contributions of partners like Gradle and Spring, which enhanced Kotlin’s interoperability with existing tools. The conference provided platforms for engagement, including office hours for bug reporting and voting mechanisms to gather feedback. These initiatives empowered developers to influence Kotlin’s evolution, fostering a sense of ownership within the community.

The keynote also celebrated the social aspects of KotlinConf, with events like the keynote party featuring live music and networking opportunities. These gatherings strengthened community ties, encouraging collaboration among developers, startups, and Fortune 500 companies adopting Kotlin. The emphasis on community-driven growth highlighted Kotlin’s role as a collaborative project, with JetBrains actively seeking feedback to refine features and address pain points, ensuring the language’s relevance and adaptability.

Implications for Software Development

KotlinConf 2017’s keynote underscored Kotlin’s transformative potential in software development. Its adoption by 17% of Android projects at the time signaled its growing influence in mobile development, where it simplified tasks like UI design and asynchronous programming. The multiplatform vision promised to extend these benefits to iOS and other platforms, reducing development complexity and fostering code reuse. For enterprises, Kotlin’s production-readiness and support for high-quality codebases offered a compelling alternative to Java.

The keynote’s focus on community engagement set a precedent for inclusive development, encouraging contributions from diverse stakeholders. The promise of recorded sessions ensured global accessibility, amplifying the conference’s impact. For the industry, KotlinConf 2017 highlighted the shift toward modern languages that prioritize developer experience, positioning Kotlin as a leader in this transition. The keynote’s strategic vision laid the groundwork for Kotlin’s continued growth, influencing both individual developers and large-scale projects.

Conclusion

The Opening Keynote of KotlinConf 2017 encapsulated the excitement and ambition surrounding Kotlin’s rise as a modern programming language. By highlighting its technical strengths, multiplatform potential, and vibrant community, Maxim, Andrey, Dmitry, and Stephanie positioned Kotlin as a transformative force in software development. The keynote’s emphasis on collaboration, innovation, and developer empowerment underscored Kotlin’s role in shaping the future of programming. As JetBrains continues to evolve Kotlin, the insights from KotlinConf 2017 remain a cornerstone of its journey, inspiring developers to embrace its capabilities.

Links

PostHeaderIcon [DotSecurity2017] Counter-spells and the Art of Keeping Your Application Safe

In application security, where user input can turn hostile at any moment, keeping a web client safe demands constant diligence. Ingrid Epure, a frontend engineer at Intercom, took up this theme at dotSecurity 2017, drawing on her experience hardening a large Ember application. A Romanian engineer based in Dublin, Ingrid brought four years of Ember work, backed by a Rails API, at a company where some 55 engineers ship roughly 2,000 changes and 100 deploys a day.

Ingrid opened with Intercom’s context: a real-time messaging product whose velocity (250 commits and 30,000 added lines in a short span) leaves little room for manual security review. Her first topic was cross-site scripting (XSS). User-supplied content rendered without escaping invites script injection; Ember’s double-brace bindings HTML-escape output by default, so the discipline is to keep custom helpers hygienic and to treat triple braces, which emit raw HTML, as taboo. Where inline scripts cannot be avoided, a Content Security Policy acts as the countermeasure. Tooling rounds out the defense: npm-based dependency auditing and ember-cli checks keep watch for known vulnerabilities.
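The escaping rule can be sketched in a Handlebars-style template; `userBio` is a made-up property standing in for untrusted user input:

```handlebars
{{! userBio is a hypothetical property containing untrusted user input }}
<p>{{userBio}}</p>
{{! double braces HTML-escape the value: <script> becomes &lt;script&gt; }}

<p>{{{userBio}}}</p>
{{! triple braces emit raw HTML, an XSS sink if the value is untrusted }}
```

The safe default is the escaped form; triple braces belong only where the value is known to be sanitized markup.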

She then turned to cross-site request forgery (CSRF). Ember clients paired with a Rails backend rely on CSRF tokens, via either the double-submit cookie pattern or the synchronizer token pattern, with request interceptors in Ember Data attaching the token for Rails to verify. Finally came the Content Security Policy itself: CSP level 2 introduces nonces and hashes to whitelist individual scripts and scrutinize inline code, and level 3 tightens script controls further. Ingrid described enforcing the policy through an Ember addon backed by a Node service that raises alerts on violations and anomalies.
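As an illustration, a CSP level-2 response header using a nonce might look like the following; the nonce value here is invented and must be freshly generated per response:

```
Content-Security-Policy: script-src 'self' 'nonce-d29vZHk='; object-src 'none'
```

Only inline scripts carrying the matching `nonce` attribute execute; everything else is blocked, and a `report-uri` directive can route violation reports to an alerting service.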

Her closing message: clean code practices plus the right tooling, with CSP as a backstop, keep vulnerabilities in check.

Vulnerabilities and Defenses in Ember

Ingrid grounded the talk in Intercom’s Ember codebase, where XSS is the chief threat: keep helpers hygienic, rely on escaped bindings, and avoid raw triple-brace output.

CSRF Defenses and the Content Security Policy

CSRF tokens and request interceptors guard state-changing requests, while CSP level 2 nonces and hashes, plus level 3’s stricter controls, lock down scripts, enforced via an Ember addon and a Node-based alerting service.

Links:


PostHeaderIcon [DevoxxUS2017] Continuous Optimization of Microservices Using Machine Learning by Ramki Ramakrishna

At DevoxxUS2017, Ramki Ramakrishna, a Staff Engineer at Twitter, delivered a compelling session on optimizing microservices performance using machine learning. Collaborating with colleagues, Ramki shared insights from Twitter’s platform engineering efforts, focusing on Bayesian optimization to tune microservices in data centers. His talk addressed the challenges of managing complex workloads and offered a vision for automated optimization. This post explores the key themes of Ramki’s presentation, highlighting innovative approaches to performance tuning.

Challenges of Microservices Performance

Ramki Ramakrishna opened by outlining the difficulties of tuning microservices in data centers, where numerous parameters and workload variations create combinatorial complexity. Drawing from his work with Twitter’s JVM team, he explained how continuous software and hardware upgrades exacerbate performance issues, often leaving resources underutilized. Ramki’s insights set the stage for exploring machine learning as a solution to these challenges.

Bayesian Optimization in Action

Delving into technical details, Ramki introduced Bayesian optimization, a machine learning approach to automate performance tuning. He described its application in Twitter’s microservices, using tools derived from open-source projects like Spearmint. Ramki shared practical examples, demonstrating how Bayesian methods efficiently explore parameter spaces, outperforming manual tuning in scenarios with many variables, ensuring optimal resource utilization.
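As a rough illustration of the setting (not Twitter’s actual system), the loop below tunes two invented service parameters against a synthetic latency function using plain random search; a Bayesian optimizer such as Spearmint would replace the random sampling with a surrogate model that proposes each next candidate more intelligently:

```java
import java.util.Random;

public class RandomSearchTuner {
    // Synthetic stand-in for an expensive benchmark run: latency (ms) as a
    // function of heap size (MB) and worker-thread count. Both the function
    // and the parameter names are invented for illustration.
    static double latency(double heapMb, double threads) {
        return Math.pow(heapMb - 512, 2) / 10_000 + Math.pow(threads - 16, 2);
    }

    public static void main(String[] args) {
        Random rnd = new Random(42); // fixed seed so runs are reproducible
        double bestHeap = 0, bestThreads = 0, bestLatency = Double.MAX_VALUE;
        // Expert-defined ranges, as the talk recommends:
        // heap in [128, 2048] MB, threads in [1, 64].
        for (int i = 0; i < 200; i++) {
            double heap = 128 + rnd.nextDouble() * (2048 - 128);
            double thr = 1 + rnd.nextDouble() * 63;
            double observed = latency(heap, thr); // one "benchmark" evaluation
            if (observed < bestLatency) {
                bestLatency = observed;
                bestHeap = heap;
                bestThreads = thr;
            }
        }
        System.out.printf("best: heap=%.0f MB, threads=%.1f, latency=%.2f ms%n",
                bestHeap, bestThreads, bestLatency);
    }
}
```

The Bayesian variant keeps a model of the latency surface fitted to the points evaluated so far and picks each next candidate by expected improvement, which matters when every evaluation is a costly production benchmark rather than a cheap formula.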

Lessons and Pitfalls

Ramki discussed pitfalls encountered during Twitter’s optimization projects, such as the need for expert-defined parameter ranges to guide machine learning algorithms. He highlighted the importance of collaboration between service owners and engineers to specify tuning constraints. His lessons, drawn from real-world implementations, emphasized balancing automation with human expertise to achieve reliable performance improvements.

Vision for Continuous Optimization

Concluding, Ramki outlined a vision for a continuous optimization service, integrating machine learning into DevOps pipelines. He noted plans to open-source parts of Twitter’s solution, building on frameworks like Spearmint. Ramki’s forward-thinking approach inspired developers to adopt data-driven optimization, ensuring microservices remain efficient amidst evolving data center demands.

Links:

PostHeaderIcon [DevoxxUS2017] Java Puzzlers NG S02: Down the Rabbit Hole by Baruch Sadogursky and Viktor Gamov

At DevoxxUS2017, Baruch Sadogursky and Viktor Gamov, from JFrog and Hazelcast respectively, entertained attendees with a lively exploration of Java 8 and 9 puzzlers. Known for their engaging style, Baruch, a Developer Advocate, and Viktor, a Senior Solution Architect, presented complex coding challenges involving streams, lambdas, and Optionals. Their session combined humor, technical depth, and audience interaction, offering valuable lessons for Java developers. This post examines the key themes of their presentation, highlighting strategies to navigate Java’s intricacies.

Decoding Java 8 Complexities

Baruch Sadogursky and Viktor Gamov kicked off with a series of Java 8 puzzlers, focusing on streams and lambdas. They presented scenarios where seemingly simple code led to unexpected outcomes, such as subtle bugs in stream operations. Baruch emphasized the importance of understanding functional programming nuances, using examples to illustrate common pitfalls. Their interactive approach, with audience participation, made complex concepts accessible and engaging.
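In the spirit of the session (this particular example is ours, not necessarily one of theirs), a classic Java 8 stream puzzler involves primitive arrays and varargs:

```java
import java.util.Arrays;
import java.util.stream.Stream;

public class ArrayStreamPuzzler {
    public static void main(String[] args) {
        int[] nums = {1, 2, 3};
        // Stream.of(T...) sees the int[] as a single Object, so the
        // resulting stream has exactly one element: the array itself.
        System.out.println(Stream.of(nums).count());     // prints 1
        // Arrays.stream(int[]) yields an IntStream of the elements.
        System.out.println(Arrays.stream(nums).count()); // prints 3
    }
}
```

The fix is to reach for `Arrays.stream` (or `IntStream.of`) whenever the input is a primitive array; an `Integer[]` would have behaved as expected with `Stream.of`.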

Navigating Java 9 Features

Transitioning to Java 9, Viktor explored new puzzlers involving modules and CompletableFutures, highlighting how these features introduce fresh challenges. He demonstrated how the module system can lead to compilation errors if misconfigured, urging developers to read documentation carefully. Their examples, drawn from real-world experiences at JFrog and Hazelcast, underscored the need for precision in adopting Java’s evolving features.
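A minimal sketch of the kind of misconfiguration Viktor described: a Java 9 module that uses JDBC must declare its dependency explicitly (the module and package names here are invented):

```java
// module-info.java for a hypothetical com.example.app module
module com.example.app {
    // Without this line, any code in the module importing java.sql.*
    // fails to compile with "package java.sql is not visible".
    requires java.sql;
}
```

Unlike the classpath, the module path makes such omissions a compile-time error rather than a runtime surprise, which is exactly why careful reading of the documentation pays off.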

Tools for Avoiding Pitfalls

Baruch and Viktor stressed the role of tools like IntelliJ IDEA in catching errors early, noting how its inspections highlight potential issues in lambda and stream usage. They advised against overusing complex constructs, advocating for simplicity to avoid “WTF” moments. Their practical tips, grounded in their extensive conference-speaking experience, encouraged developers to leverage IDEs and documentation to write robust code.

Community Engagement and Resources

Concluding with a call to action, Baruch and Viktor invited developers to contribute puzzlers to JFrog’s puzzlers initiative, fostering community-driven learning. They shared resources, including their slide deck and blog posts, encouraging feedback via Twitter. Their enthusiasm for Java’s challenges inspired attendees to dive deeper into the language’s intricacies, embracing both its power and pitfalls.

Links: