
[KotlinConf2023] Java and Kotlin: A Mutual Evolution

At KotlinConf 2023, John Pampuch, Google’s production languages lead, delivered a history lesson on Java and Kotlin’s intertwined journeys. Battling jet lag with humor, John traced nearly three decades of Java and twelve years of Kotlin, emphasizing their complementary strengths. From Java’s robust ecosystem to Kotlin’s pragmatic innovation, the languages have shaped each other, accelerating progress. John’s talk, rooted in his experience since Java’s 1996 debut, explored design goals, feature cross-pollination, and future implications, urging developers to leverage Kotlin’s developer-friendly features while appreciating Java’s stability.

Design Philosophies: Pragmatism Meets Robustness

John opened by contrasting the languages’ origins. Java, launched in 1995, aimed for simplicity, security, and portability, aligning tightly with the JVM and JDK. Its ecosystem, bolstered by libraries and tooling, set a standard for enterprise development. Kotlin, announced in 2011 by JetBrains, prioritized pragmatism: concise syntax, interoperability with Java, and multiplatform flexibility. Unlike Java’s JVM dependency, Kotlin targets iOS, web, and beyond, enabling faster feature rollouts. John noted Kotlin’s design avoids Java’s rigidity, embracing object-oriented principles with practical tweaks like semicolon-free lines. Yet Java’s self-consistency, seen in its holistic lambda integration, complements Kotlin’s adaptability, creating a synergy where both thrive.

Feature Evolution: From Lambdas to Coroutines

The talk highlighted key milestones. Java’s 2014 release of JDK 8 introduced lambdas, default methods, and type inference, transforming APIs to support functional programming. Kotlin, with 1.0 in 2016, brought smart casts, string templates, and named arguments, prioritizing developer ease. By 2018, Kotlin’s coroutines revolutionized JVM asynchronous programming, offering a simpler mental model than Java’s threads. John praised coroutines as a potential game-changer, though Java’s 2023 virtual threads and structured concurrency aim to close the gap. Kotlin’s multiplatform support, cemented by Google’s 2017 Android endorsement, outpaces Java’s JVM-centric approach, but Java’s predictable six-month release cycle since 2017 ensures steady progress. These advancements reflect a race where each language pushes the other forward.

Mutual Influences: Sealed Classes and Beyond

John emphasized cross-pollination. Java’s 2021 records, inspired by frameworks like Lombok, mirror Kotlin’s data classes, though Kotlin’s named parameters reduce boilerplate further. Sealed classes, introduced in Java 17 and Kotlin 1.5 around 2021, emerged concurrently, suggesting shared inspiration. Kotlin’s string templates, a staple since its early days, influenced Java’s 2024 preview of flexible string templates, which John hopes Kotlin might adopt for localization. Java’s exploration of nullability annotations, potentially aligning with Kotlin’s robust null safety, shows ongoing convergence. John speculated that community demand could push Java toward features like named arguments, though JVM changes remain a hurdle. This mutual learning, fueled by competition with languages like Go and Rust, drives excitement and innovation.

Looking Ahead: Pragmatism and Compatibility

John concluded with a call to action: embrace Kotlin’s compact, readable features while valuing Java’s compile-time speed and ecosystem. Kotlin’s faster feature delivery and multiplatform prowess contrast with Java’s backwards compatibility and predictability. Yet both share a commitment to pragmatic evolution, avoiding breaks in millions of applications. Questions from the audience probed Java’s nullability and virtual threads, with John optimistic about eventual alignment but cautious about timelines. His talk underscored that Java and Kotlin’s competition isn’t zero-sum—it’s a catalyst for better tools, ideas, and developer experiences, ensuring both languages remain vital.

Hashtags: #Java #Kotlin

[DevoxxBE2023] REST Next Level: Crafting Domain-Driven Web APIs by Julien Topçu

At Devoxx Belgium 2023, Julien Topçu, a technical coach at Shadow, delivered a compelling session on elevating REST APIs by embedding domain-driven design principles. With a rich background in crafting software using Domain-Driven Design (DDD), Extreme Programming, and Kanban, Julien illuminated the pitfalls of traditional REST implementations and proposed a transformative approach to encapsulate business intent within APIs. His talk, centered around a fictional space travel booking system, demonstrated how to align APIs with user actions, preserve business workflows, and enhance consumer experience through hypermedia controls. Through a blend of theoretical insights and practical demonstrations, Julien showcased a methodology to create APIs that are not only functional but also semantically rich and workflow-driven.

The Pitfalls of Traditional REST APIs

Julien began by highlighting a pervasive issue in software architecture: the loss of business intent when translating domain logic into REST APIs. Typically, business logic resides in the backend to avoid duplication across consumers like web or mobile applications. However, REST’s uniform interface, with its limited vocabulary of CRUD operations (Create, Read, Update, Delete), often distorts this logic. For instance, in a train reservation system, a user’s intent to “search for trains” is reduced to “create a search resource,” stripping away domain-specific semantics like destinations or schedules. This mismatch, Julien argued, stems from REST’s standardized approach, formalized by Roy Fielding in his PhD thesis, which prioritizes simplicity over application-specific needs. As a result, APIs lose expressiveness, forcing consumers to reconstruct business workflows, leading to what Julien termed “accidental complexity of adaptation.”

To illustrate, Julien presented a scenario where a user performs a search for space trains from Earth to the Moon. The traditional REST API translates this into a POST request to create a search resource, devoid of domain context. This not only obscures the user’s intent but also couples consumers to the backend’s implementation, making changes—like switching from “bound” to “journey index” for multi-destination trips—disruptive. Julien’s live demo underscored this fragility: altering a request parameter broke the API, highlighting the risks of tight coupling between consumers and backend models.

Encapsulating Business Intent with Semantic Endpoints

To address these shortcomings, Julien proposed aligning REST endpoints with user actions rather than backend models. Instead of exposing implementation details, such as updating a sub-resource like “selection” within a search, APIs should reflect behaviors like “select a space train with a fare.” This approach involves using classifiers in URLs, such as POST /searches/{id}/spacetrains/{number}/fares/{code}/select, which clearly convey the intent of selecting a fare for a specific train. Julien emphasized that this does not violate REST principles, debunking the myth that verbs in URLs are forbidden. As long as verbs align with HTTP methods (e.g., POST for creating a resource), they enhance semantic clarity without breaking the uniform interface.

This shift decouples consumers from the backend’s internal structure. For example, changing the backend’s data model (e.g., using booleans instead of a selection object) no longer impacts consumers, as the API exposes behaviors rather than state. Julien’s demo further showcased this by demonstrating how a frontend could adapt to backend changes (e.g., from “bound” to “journey index”) without modification, thanks to semantic endpoints. This approach not only preserves business intent but also simplifies consumer logic, reducing the cognitive load of interpreting CRUD-based APIs.
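To make this concrete, here is a minimal sketch of what such a behavior-oriented endpoint could look like with Spring MVC. It is written in Java for consistency with the other examples in this archive (Julien’s demo used Kotlin), and the SearchService and Search types are illustrative placeholders, not his actual code.

```java
// Sketch only: Spring Boot 3 / Java 17+ assumed; names are illustrative.
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RestController;

record Search(String id, String status) {}                 // simplified read model

interface SearchService {                                   // hypothetical domain service
    Search selectFare(String searchId, String trainNumber, String fareCode);
}

@RestController
class FareSelectionController {

    private final SearchService searchService;

    FareSelectionController(SearchService searchService) {
        this.searchService = searchService;
    }

    // The URL expresses the user's intent ("select this fare"), not a CRUD update
    // of a backend sub-resource; the verb rides on POST plus the "select" classifier.
    @PostMapping("/searches/{searchId}/spacetrains/{number}/fares/{code}/select")
    ResponseEntity<Search> selectFare(@PathVariable String searchId,
                                      @PathVariable String number,
                                      @PathVariable String code) {
        return ResponseEntity.ok(searchService.selectFare(searchId, number, code));
    }
}
```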

Encapsulating Workflows with Hypermedia Controls

A critical challenge Julien addressed is the lack of workflow definition in traditional REST APIs. Typically, consumers must hardcode business workflows, such as the sequence of selecting outbound and inbound trains before booking. This leads to duplicated logic and potential errors, like displaying a booking button prematurely. Julien introduced hypermedia controls, specifically HATEOAS (Hypermedia As The Engine Of Application State), as a solution. By embedding links in API responses, the backend can guide consumers through the workflow dynamically.

In his demo, Julien showed how a search response includes links like select-outbound and all-inbounds, which guide the consumer to the next valid actions. For instance, after selecting an outbound train, the response provides a link to select an inbound train, ensuring only compatible options are available. This encapsulation of workflow logic in the backend eliminates the need for consumers to understand the sequence of actions, reducing errors and enhancing maintainability. Julien highlighted that this approach, part of the Richardson Maturity Model’s Level 3, makes APIs discoverable and resilient to backend changes, as consumers rely on links rather than hardcoded URLs.
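As a rough illustration of how a backend can advertise only the next legal steps, the sketch below uses Spring HATEOAS to attach links conditionally. The link relations and the SearchState type are assumptions made for the example, not the exact names from Julien’s demo.

```java
// Sketch only: Spring HATEOAS on the classpath; relations and types are illustrative.
import org.springframework.hateoas.Link;
import org.springframework.hateoas.RepresentationModel;

record SearchState(String id, boolean outboundSelected, boolean inboundSelected) {
    boolean selectionComplete() { return outboundSelected && inboundSelected; }
}

class SearchModel extends RepresentationModel<SearchModel> {
    // search state (space trains, fares, current selection) would live here
}

class SearchModelAssembler {
    SearchModel toModel(SearchState search) {
        SearchModel model = new SearchModel();
        model.add(Link.of("/searches/" + search.id(), "self"));
        if (!search.outboundSelected()) {
            // The consumer is guided to pick an outbound train first.
            model.add(Link.of("/searches/" + search.id() + "/spacetrains?bound=outbound",
                              "all-outbounds"));
        } else if (!search.inboundSelected()) {
            model.add(Link.of("/searches/" + search.id() + "/spacetrains?bound=inbound",
                              "all-inbounds"));
        }
        if (search.selectionComplete()) {
            // Only now may a booking be created, so only now is the link exposed.
            model.add(Link.of("/searches/" + search.id() + "/bookings", "create-booking"));
        }
        return model;
    }
}
```

The frontend can then render the booking button purely from the presence of the create-booking link, which is the behavior described in the demo.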

Practical Implementation and Limitations

Julien’s live coding demo brought these concepts to life, showcasing a Spring Boot backend in Kotlin that dynamically generates links based on the application state. For example, the create-booking link only appears when the selection is complete, ensuring consumers cannot book prematurely. This dynamic guidance, facilitated by Spring HATEOAS, allows the frontend to display UI elements like the booking button based solely on available links, streamlining development and enhancing user experience.

However, Julien acknowledged limitations. For complex forms requiring extensive user input, the hypermedia approach may need supplementation with predefined payloads, as consumers must know what data to send. Additionally, long URLs, while not a practical issue in Julien’s experience at Expedia, could pose challenges in some contexts. Despite these constraints, the approach excels in domains with well-defined workflows, offering a robust framework for building expressive, maintainable APIs.

Conclusion: A New Paradigm for REST APIs

Julien’s session at Devoxx Belgium 2023 offered a transformative vision for REST APIs, emphasizing the power of domain-driven design and hypermedia controls. By aligning endpoints with user actions, encapsulating behaviors, and guiding workflows through links, developers can create APIs that are both semantically rich and resilient to change. This approach not only enhances consumer experience but also aligns with the principles of DDD, ensuring that business intent remains at the forefront of API design. Julien’s practical insights and engaging demo left attendees inspired to rethink their API strategies, fostering a deeper appreciation for REST’s potential when infused with domain-driven principles.

[DevoxxBE2023] The Panama Dojo: Black Belt Programming with Java 21 and the FFM API by Per Minborg

In an engaging session at Devoxx Belgium 2023, Per Minborg, a Java Core Library team member at Oracle and an OpenJDK contributor, guided attendees through the intricacies of the Foreign Function and Memory (FFM) API, a pivotal component of Project Panama. With a blend of theoretical insights and live coding, Per demonstrated how this API, in its third preview in Java 21, enables seamless interaction with native memory and functions using pure Java code. His talk, dubbed the “Panama Dojo,” showcased the API’s potential to enhance performance and safety, culminating in a hands-on demo of a lightweight microservice framework built with memory segments, arenas, and memory layouts.

Unveiling the FFM API’s Capabilities

Per introduced the FFM API as a solution to the limitations of Java Native Interface (JNI) and direct buffers. Unlike JNI, which requires cumbersome C stubs and inefficient data passing, the FFM API allows direct native memory access and function calls. Per illustrated this with a Point struct example, where a memory segment models a contiguous memory region with 64-bit addressing, supporting both heap and native segments. This eliminates the 2GB limit of direct buffers, offering greater flexibility and efficiency.

The API introduces memory segments with constraints like size, lifetime, and thread confinement, preventing out-of-bounds access and use-after-free errors. Per highlighted the importance of deterministic deallocation, contrasting Java’s automatic memory management with C’s manual approach. The FFM API’s arenas, such as confined and shared arenas, manage segment lifecycles, ensuring resources are freed explicitly, as demonstrated in a try-with-resources block that deterministically deallocates a segment.
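The following minimal sketch shows that pattern: a confined arena allocates a native segment, the segment is accessed through ValueLayout, and everything is freed deterministically when the try block ends. It assumes Java 22+ (where the FFM API is final) or Java 21 with --enable-preview, and is a generic illustration rather than Per’s code.

```java
// Sketch: Java 22+ (FFM final) or Java 21 with --enable-preview.
import java.lang.foreign.Arena;
import java.lang.foreign.MemorySegment;
import java.lang.foreign.ValueLayout;

public class ArenaDemo {
    public static void main(String[] args) {
        try (Arena arena = Arena.ofConfined()) {        // confined to the current thread
            MemorySegment point = arena.allocate(16);   // room for two 64-bit doubles
            point.set(ValueLayout.JAVA_DOUBLE, 0, 3.0); // x
            point.set(ValueLayout.JAVA_DOUBLE, 8, 4.0); // y
            System.out.println(point.get(ValueLayout.JAVA_DOUBLE, 0)
                             + ", " + point.get(ValueLayout.JAVA_DOUBLE, 8));
        } // the arena closes here: the segment is freed and any later access throws
    }
}
```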

Structuring Memory with Layouts and Arenas

Memory layouts, a key FFM API feature, provide a declarative way to define memory structures, reducing manual offset computations. Per showed how a Point layout with x and y doubles uses var handles to access fields safely, leveraging JIT optimizations for atomic operations. This approach minimizes bugs in complex structs, as var handles inherently account for offsets, unlike manual calculations.
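A short sketch of that idea follows, written against the finalized FFM API in Java 22 (in the Java 21 preview the layout var handles do not yet take the extra base-offset coordinate); the field names mirror the Point example but the rest is illustrative.

```java
// Sketch: Java 22+; on Java 21 (preview) drop the 0L base-offset arguments.
import java.lang.foreign.Arena;
import java.lang.foreign.MemoryLayout;
import java.lang.foreign.MemorySegment;
import java.lang.foreign.ValueLayout;
import java.lang.invoke.VarHandle;

public class PointLayoutDemo {
    // struct Point { double x; double y; }
    static final MemoryLayout POINT = MemoryLayout.structLayout(
            ValueLayout.JAVA_DOUBLE.withName("x"),
            ValueLayout.JAVA_DOUBLE.withName("y"));

    static final VarHandle X = POINT.varHandle(MemoryLayout.PathElement.groupElement("x"));
    static final VarHandle Y = POINT.varHandle(MemoryLayout.PathElement.groupElement("y"));

    public static void main(String[] args) {
        try (Arena arena = Arena.ofConfined()) {
            MemorySegment p = arena.allocate(POINT);
            X.set(p, 0L, 1.5);     // offsets come from the layout, not from manual math
            Y.set(p, 0L, -2.0);
            System.out.println((double) X.get(p, 0L) + ", " + (double) Y.get(p, 0L));
        }
    }
}
```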

Arenas further enhance safety by grouping segments with shared lifetimes. Per demonstrated a confined arena, restricting access to a single thread, and a shared arena, allowing multi-threaded access with thread-local handshakes for safe closure. These constructs bridge the gap between C’s flexibility and Rust’s safety, offering a balanced model for Java developers. In his live demo, Per used an arena to allocate a MarketInfo segment, showcasing deterministic deallocation and thread safety.

Building a Persistent Queue with Memory Mapping

The heart of Per’s session was a live coding demo constructing a persistent queue using memory mapping and atomic operations. He defined a MarketInfo record for stock exchange data, including timestamp, symbol, and price fields. Using a record mapper, Per serialized and deserialized records to and from memory segments, demonstrating immutability and thread safety. The mapper, a potential future JDK feature, simplifies data transfer between Java objects and native memory.

Per then implemented a memory-mapped queue, where a file-backed segment stores headers and payloads. Headers use atomic operations to manage mutual exclusion across threads and JVMs, ensuring safe concurrent access. In the demo, a producer appended MarketInfo records to the queue, while two consumers read them asynchronously, showcasing low-latency, high-performance data sharing. Per’s use of sparse files allowed a 1MB queue to scale virtually, highlighting the API’s efficiency.
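The mapping itself can be sketched in a few lines: a file-backed segment is obtained from a FileChannel and scoped to an arena, and header and payload fields are written at fixed offsets. This is a toy illustration of the technique, not Per’s queue; the offsets and field meanings are assumptions.

```java
// Sketch: Java 22+ (or Java 21 with --enable-preview); field layout is illustrative.
import java.lang.foreign.Arena;
import java.lang.foreign.MemorySegment;
import java.lang.foreign.ValueLayout;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class MappedQueueSketch {
    public static void main(String[] args) throws Exception {
        Path file = Path.of("queue.dat");
        try (FileChannel channel = FileChannel.open(file,
                     StandardOpenOption.CREATE, StandardOpenOption.READ, StandardOpenOption.WRITE);
             Arena arena = Arena.ofConfined()) {
            // 1 MiB file-backed segment; the file is grown (sparsely) as needed.
            MemorySegment queue = channel.map(FileChannel.MapMode.READ_WRITE, 0, 1 << 20, arena);
            queue.set(ValueLayout.JAVA_LONG, 0, 1L);                 // header: element count
            queue.set(ValueLayout.JAVA_LONG, 8, System.nanoTime());  // record: timestamp
            queue.set(ValueLayout.JAVA_DOUBLE, 16, 42.5);            // record: price
            System.out.println("count = " + queue.get(ValueLayout.JAVA_LONG, 0));
        } // contents persist in queue.dat and can be re-read by another JVM
    }
}
```

Real concurrent producers and consumers would additionally guard the header with atomic var-handle operations, as Per did in his demo.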

Crafting a Microservice Framework

The session culminated in assembling these components into a microservice framework. Per’s queue, inspired by Chronicle Queue, supports persistent, high-performance data exchange across JVMs. The framework leverages memory mapping for durability, atomic operations for concurrency, and record mappers for clean data modeling. Per demonstrated its practical application by persisting a queue to a file and reading it in a separate JVM, underscoring its robustness for distributed systems.

He emphasized the reusability of these patterns across domains like machine learning and graphics processing, where native libraries are prevalent. Tools like jextract, briefly mentioned, further unlock native libraries like TensorFlow, enabling Java developers to integrate them effortlessly. Per’s framework, though minimal, illustrates how the FFM API can transform Java’s interaction with native code, offering a safer, faster alternative to JNI.

Performance and Safety in Harmony

Throughout, Per stressed the FFM API’s dual focus on performance and safety. Native function calls, faster than JNI, and memory segments with strict constraints outperform direct buffers while preventing common errors. The API’s integration with existing JDK features, like var handles, ensures compatibility and optimization. Per’s live coding, despite its complexity, flowed seamlessly, reinforcing the API’s practicality for real-world applications.

Conclusion: Embracing the Panama Dojo

Per’s session was a masterclass in leveraging the FFM API to push Java’s boundaries. By combining memory segments, layouts, arenas, and atomic operations, he crafted a framework that exemplifies the API’s potential. His call to action—experiment with the FFM API in Java 21—invites developers to explore this transformative tool, promising enhanced performance and safety for native interactions. The Panama Dojo left attendees inspired to break new ground in Java development.

[DevoxxBE2023] Java Language Update by Brian Goetz

At Devoxx Belgium 2023, Brian Goetz, Oracle’s Java Language Architect, delivered an insightful session on the evolution of Java, weaving together a narrative of recent advancements, current features in preview, and a vision for the language’s future. With his deep expertise, Brian illuminated how Java balances innovation with compatibility, ensuring it remains a cornerstone of modern software development. His talk explored the introduction of records, sealed classes, pattern matching, and emerging features like string templates and simplified program structures, all designed to enhance Java’s expressiveness and accessibility. Through a blend of technical depth and practical examples, Brian showcased Java’s commitment to readable, maintainable code while addressing contemporary programming challenges.

Reflecting on Java’s Recent Evolution

Brian began by recapping Java’s significant strides since his last Devoxx appearance, highlighting features like records, sealed classes, and pattern matching. Records, introduced as nominal tuples, provide a concise way to model data with named components, enhancing readability over structural tuples. For instance, a Point record with x and y coordinates is more intuitive than an anonymous tuple of integers. By deriving constructors, accessors, and equality methods from a state declaration, records eliminate boilerplate while making a clear semantic statement about data immutability. Brian emphasized that this semantic focus, rather than mere syntax reduction, distinguishes Java’s approach from alternatives like Lombok.

Sealed classes, another recent addition, allow developers to restrict class hierarchies, specifying permitted subtypes explicitly. This enables libraries to expose abstract types while controlling implementations, as seen in the JDK’s use of method handles. Sealed classes also enhance exhaustiveness checking in switch statements, reducing runtime errors by ensuring all cases are covered. Brian illustrated this with a Shape hierarchy, where a sealed interface permits only Circle and Rectangle, allowing the compiler to verify switch completeness without a default clause.
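A compact sketch of the Shape example as described, valid on Java 21+; the exact field names are illustrative rather than taken from the talk:

```java
// Records + sealed hierarchy + exhaustive switch (Java 21+); names are illustrative.
sealed interface Shape permits Circle, Rectangle {}

record Point(double x, double y) {}
record Circle(Point center, double radius) implements Shape {}
record Rectangle(Point topLeft, Point bottomRight) implements Shape {}

class Shapes {
    // No default branch: the compiler knows Circle and Rectangle are the only subtypes,
    // and it will flag this switch if the sealed hierarchy later gains a new case.
    static double area(Shape shape) {
        return switch (shape) {
            case Circle c    -> Math.PI * c.radius() * c.radius();
            case Rectangle r -> Math.abs((r.bottomRight().x() - r.topLeft().x())
                                       * (r.bottomRight().y() - r.topLeft().y()));
        };
    }
}
```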

Advancing Data Modeling with Pattern Matching

Pattern matching, a cornerstone of Java’s recent enhancements, fuses type testing, casting, and binding into a single operation, reducing errors from manual casts. Brian demonstrated how type patterns, like if (obj instanceof String s), streamline code by eliminating redundant casts. Record patterns extend this by deconstructing objects into components, enabling recursive matching for nested structures. For example, a Circle record with a Point center can be matched to extract x and y coordinates in one expression, enhancing both concision and safety.

The revamped switch construct, now an expression supporting patterns and guards, further leverages these capabilities. Brian highlighted its exhaustiveness checking, which uses sealing information to ensure all cases are handled, as in a Color interface sealed to Red, Yellow, and Green. This eliminates the need for default clauses, catching errors at compile time if the hierarchy evolves. By combining records, sealed classes, and pattern matching, Java now supports algebraic data types, offering a powerful framework for modeling complex domains like expressions, where a sealed Expression type can be traversed elegantly with pattern-based recursion.
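Reusing the Circle and Point records from the sketch above, a method on that same Shapes class could deconstruct nested records in a single pattern (Java 21+):

```java
// Record patterns, type patterns, and a guard in one switch (Java 21+).
static String describe(Object obj) {
    return switch (obj) {
        case Circle(Point(var x, var y), var radius) when radius > 0 ->
                "circle of radius " + radius + " centred at (" + x + ", " + y + ")";
        case Circle c -> "degenerate circle";
        case String s -> "the string \"" + s + "\"";  // type pattern: no manual cast needed
        default       -> "something else";
    };
}
```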

Introducing String Templates for Safe Aggregation

Looking to the future, Brian introduced string templates, a preview feature addressing the perils of string interpolation. Unlike traditional concatenation or formatting methods, string templates use a template processor to safely combine text fragments and expressions. A syntax like STR."Hello, \{name}!" invokes a processor (STR, FMT, or a custom one) to validate inputs, preventing issues like SQL injection. Brian envisioned a SQL template processor that balances quotes and produces a result set directly, bypassing string intermediaries for efficiency and security. Similarly, a JSON processor could streamline API development by constructing objects from raw fragments, enhancing performance.

This approach reframes interpolation as a broader aggregation problem, allowing developers to define custom processors for domain-specific needs. Brian’s emphasis on safety and flexibility underscores Java’s commitment to robust APIs, drawing inspiration from JavaScript’s tagged functions and Scala’s string interpolators, but tailored to Java’s ecosystem.
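For reference, the basic shape of the feature as previewed at the time looks like the sketch below (Java 21/22 with --enable-preview; the design has since been revisited). STR is implicitly imported in those builds.

```java
// String templates as previewed in Java 21/22 (--enable-preview required).
public class GreetingDemo {
    public static void main(String[] args) {
        String name = "Devoxx";
        // The STR processor validates the template and interpolates the embedded expression.
        String greeting = STR."Hello, \{name}!";
        System.out.println(greeting);
    }
}
```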

Simplifying Java’s On-Ramp and Beyond

To make Java more approachable for new developers, Brian discussed preview features like unnamed classes and instance main methods, which reduce boilerplate for simple programs. A minimal program might omit public static void main, allowing beginners to focus on core logic rather than complex object-oriented constructs. This aligns Java with languages like Python, where incremental learning is prioritized, easing the educational burden on instructors and students alike.
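Under JEP 445 (a preview feature in Java 21), such a beginner program can shrink to little more than the logic itself; this sketch assumes the preview flag is enabled:

```java
// JEP 445 (Java 21, preview): an unnamed class with an instance main method.
// Compile and run with --enable-preview --release 21.
void main() {
    System.out.println("Hello, Devoxx!");
}
```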

Future enhancements include reconstruction patterns for immutable objects, enabling concise updates like p.with(x: 0) to derive new records from existing ones. Brian also proposed deconstructor patterns for regular classes, mirroring constructors to enable pattern decomposition, enhancing API symmetry. These features aim to make aggregation and decomposition reversible, reducing error-prone asymmetries in object manipulation. For instance, a Person class could declare a deconstructor to extract first and last names, mirroring its constructor, streamlining data handling across Java’s object model.

Conclusion: Java’s Balanced Path Forward

Brian’s session underscored Java’s deliberate evolution, balancing innovation with compatibility. By prioritizing readable, maintainable code, Java addresses modern challenges like loosely coupled services and untyped data, positioning itself as a versatile language for data modeling. Features like string templates and simplified program structures promise greater accessibility, while pattern matching and deconstruction patterns enhance expressiveness. As Java continues to refine its features, it remains a testament to thoughtful design, ensuring developers can build robust, future-ready applications.

[AWS Summit Berlin 2023] Go-to-Market with Your Startup: Tips and Best Practices from VC Investors

At AWS Summit Berlin 2023, David Roldán, Head of Startup Business Development for EMEA at AWS, led a 48-minute panel, available on YouTube, featuring VC investors and operators: Constantine, CTO and co-founder of PlanRadar; Jasper, partner at Cherry Ventures; and Gloria, founder of Beyond Capital. This post, targeting startup founders, explores go-to-market (GTM) strategies for B2B SaaS, emphasizing product-segment fit, iterative processes, and avoiding premature scaling in a competitive landscape with over 100,000 independent software vendors.

Defining Product-Segment Fit

Jasper introduced the concept of product-segment fit, arguing it’s more precise than product-market fit for early-stage startups. He emphasized that founders should target a specific customer segment where the product resonates strongly, rather than chasing universal appeal. For example, PlanRadar, serving the construction industry, found success by focusing on old-fashioned outbound sales to reach decision-makers in a niche vertical. Gloria reinforced this, noting that chasing a single “killer feature” often distracts from solving core use cases. Instead, founders should iterate based on customer feedback, ensuring the product delivers immediate value to a well-defined audience, avoiding dilution of focus across disparate segments.

Iterative GTM Strategies

Constantine shared PlanRadar’s journey, highlighting the iterative nature of GTM. With a five-founder team spanning commercial, industry, and tech expertise, PlanRadar prioritized early customer feedback over polished features. He advised launching minimum viable products to test assumptions, even if imperfect, to refine offerings rapidly. Gloria added that data infrastructure, like a well-structured CRM, is critical before Series A to track sales cycles and conversion stages. However, Jasper cautioned against over-rationalizing early GTM with tools like Salesforce, which can burden seed-stage startups. Instead, founders should stay hands-on, engaging directly with customers to build velocity in the sales pipeline.

Avoiding Premature Scaling

Gloria and Constantine stressed the dangers of premature scaling, particularly in hiring. Gloria advised against hiring product managers too early, recommending product engineers who can own the roadmap alongside founders until post-Series A. Constantine echoed this, noting PlanRadar delayed building a product management team until after Series A due to workload and complexity, hiring an ex-founder after a year-long search. Jasper highlighted that premature hires, like sales managers craving predictability, can push startups to scale in the wrong segment, leading to misaligned products. The panel agreed that founders must retain product vision, avoiding delegation to non-founders who lack the same long-term perspective.

Customer Success and Retention

Retention emerged as a key GTM metric, but its priority depends on stage. Gloria argued that early churn is acceptable to refine product-segment fit, but post-product-market fit, net retention becomes critical, reflecting customer love through renewals and upsells. Constantine detailed PlanRadar’s post-Series A customer success team, which segments customers (gold, silver, bronze) using usage data to allocate scarce resources effectively. He noted charging for onboarding, common in Europe, boosts engagement by signaling value. Gloria emphasized three pillars: activation (fast onboarding), engagement (tracking feature usage), and renewals (modularizing products for cross-selling), ensuring startups maximize lifetime value as they scale.

[DevoxxBE 2023] Introducing Flow: The Worst Software Development Approach in History

In a satirical yet insightful closing keynote at Devoxx Belgium 2023, Sander Hoogendoorn and Kim van Wilgen, seasoned software development experts, introduced “Flow,” a fictional methodology designed to expose the absurdities of overly complex software development practices. With humor and sharp critique, Sander and Kim drew from decades of experience to lampoon methodologies like Waterfall, Scrum, SAFe, and Spotify, blending real-world anecdotes with exaggerated principles to highlight what not to do. Their talk, laced with wit, ultimately transitioned to earnest advice, advocating for simplicity, autonomy, and human-centric development. This presentation offers a mirror to the industry, urging developers to critically evaluate methodologies and prioritize effective, enjoyable work.

The Misadventure of Methodologies

Sander kicked off with a historical detour, debunking the myth of Waterfall’s rigidity. Citing Winston Royce’s 1970 paper, he revealed that Waterfall was meant to be iterative, allowing developers to revisit phases—a concept ignored for decades, costing billions. This set the stage for Flow, a methodology born from a tongue-in-cheek desire to maximize project duration for consultancy profits. Kim explained how they cherry-picked the worst elements from existing frameworks: endless sprints from Scrum, gamification to curb autonomy, and an alphabet soup of roles from SAFe.

Their critique was grounded in real-world failures. Sander shared a Belgian project where misestimated sprints and 300 outsourced developers led to chaos, exacerbated by documentation in Dutch and French. Kim highlighted how methodologies like SAFe balloon roles, sidelining customers and adding complexity. By naming Flow with trendy buzzwords—Kaizen, continuous disappointment, and pointless—they mocked the industry’s obsession with jargon over substance.

The Flow Framework: A Recipe for Dysfunction

Flow’s principles, as Sander and Kim outlined, are deliberately counterproductive. Sprints, renamed “mini-Waterfalls,” ensure repeated failures, with burn charts (not burn-down charts) showing growing work without progress. Meetings, dubbed “Flow meetings,” are scheduled to disrupt developers’ focus, with random topics and high-placed interruptions—like a 2.5-meter-tall CEO bursting in. Kim emphasized gamification, stripping teams of real autonomy while offering trivial perks like workspace decoration, exemplified by a ball pit job interview at a Dutch e-commerce firm.

The Flow Manifesto, a parody of the Agile Manifesto, prioritizes “extensive certification over hands-on experience” and “meetings over focus.” Sander recounted a project in France with a 20-column board so confusing that even AI couldn’t decipher its French Post-its. Jira, mandatory in Flow, becomes a tool for obfuscation, with requirements buried in lengthy tickets. Open floor plans and Slack further stifle communication, with “pair slacking” replacing collaboration, ensuring developers remain distracted and disconnected.

Enterprise Flow: Scaling the Absurdity

In large organizations, Flow escalates into the Big Flow Framework (BFF), starting at version 3.0 to sound innovative. Kim critiqued the blind adoption of Spotify’s model, designed for 8x annual growth, which saddles banks with excessive managers—sometimes a 1:1 ratio with developers. Sander recounted a client renaming managers as “tech leads,” adding 118 unnecessary roles to a release train. Certifications, costing €10,000 per recertification, parody the industry’s profit-driven training schemes.

Flow’s tooling, like boards with incomprehensible columns and Jira’s dual Scrum-Kanban confusion, ensures clients remain baffled. Kim highlighted how Enterprise Flow thrives on copying trendy startups like Basecamp, debating irrelevant issues like banning TypeScript or leaving public clouds. Research, they noted, shows no methodology—including SAFe or LeSS—outperforms having none, underscoring Flow’s satirical point: complexity breeds failure.

A Serious Turn: Principles for Better Development

After the laughter, Sander and Kim pivoted to their true beliefs, advocating for a human-centric approach. Software, they stressed, is built by people, not tools or methodologies. Teams should evolve their own practices, using Scrum or Kanban as starting points but adapting to context. Face-to-face communication, trust, and psychological safety are paramount, as red sprints and silencing voices drive talent away.

Focus is sacred, requiring quiet spaces and flexible hours, as ideas often spark outside 9–5. Continuous learning, guarded by dedicating at least one day weekly, prevents stagnation. Autonomy, though initially uncomfortable, empowers teams to make decisions, as Sander’s experience with reluctant developers showed. Flat organizations with minimal hierarchy foster trust, while experienced developers, like those born in the ’60s and ’70s, mentor through code reviews rather than churning out code.

Conclusion: Simplicity and Joy in Development

Sander and Kim’s Flow is a cautionary tale, urging developers to reject bloated methodologies and embrace simplicity. By reducing complexity, as Albert Einstein suggested, teams can deliver value effectively. Above all, they reminded the audience to have fun, celebrating software development as the best industry to be in. Their talk, blending satire with wisdom, inspires developers to craft methodologies that empower people, foster collaboration, and make work enjoyable.

Hashtags: #SoftwareDevelopment #Agile #Flow #Methodologies #DevOps #SanderHoogendoorn #KimVanWilgen #SchubergPhilis #iBOOD #DevoxxBE2023

[DevoxxBE 2023] The Great Divergence: Bridging the Gap Between Industry and University Java

At Devoxx Belgium 2023, Felipe Yanaga, a teaching assistant at the University of North Carolina at Chapel Hill and a Robertson Scholar, delivered a compelling presentation addressing the growing disconnect between the vibrant use of Java in industry and its outdated perception in academia. As a student with internships at Amazon and Google, and a fellow at UNC’s Computer Science Experience Lab, Felipe draws on his unique perspective to highlight how universities lag in teaching modern Java practices. His talk explores the reasons behind this divergence, the negative perceptions students hold about Java, and actionable steps to revitalize its presence in academic settings.

Java’s Strength in Industry

Felipe begins by emphasizing Java’s enduring relevance in the professional world. Far from the “Java is dead” narrative that periodically surfaces online, the language thrives in industry, powered by innovations like Quarkus, GraalVM, and a rapid six-month release cycle. Companies sponsoring Devoxx, such as Red Hat and Oracle, exemplify Java’s robust ecosystem, leveraging frameworks and tools that enhance developer productivity. For instance, Felipe references the keynote by Brian Goetz, which outlined Java’s roadmap, showcasing its adaptability to modern development needs by drawing inspiration from other languages. This continuous evolution ensures Java remains a cornerstone for enterprise applications, from microservices to large-scale systems.

However, Felipe points out a troubling trend: despite its industry strength, Java’s popularity is declining in metrics like GitHub’s language rankings and the TIOBE Index. While JavaScript and Python have surged, Java’s share of relevant Google searches has dropped from 26% in 2002 to under 10% by 2023. Felipe attributes this partly to a shift in academic settings, where the foundation for programming passion is often laid. The disconnect between industry innovation and university curricula is a critical issue that needs addressing to sustain Java’s future.

The Academic Lag: Java’s Outdated Image

In universities, Java’s reputation suffers from outdated teaching practices. Felipe notes that many institutions, including top U.S. universities, have shifted introductory courses from Java to Python, citing Java’s perceived complexity and age. A 2017 quote from a Stanford professor illustrates this sentiment, claiming Java “shows its age” and prompting a move to Python for introductory courses. Surveys of 70 leading U.S. universities confirm this trend, with Python now dominating as the primary teaching language, while Java is relegated to data structures or object-oriented programming courses.

Felipe’s own experience at UNC-Chapel Hill reflects this shift. A decade ago, Java dominated the curriculum, but by 2023, Python had overtaken introductory and database courses. This transition reinforces a perception among students that Java is verbose, bloated, and outdated. Felipe conducted a survey among 181 students in a software engineering course, revealing stark insights: 42% believed Python was in highest industry demand, 67% preferred Python for building REST APIs, and terms like “tedious,” “boring,” and “outdated” dominated a word cloud describing Java. One student even remarked that Java is suitable only for maintaining legacy code, a sentiment that underscores the stigma Felipe aims to dismantle.

The On-Ramp Challenge: Simplifying Java’s Introduction

A significant barrier to Java’s adoption in academia is its steep learning curve for beginners. Felipe contrasts Python’s straightforward “hello world” with Java’s intimidating boilerplate code, such as public static void main. This complexity overwhelms novices, who grapple with concepts like classes and static methods without clear explanations. Instructors often dismiss these as “magic,” which disengages students and fosters a negative perception. Felipe highlights Java’s JEP 445, which introduces unnamed classes and instance main methods to reduce boilerplate, as a promising step to make Java more accessible. By simplifying the initial experience, such innovations could align Java’s on-ramp with Python’s ease, engaging students early and encouraging exploration.

Beyond the language itself, the Java ecosystem poses additional challenges. Installing Java is daunting for beginners, with multiple Oracle websites offering conflicting instructions. Felipe recounts his own struggle as a student, only navigating this thanks to his father’s guidance. Tools like SDKMan and JBang simplify installation and scripting, but these are often unknown to students outside the Java community. Similarly, choosing an IDE—IntelliJ, Eclipse, or VS Code—adds another layer of complexity. Felipe advocates for clear, standardized guidance, such as recommending SDKMan and IntelliJ, to streamline the learning process and make Java’s ecosystem more approachable.

Bridging the Divide: Community and Mentorship

To reverse the declining trend in academia, Felipe proposes actionable steps centered on community engagement. He emphasizes the need for industry professionals to connect with universities, citing examples like Tom from Info Support, who collaborates with local schools to demonstrate Java’s real-world applications. By mentoring students and updating professors on modern tools like Maven, Gradle, and Quarkus, industry can reshape Java’s image. Felipe also encourages inviting students to Java User Groups (JUGs), where they can interact with professionals and discover tools that enhance Java development. These initiatives, he argues, plant seeds of enthusiasm that students will share with peers, amplifying Java’s appeal.

Felipe stresses that small actions, like a 10-minute conversation with a student, can make a significant impact. By demystifying stereotypes—such as Java being slow or bloated—and showcasing frameworks like Quarkus with hot reload capabilities, professionals can counter misconceptions. He also addresses the lack of Java-focused workshops compared to Python and JavaScript, urging the community to actively reach out to students. This collective effort, Felipe believes, is crucial to ensuring the next generation of developers sees Java as a vibrant, modern language, not a relic of the past.

Links:

  • University of North Carolina at Chapel Hill

  • Duke University

Hashtags: #Java #SoftwareDevelopment #Education #Quarkus #GraalVM #UNCChapelHill #DukeUniversity #FelipeYanaga

[KotlinConf’2023] Coroutines and Loom: A Deep Dive into Goals and Implementations

The advent of OpenJDK’s Project Loom and its virtual threads has sparked considerable discussion within the Java and Kotlin communities, particularly regarding its relationship with Kotlin Coroutines. Roman Elizarov, Project Lead for Kotlin at JetBrains, addressed this topic head-on at KotlinConf’23 in his talk, “Coroutines and Loom behind the scenes”. His goal was not just to answer whether Loom would make coroutines obsolete (the answer being a clear “no”), but to delve into the distinct design goals, implementations, and trade-offs of each, clarifying how they can coexist and even complement each other. Information about Project Loom can often be found via OpenJDK resources or articles like those on Baeldung.

Roman began by noting that Project Loom, introducing virtual threads to the JVM, was nearing stability, targeted for Java 21 (late 2023). He emphasized that understanding the goals behind each technology is crucial, as these goals heavily influence their design and optimal use cases.

Project Loom: Simplifying Server-Side Concurrency

Project Loom’s primary design goal, as Roman Elizarov explained, is to preserve the thread-per-request programming style prevalent in many existing Java server-side applications, while dramatically increasing scalability. Traditionally, assigning one platform thread per incoming request becomes a bottleneck due to the high cost of platform threads. Virtual threads aim to solve this by providing lightweight, JVM-managed threads that can run existing synchronous, blocking Java code with minimal or no changes. This allows legacy applications to scale much better without requiring a rewrite to asynchronous or reactive patterns.

Loom achieves this by “unmounting” a virtual thread from its carrier (platform) thread when it encounters a blocking operation (like I/O) that has been integrated with Loom. The carrier thread is then free to run other virtual threads. When the blocking operation completes, the virtual thread is “remounted” on a carrier thread to continue execution. This mechanism is largely transparent to the application code. However, Roman pointed out a potential pitfall: if blocking operations occur within synchronized blocks or native JNI calls that haven’t been adapted for Loom, the carrier thread can get “pinned,” preventing unmounting and potentially negating some of Loom’s benefits in those specific scenarios.
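A minimal illustration of the resulting programming model, assuming Java 21 or later (this is a generic sketch, not code from the talk): thousands of tasks block freely because each runs on a cheap virtual thread.

```java
// Sketch: thread-per-task on virtual threads (Java 21+).
import java.time.Duration;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.stream.IntStream;

public class VirtualThreadsDemo {
    public static void main(String[] args) {
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            IntStream.range(0, 10_000).forEach(i -> executor.submit(() -> {
                Thread.sleep(Duration.ofMillis(100)); // blocking call: the virtual thread
                return i;                             // unmounts, freeing its carrier thread
            }));
        } // close() waits for all submitted tasks to complete
    }
}
```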

Kotlin Coroutines: Fine-Grained, Structured Concurrency

In contrast, Kotlin Coroutines were designed with different primary goals:

  1. Enable fine-grained concurrency: Allowing developers to easily launch tens of thousands or even millions of concurrent tasks without performance issues, suitable for highly concurrent applications like UI event handling or complex data processing pipelines.
  2. Provide structured concurrency: Ensuring that the lifecycle of coroutines is managed within scopes, simplifying cancellation and preventing resource leaks. This is particularly critical for UI applications where tasks need to be cancelled when UI components are destroyed.

Kotlin Coroutines achieve this through suspendable functions (suspend fun) and a compiler-based transformation. When a coroutine suspends, it doesn’t block its underlying thread; instead, its state is saved, and the thread is released to do other work. This is fundamentally different from Loom’s approach, which aims to make blocking calls non-problematic for virtual threads. Coroutines explicitly distinguish between suspending and non-suspending code, a design choice that enables features like structured concurrency but requires a different programming model than traditional blocking code.

Comparing Trade-offs and Performance

Roman Elizarov presented a detailed comparison:

  • Programming Model: Loom aims for compatibility with existing blocking code. Coroutines introduce a new model with suspend functions, which is more verbose for simple blocking calls but enables powerful features like structured concurrency and explicit cancellation. Forcing blocking calls into a coroutine world requires wrappers like withContext(Dispatchers.IO), while Loom handles blocking calls transparently on virtual threads.
  • Cost of Operations:
    • Launching: Launching a coroutine is significantly cheaper than starting even a virtual thread, as coroutines are lighter weight objects.
    • Yielding/Suspending: Suspending a coroutine is generally cheaper than a virtual thread yielding (unmounting/remounting), due to compiler optimizations in Kotlin for state machine management. Roman showed benchmarks indicating lower memory allocation and faster execution for coroutine suspension compared to virtual thread context switching in preview builds of Loom.
  • Error Handling & Cancellation: Coroutines have built-in, robust support for structured cancellation. Loom’s virtual threads rely on Java’s traditional thread interruption mechanisms, which are less integrated into the programming model for cooperative cancellation.
  • Debugging: Loom’s virtual threads offer a debugging experience very similar to traditional threads, with understandable stack traces. Coroutines, due to their state-machine nature, can sometimes have more complex stack traces, though IDE support has improved this.

Coexistence and Future Synergies

Roman Elizarov concluded that Loom and coroutines are designed for different primary use cases and will coexist effectively.

  • Loom excels for existing Java applications using the thread-per-request model that need to scale without major rewrites.
  • Coroutines excel for applications requiring fine-grained, highly concurrent operations, structured concurrency, and explicit cancellation management, often seen in UI applications or complex backend services with many interacting components.

He also highlighted a potential future synergy: Kotlin Coroutines could leverage Loom’s virtual threads for their Dispatchers.IO (or a similar dispatcher) when running on newer JVMs. This could allow blocking calls within coroutines (those wrapped in withContext(Dispatchers.IO)) to benefit from Loom’s efficient handling of blocking operations, potentially eliminating the need for a large, separate thread pool for I/O-bound tasks in coroutines. This would combine the benefits of both: coroutines for structured, fine-grained concurrency and Loom for efficient handling of any unavoidable blocking calls.

Hashtags: #Kotlin #Coroutines #ProjectLoom #Java #JVM #Concurrency #AsynchronousProgramming #RomanElizarov #JetBrains

[KotlinConf’23] The Future of Kotlin is Bright and Multiplatform

KotlinConf’23 kicked off with an energizing keynote, marking a highly anticipated return to an in-person format in Amsterdam. Hosted by Hadi Hariri from JetBrains, the session brought together key figures from both JetBrains and Google, including Roman Elizarov, Svetlana Isakova, Egor Tolstoy, and Grace Kloba (VP of Engineering for Android Developer Experience at Google), to share exciting updates and future directions for the Kotlin language and its ecosystem. The conference also boasted a global reach with KotlinConf Global events held across 41 countries. For those unable to attend, the key announcements from the keynote are also available in a comprehensive blog post on the official Kotlin blog.

The keynote began by celebrating Kotlin’s impressive growth, with compelling statistics underscoring its widespread adoption, particularly in Android development where it stands as the most popular language, utilized in over 95% of the top 1000 Android applications. A significant emphasis was placed on the forthcoming Kotlin 2.0, which is centered around the revolutionary new K2 compiler. This compiler promises significant performance improvements, enhanced stability, and a robust foundation for the language’s future evolution. The K2 compiler is nearing completion and is slated for release as Kotlin 2.0. Additionally, the IntelliJ IDEA plugin will also adopt the K2 frontend, ensuring alignment with IntelliJ releases and a consistent developer experience.

The Evolution of Kotlin: K2 Compiler and Language Features

The K2 compiler was a central theme of the keynote, signifying a major milestone for Kotlin. This re-architected compiler frontend, which also powers the IDE, is designed to be faster, more stable, and to enable quicker development of new language features and tooling capabilities. Kotlin 2.0, built upon the K2 compiler, is set to bring these profound benefits to all Kotlin developers, improving both compiler performance and IDE responsiveness.

Beyond the immediate horizon of Kotlin 2.0, the speakers provided a glimpse into potential future language features that are currently under consideration. These exciting prospects included:

Prospective Language Enhancements

  • Static Extensions: This feature aims to allow static resolution of extension functions, which could potentially improve performance and code clarity.
  • Collection Literals: The introduction of a more concise syntax for creating collections, such as using square brackets for lists, with efficient underlying implementations, is on the cards.
  • Name-Based Destructuring: Offering a more flexible way to destructure objects based on property names rather than simply their positional order.
  • Context Receivers: A powerful capability designed to provide contextual information to functions in a more implicit and structured manner. This feature, however, is being approached with careful consideration to ensure it aligns well with Kotlin’s core principles and doesn’t introduce undue complexity.
  • Explicit Fields: This would provide developers with more direct control over the backing fields of properties, offering greater flexibility in certain scenarios.

The JetBrains team underscored a cautious and deliberate approach to language evolution, ensuring that any new features are meticulously designed and maintainable within the Kotlin ecosystem. Compiler plugins were also highlighted as a powerful mechanism for extending Kotlin’s capabilities without altering its core.

Kotlin in the Ecosystem: Google’s Investment and Multiplatform Growth

Grace Kloba from Google took the stage to reiterate Google’s strong and unwavering commitment to Kotlin. She shared insights into Google’s substantial investments in the Kotlin ecosystem, including the development of Kotlin Symbol Processing (KSP) and the continuous emphasis on Kotlin as the default choice for Android development. Google officially championed Kotlin for Android development as early as 2017, a pivotal moment for the language’s widespread adoption. Furthermore, the Kotlin DSL is now the default for Gradle build scripts within Android Studio, significantly enhancing the developer experience with features such as semantic syntax highlighting and advanced code completion. Google also actively contributes to the Kotlin Foundation and encourages community participation through initiatives like the Kotlin Foundation Grants Program, which specifically focuses on supporting multiplatform libraries and frameworks.

Kotlin Multiplatform (KMP) emerged as another major highlight of the keynote, emphasizing its increasing maturity and widespread adoption. The overarching vision for KMP is to empower developers to share code across a diverse range of platforms—Android, iOS, desktop, web, and server-side—while retaining the crucial ability to write platform-specific code when necessary for optimal integration and performance. The keynote celebrated the burgeoning number of multiplatform libraries and tools, including KMM Bridge, which are simplifying KMP development workflows. The future of KMP appears exceptionally promising, with ongoing efforts to further enhance the developer experience and expand its capabilities across even more platforms.

Compose Multiplatform and Emerging Technologies

The keynote also featured significant advancements in Compose Multiplatform, JetBrains’ declarative UI framework for building cross-platform user interfaces. A particularly impactful announcement was the alpha release of Compose Multiplatform for iOS. This groundbreaking development allows developers to write their UI code once in Kotlin and deploy it seamlessly across Android and iOS, and even to desktop and web targets. This opens up entirely new avenues for code sharing and promises accelerated development cycles for mobile applications, breaking down traditional platform barriers.

Finally, the JetBrains team touched upon Kotlin’s expansion into truly emerging technologies, such as WebAssembly (Wasm). JetBrains is actively developing a new compiler backend for Kotlin specifically targeting WebAssembly, coupled with its own garbage collection proposal. This ambitious effort aims to deliver high-performance Kotlin code directly within the browser environment. Experiments involving the execution of Compose applications within the browser using WebAssembly were also mentioned, hinting at a future where Kotlin could offer a unified development experience across an even broader spectrum of platforms. The keynote concluded with an enthusiastic invitation to the community to delve deeper into these subjects during the conference sessions and to continue contributing to Kotlin’s vibrant and ever-expanding ecosystem.

Hashtags: #Keynote #JetBrains #Google #K2Compiler #Kotlin2 #Multiplatform #ComposeMultiplatform #WebAssembly

[DevoxxBE2023] A Deep Dive into Advanced TypeScript: A Live Coding Expedition by Christian Wörz

Christian Wörz, a seasoned full-stack engineer and freelancer, captivated the Devoxx Belgium 2023 audience with a hands-on exploration of advanced TypeScript features. Through live coding, Christian illuminated powerful yet underutilized constructs like mapped types, template literals, conditional types, and recursion, demonstrating their practical applications in real-world scenarios. His session, blending technical depth with accessibility, empowered developers to leverage TypeScript’s full potential for creating robust, type-safe codebases.

Mastering Mapped Types and Template Literals

Christian kicked off with mapped types, showcasing their ability to dynamically generate type-safe structures. He defined an Events type with add and remove properties and created an OnEvent type to prefix event keys with “on” (e.g., onAdd, onRemove). Using the keyof operator and template literal syntax, he ensured that OnEvent mirrored Events, enforcing consistency. For instance, adding a move event to Events required updating OnEvent to onMove, providing compile-time safety. He enhanced this with TypeScript’s intrinsic Capitalize function to uppercase property names, ensuring precise naming conventions.

Template literals were explored through a chessboard example, where Christian generated all possible positions (e.g., A1, B2) by combining letter and number types. He extended this to a CSS validator, defining a GapCSS type for properties like margin-left and padding-top, paired with valid CSS sizes (e.g., rem, px). This approach narrowed string types to enforce specific formats, preventing errors like invalid CSS values at compile time.

Leveraging Conditional Types and Never for Safety

Christian delved into conditional types and the never type to enhance compile-time safety. He introduced a NoEmptyString type that prevents empty strings from being assigned, using a conditional check to return never for invalid inputs. Applying this to a failOnEmptyString function ensured that only non-empty strings were accepted, catching errors before runtime. He also demonstrated exhaustive switch cases, using never to enforce complete coverage. For a getCountryForLocation function, assigning an unhandled London case to a never-typed variable triggered a compile-time error, ensuring all cases were addressed.

Unraveling Types with Infer and Recursion

The infer keyword was a highlight, enabling Christian to extract type information dynamically. He created a custom MyReturnType to mimic TypeScript’s ReturnType, inferring a function’s return type (e.g., number for an addition function). This was particularly useful for complex type manipulations. Recursion was showcased through an UnnestArray type, unwrapping deeply nested arrays to access their inner types (e.g., extracting string from string[][][]). He also built a recursive Tuple type, generating fixed-length arrays with specified element types, such as a three-element RGB tuple, with an accumulator to collect elements during recursion.

Branded Types for Enhanced Type Safety

Christian concluded with branded types, a technique to distinguish specific string formats, like emails, without runtime overhead. By defining an Email type as a string intersected with an object containing a _brand property, he ensured that only validated strings could be assigned. A type guard function, isValidEmail, checked for an @ symbol, allowing safe usage in functions like sendEmail. This approach maintained the simplicity of primitive types while enforcing strict validation, applicable to formats like UUIDs or custom date strings.
