
PostHeaderIcon [DevoxxFR 2024] Super Tech’Rex World: The Assembler Strikes Back

Nicolas Grohmann, a developer at Sopra Steria, took attendees on a nostalgic journey through low-level programming with his talk, “Super Tech’Rex World: The Assembler Strikes Back.” Over five years, Nicolas modified Super Mario World, a 1990 Super Nintendo Entertainment System (SNES) game coded in assembler, transforming it into a custom adventure featuring a dinosaur named T-Rex. Through live coding and engaging storytelling, he demystified assembler, revealing its principles and practical applications. His session illuminated the inner workings of 1990s consoles while showcasing assembler’s relevance to modern computing.

A Retro Quest Begins

Nicolas opened with a personal anecdote, recounting how his project began in 2018, before Sopra Steria’s Tech Me Up community formed in 2021. He described this period as the “Stone Age” of his journey, marked by trial and error. His goal was to hack Super Mario World, a beloved SNES title, replacing Mario with T-Rex, coins with pixels (a Sopra Steria internal currency), and mushrooms with certifications that boost strength. Enemies became “pirates,” symbolizing digital adversaries.

To set the stage, Nicolas showcased the SNES, a 1990s console with a CPU, ROM, and RAM—components familiar to modern developers. He launched an emulator to demonstrate Super Mario World, highlighting its mechanics: jumping, collecting items, and battling enemies. A modified ROM revealed his custom version, where T-Rex navigated a reimagined world. This demo captivated the audience, blending nostalgia with technical ambition.

For the first two years, Nicolas relied on community tools to tweak graphics and levels, such as replacing Mario’s sprite with T-Rex. However, as a developer, he yearned to contribute original code, prompting him to learn assembler. This shift marked the “Age of Discoveries,” where he tackled the language’s core concepts: machine code, registers, and memory addressing.

Decoding Assembler’s Foundations

Nicolas introduced assembler’s essentials, starting with machine code, the binary language of 0s and 1s that CPUs understand. Machine code is grouped into 8-bit bytes (octets), and a SNES ROM comprises one to four megabytes of it. He clarified binary and hexadecimal systems, noting that hexadecimal (0–9, A–F) compacts binary for readability. For example, 15 in decimal is 1111 in binary and 0F in hexadecimal, while 255 (all 1s in a byte) is FF.
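These equivalences are easy to check mechanically; a quick sketch in Python:

```python
# Decimal / binary / hexadecimal equivalences from the talk.
for n in (15, 255):
    print(f"{n} = {n:08b} (binary) = {n:02X} (hex)")
```

Running it prints `15 = 00001111 (binary) = 0F (hex)` and `255 = 11111111 (binary) = FF (hex)`.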

Next, he explored registers, small memory locations within the CPU, akin to global variables. The accumulator, a key register, stores a single octet for operations, while the program counter tracks the next instruction’s address. These registers enable precise control over a program’s execution.

Memory addressing, Nicolas’s favorite concept, likens SNES memory to a city. Each octet resides in a “house” (address 00–FF), within a “street” (page 00–FF), in a “neighborhood” (bank 00–FF). This structure yields 16 megabytes of addressable memory. Addressing modes—long (full address), absolute (bank preset), and direct page (bank and page preset)—optimize code efficiency. Direct page, limited to 256 addresses, is ideal for game variables, streamlining operations.

Assembler, Nicolas clarified, isn’t a single language but a family of instruction sets tailored to CPU types. Opcodes, mnemonic instructions like LDA (load accumulator) and STA (store accumulator), translate to machine code (e.g., LDA becomes A5 for direct page). These opcodes, combined with addressing modes, form the backbone of assembler programming.

Live Coding: Empowering T-Rex

Nicolas transitioned to live coding, demonstrating assembler’s practical application. His goal: make T-Rex invincible and alter gameplay to challenge pirates. Using Super Mario World’s memory map, a community-curated resource, he targeted address 7E0019, which tracks the player’s state (0 for small, 1 for large). By writing LDA #$01 (load 1) and STA $19 (store to 7E0019), he ensured T-Rex remained large, immune to damage. The # denotes an immediate value, distinguishing it from an address.
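The two instructions above amount to a tiny routine (a hedged sketch in 65C816 assembler; the comments are mine):

```asm
; Run every frame: force the player-state variable to "large".
; $19 maps to $7E0019 via the direct page, as described above.
LDA #$01   ; '#' = immediate value: load the number 1, not address $01
STA $19    ; store the accumulator to $19 -> T-Rex stays large
```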

To nerf T-Rex’s jump, Nicolas manipulated controller inputs at addresses 7E0015 and 7E0016, which store button states as bitmasks (e.g., the leftmost bit for button B, used for jumping). Using LDA $15 and AND #$7F (bitwise AND with 01111111), he cleared the B button’s bit, disabling jumps while preserving other controls. He applied this to both addresses, ensuring consistency.
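The bit-clearing trick reads as follows (a sketch; the hook plumbing around it is omitted):

```asm
; Clear bit 7 (button B) in both controller mirrors,
; leaving the other seven button bits untouched.
LDA $15    ; $7E0015: buttons currently held
AND #$7F   ; AND with 0111 1111 forces the B bit to 0
STA $15    ; write the masked value back
LDA $16    ; $7E0016: buttons newly pressed this frame
AND #$7F
STA $16
```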

To repurpose button B for firing projectiles, Nicolas used 7E0016, which flags buttons pressed in a single frame. With LDA $16 and AND #$80 (isolating B’s bit), followed by BEQ (branch if zero) to skip firing, he ensured projectiles spawned only on the frame B was pressed. A JSL (jump to subroutine long) invoked a community routine to spawn a custom sprite—a projectile that moves right and destroys enemies.
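Put together, the press check might look like this (a sketch; the label and routine name are illustrative, not Nicolas’s actual code):

```asm
LDA $16              ; buttons pressed this exact frame
AND #$80             ; keep only bit 7 (button B)
BEQ .skip            ; result is zero -> B not pressed, do nothing
JSL SpawnProjectile  ; community spawn routine (name hypothetical)
.skip:
```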

These demos showcased assembler’s precision, leveraging memory maps and opcodes to reshape gameplay. Nicolas’s iterative approach—testing, tweaking, and re-running—mirrored real-world debugging.

Mastering the Craft: Hooks and the Stack

Reflecting on 2021, the “Modern Age,” Nicolas shared how he mastered code insertion. Since modifying Super Mario World’s original ROM risks corruption, he used hooks—redirects to free memory spaces. A tool inserts custom code at an address like $A00, replacing a segment (e.g., four octets) with a JSL (jump subroutine long) to a hook. The hook preserves original code, jumps to the custom code via JML (jump long), and returns with RTL (return long), seamlessly integrating modifications.
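The hook pattern he described can be sketched like this (labels are illustrative):

```asm
; At the patched address, four original octets are replaced by:
JSL Hook             ; 4 bytes: pushes the return address, jumps away

; In free ROM space:
Hook:
; ...re-execute the original octets that the JSL overwrote...
JML CustomCode       ; jump long into the new routine
CustomCode:
; ...the modification itself...
RTL                  ; pop the address pushed by JSL and resume the game
```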

The stack, a RAM region for temporary data, proved crucial. Managed by a stack pointer register, it supports opcodes like PHA (push accumulator) and PLA (pull accumulator). JSL pushes the return address before jumping, and RTL pops it, ensuring correct returns. This mechanism enabled complex routines without disrupting the game’s flow.

Nicolas introduced index registers X and Y, which support opcodes like LDX and STX. Indexed addressing (e.g., LDA $00,X) adds X’s value to an address, enabling dynamic memory access. For example, setting X to 2 and using LDA $00,X accesses address $02.
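A two-line illustration of indexed addressing:

```asm
LDX #$02    ; X = 2
LDA $00,X   ; effective address = $00 + X, i.e. $02
```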

Conquering the Game and Beyond

In a final demo, Nicolas teleported T-Rex to the game’s credits by checking sprite states. Address 7E14C8 and the next 11 addresses track 12 sprite slots (0 for empty). Using X as a counter, he looped through LDA $14C8,X, branching with BNE (branch if not zero) if a sprite exists, or decrementing X with DEX and looping with BPL (branch if positive). If all slots are empty, a JSR (jump subroutine) triggers the credits, ending the game.
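The loop he described might be sketched as follows (the labels and subroutine name are illustrative):

```asm
LDX #$0B            ; 12 slots -> start at index 11
.loop:
LDA $14C8,X         ; sprite status for slot X (0 = empty)
BNE .done           ; non-zero -> a sprite is still alive, stop checking
DEX                 ; next slot
BPL .loop           ; repeat while X >= 0
JSR TriggerCredits  ; every slot was empty: roll the credits
.done:
```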

Nicolas concluded with reflections on his five-year journey, likening assembler to a steep but rewarding climb. His game, nearing release on the Super Mario World hacking community’s site, features space battles and a 3D boss, pushing SNES limits. He urged developers to embrace challenging learning paths, emphasizing that persistence yields profound satisfaction.

Hashtags: #Assembler #DevoxxFrance #SuperNintendo #RetroGaming #SopraSteria #LowLevelProgramming

PostHeaderIcon Understanding Dependency Management and Resolution: A Look at Java, Python, and Node.js


Mastering how dependencies are handled can define your project’s success or failure. Let’s explore the nuances across today’s major development ecosystems.

Introduction

Every modern application relies heavily on external libraries. These libraries accelerate development, improve security, and enable integration with third-party services. However, unmanaged dependencies can lead to catastrophic issues — from version conflicts to severe security vulnerabilities. That’s why understanding dependency management and resolution is absolutely essential, particularly across different programming ecosystems.

What is Dependency Management?

Dependency management involves declaring external components your project needs, installing them properly, ensuring their correct versions, and resolving conflicts when multiple components depend on different versions of the same library. It also includes updating libraries responsibly and securely over time. In short, good dependency management prevents issues like broken builds, “dependency hell”, or serious security holes.

Java: Maven and Gradle

In the Java ecosystem, dependency management is an integrated and structured part of the build lifecycle, using tools like Maven and Gradle.

Maven and Dependency Scopes

Maven uses a declarative pom.xml file to list dependencies. A particularly important notion in Maven is the dependency scope.

Scopes control where and how dependencies are used. Examples include:

  • compile (default): Needed at both compile time and runtime.
  • provided: Needed for compile, but provided at runtime by the environment (e.g., Servlet API in a container).
  • runtime: Needed only at runtime, not at compile time.
  • test: Used exclusively for testing (JUnit, Mockito, etc.).
  • system: Provided by the system explicitly (deprecated practice).

<dependency>
  <groupId>junit</groupId>
  <artifactId>junit</artifactId>
  <version>4.13.2</version>
  <scope>test</scope>
</dependency>
    

This nuanced control allows Java developers to avoid bloating production artifacts with unnecessary libraries and to fine-tune build behaviors—a level of granularity that simpler schemes, such as pip’s requirements files or npm’s dependencies/devDependencies split, do not offer.

Gradle

Gradle, offering both Groovy and Kotlin DSLs, also supports scopes through configurations like implementation, runtimeOnly, testImplementation, which have similar meanings to Maven scopes but are even more flexible.


dependencies {
    implementation 'org.springframework.boot:spring-boot-starter'
    testImplementation 'org.springframework.boot:spring-boot-starter-test'
}
    

Python: pip and Poetry

Python dependency management is simpler but also less structured than Java’s. With pip, there is no formal concept of scopes.

pip

Developers typically separate main dependencies and development dependencies manually using different files:

  • requirements.txt – Main project dependencies.
  • requirements-dev.txt – Development and test dependencies (pytest, tox, etc.).

This manual split is prone to human error and lacks the rigorous environment control that Maven or Gradle enforce.
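The convention typically looks like this (file names follow the community convention; the pinned versions are illustrative):

```text
# requirements.txt — production dependencies, ideally pinned
flask==3.0.3
requests==2.31.0

# requirements-dev.txt — pulls in production deps, then adds tooling
-r requirements.txt
pytest==7.4.4
tox==4.11.4
```

The `-r requirements.txt` include line keeps the two files in sync, but nothing enforces the split: any developer can install the wrong file or forget the include.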

Poetry

Poetry improves the situation by introducing a structured division:


[tool.poetry.dependencies]
requests = "^2.31"

[tool.poetry.group.dev.dependencies]
pytest = "^7.1"
    

Poetry brings concepts closer to Maven scopes, but they are still less fine-grained (no runtime/compile distinction, for instance).

Node.js: npm and Yarn

JavaScript dependency managers like npm and yarn allow a simple distinction between regular and development dependencies.

npm

Dependencies are declared in package.json under different sections:

  • dependencies – Needed in production.
  • devDependencies – Needed only for development (e.g., testing libraries, linters).

{
  "dependencies": {
    "express": "^4.18.2"
  },
  "devDependencies": {
    "mocha": "^10.2.0"
  }
}
    

While convenient, npm’s dependency management lacks Maven’s level of strictness around dependency resolution, often leading to version mismatches or “node_modules bloat.”

Key Differences Between Ecosystems

When switching between Java, Python, and Node.js environments, developers must be aware of the following fundamental differences:

1. Formality of Scopes

Java’s Maven/Gradle ecosystem defines scopes formally at the dependency level. Python (pip) and JavaScript (npm) ecosystems use looser, file- or section-based categorization.

2. Handling of Transitive Dependencies

Maven and Gradle resolve and include transitive dependencies automatically with sophisticated conflict-resolution strategies (e.g., nearest version wins). pip historically had weak transitive-dependency handling; its newer resolver (pip 20.3+) helps, but careful pinning is still advisable. npm has flattened node_modules since v3 and tightened peer-dependency resolution in v7, yet conflicts still occur in complex trees.
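When a conflict does bite, each ecosystem ships a command to inspect the resolved graph; for example (standard commands, output omitted):

```shell
# Maven: print the full resolved tree, including omitted duplicate versions
mvn dependency:tree -Dverbose

# npm: show where (and at which versions) a package appears in the tree
npm ls express
```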

3. Lockfiles

npm/yarn and Python Poetry use lockfiles (package-lock.json, yarn.lock, poetry.lock) to ensure consistent dependency installations across machines. Maven and Gradle historically did not need lockfiles because they strictly followed declared versions and scopes. However, Gradle introduced lockfile support with dependency locking in newer versions.

4. Dependency Updating Strategy

Java developers often manually manage dependency versions inside pom.xml or use dependencyManagement blocks for centralized control. pip requires updating requirements.txt or regenerating them via pip freeze. npm/yarn allows semver rules (“^”, “~”) but auto-updating can lead to subtle breakages if not careful.

Best Practices Across All Languages

  • Pin exact versions wherever possible to avoid surprise updates.
  • Use lockfiles and commit them to version control (Git).
  • Separate production and development/test dependencies explicitly.
  • Use dependency scanners (e.g., OWASP Dependency-Check, Snyk, npm audit) regularly to detect vulnerabilities.
  • Prefer stable, maintained libraries with good community support and recent commits.

Conclusion

Dependency management, while often overlooked early in projects, becomes critical as applications scale. Maven and Gradle offer the most fine-grained controls via dependency scopes and conflict resolution. The Python and JavaScript ecosystems are evolving rapidly but still demand more manual care from developers. Understanding these differences, and applying best practices accordingly, will ensure smoother builds, faster delivery, and safer production systems.

Interested in deeper dives into dependency vulnerability scanning, SBOM generation, or automatic dependency update pipelines? Subscribe to our blog for more in-depth content!

PostHeaderIcon [Devoxx FR 2024] Mastering Reproducible Builds with Apache Maven: Insights from Hervé Boutemy


Introduction

In a recent presentation, Hervé Boutemy, a veteran Maven maintainer, Apache Software Foundation member, and Solution Architect at Sonatype, delivered a compelling talk on reproducible builds with Apache Maven. With over 20 years of experience in Java, CI/CD, DevOps, and software supply chain security, Hervé shared his five-year journey to make Maven builds reproducible, a critical practice for achieving the highest level of trust in software, as defined by SLSA Level 4. This post dives into the key concepts, practical steps, and surprising benefits of reproducible builds, based on Hervé’s insights and hands-on demonstrations.

What Are Reproducible Builds?

Reproducible builds ensure that compiling the same source code, with the same environment and build tools, produces identical binaries, byte-for-byte. This practice verifies that the distributed binary matches the source code, eliminating risks like malicious tampering or unintended changes. Hervé highlighted the infamous XZ incident, where discrepancies between source tarballs and Git repositories went unnoticed—reproducible builds could have caught this by ensuring the binary matched the expected source.

Originally pioneered by Linux distributions like Debian in 2013, reproducible builds have gained traction in the Java ecosystem. Hervé’s work has led to over 2,000 verified reproducible releases from 500+ open-source projects on Maven Central, with stats growing weekly.

Why Reproducible Builds Matter

Reproducible builds are primarily about security. They allow anyone to rebuild a project and confirm that the binary hasn’t been compromised (e.g., no backdoors or “dodgy” additions, as Hervé humorously put it). But Hervé’s five-year experience revealed additional benefits:

  • Build Validation: Ensure patches or modifications don’t introduce unintended changes. A “build successful” message doesn’t guarantee the binary is correct—reproducible builds do.
  • Data Leak Prevention: Hervé found sensitive data (e.g., usernames, machine names, even a PGP passphrase!) embedded in Maven Central artifacts, exposing personal or organizational details.
  • Enterprise Trust: When outsourcing development, reproducible builds verify that a vendor’s binary matches the provided source, saving time and reducing risk.
  • Build Efficiency: Reproducible builds enable caching optimizations, improving build performance.

These benefits extend beyond security, making reproducible builds a powerful tool for developers, enterprises, and open-source communities.

Implementing Reproducible Builds with Maven

Hervé outlined a practical workflow to achieve reproducible builds, demonstrated through his open-source project, reproducible-central, which includes scripts and rebuild recipes for 3,500+ compilations across 627+ projects. Here’s how to make your Maven builds reproducible:

Step 1: Rebuild and Verify

Start by rebuilding a project from its source (e.g., a Git repository tag) and comparing the output binary to a reference (e.g., Maven Central or an internal repository). Hervé’s rebuild.sh script automates this:

  • Specify the Environment: Define the JDK (e.g., JDK 8 or 17), OS (Windows, Linux, FreeBSD), and Maven command (e.g., mvn clean verify -DskipTests).
  • Use Docker: The script creates a Docker image with the exact environment (JDK, OS, Maven version) to ensure consistency.
  • Compare Binaries: The script downloads the reference binary and checks if the rebuilt binary matches, reporting success or failure.

Hervé demonstrated this with the Maven Javadoc Plugin (version 3.5.0), showing a 100% reproducible build when the environment matched the original (e.g., JDK 8 on Windows).

Step 2: Diagnose Differences

If the binaries don’t match, use diffoscope, a tool from the Linux reproducible builds community, to analyze differences. Diffoscope compares archives (e.g., JARs), nested archives, and even disassembles bytecode to pinpoint issues like:

  • Timestamps: JARs include file timestamps, which vary by build time.
  • File Order: ZIP-based JARs don’t guarantee consistent file ordering.
  • Bytecode Variations: Different JDK major versions produce different bytecode, even for the same target (e.g., targeting Java 8 with JDK 17 vs. JDK 8).
  • Permissions: File permissions (e.g., group write access) differ across environments.

Hervé showed a case where a build failed due to a JDK mismatch (JDK 11 vs. JDK 8), which diffoscope revealed through bytecode differences.

Step 3: Configure Maven for Reproducibility

To make builds reproducible, address common sources of “noise” in Maven projects:

  • Fix Timestamps: Set a consistent timestamp using the project.build.outputTimestamp property, managed by the Maven Release or Versions plugins. This ensures JARs have identical timestamps across builds.
  • Upgrade Plugins: Many Maven plugins historically introduced variability (e.g., random timestamps or environment-specific data). Hervé contributed fixes to numerous plugins, and his artifact:check-buildplan goal identifies outdated plugins, suggesting upgrades to reproducible versions.
  • Avoid Non-Reproducible Outputs: Skip Javadoc generation (highly variable) and GPG signing (non-reproducible by design) during verification.

For example, Hervé explained that configuring project.build.outputTimestamp and upgrading plugins eliminated timestamp and file-order issues in JARs, making builds reproducible.
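In the POM, the property is a one-liner (the timestamp value is illustrative; the Release and Versions plugins keep it up to date on each release):

```xml
<properties>
  <!-- Fixes every timestamp embedded in the produced archives -->
  <project.build.outputTimestamp>2024-01-01T00:00:00Z</project.build.outputTimestamp>
</properties>
```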

Step 4: Test Locally

Before scaling, test reproducibility locally using mvn verify (not install, which pollutes the local repository). The artifact:compare goal compares your build output to a reference binary (e.g., from Maven Central or an internal repository). For internal projects, specify your repository URL as a parameter.

To test without a remote repository, build twice locally: run mvn install for the first build, then mvn verify for the second, comparing the results. This catches issues like unfixed dates or environment-specific data.
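The two-build check above boils down to (a sketch):

```shell
mvn clean install                    # build 1: puts the reference artifact in ~/.m2
mvn clean verify artifact:compare    # build 2: rebuilds and diffs against build 1
```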

Step 5: Scale and Report

For large-scale verification, adapt Hervé’s reproducible-central scripts to your internal repository. These scripts generate reports with group IDs, artifact IDs, and reproducibility scores, helping track progress across releases. Hervé’s stats (e.g., 100% reproducibility for some projects, partial for others) provide a model for enterprise reporting.

Challenges and Lessons Learned

Hervé shared several challenges and insights from his journey:

  • JDK Variability: Bytecode differs across major JDK versions, even for the same target. Always match the original JDK major version (e.g., JDK 8 for a Java 8 target).
  • Environment Differences: Windows vs. Linux line endings (CRLF vs. LF) or file permissions (e.g., group write access) can break reproducibility. Docker ensures consistent environments.
  • Plugin Issues: Older plugins introduced variability, but Hervé’s contributions have made modern versions reproducible.
  • Unexpected Findings: Reproducible builds uncovered sensitive data in Maven Central artifacts, highlighting the need for careful build hygiene.

One surprising lesson came from file permissions: Hervé discovered that newer Linux distributions default to non-writable group permissions, unlike older ones, requiring adjustments to build recipes.

Interactive Learning: The Quiz

Hervé ended with a fun quiz to test the audience’s understanding, presenting rebuild results and asking, “Reproducible or not?” Examples included:

  • Case 1: A Maven Javadoc Plugin 3.5.0 build matched the reference perfectly (reproducible).
  • Case 2: A build showed bytecode differences due to a JDK mismatch (JDK 11 vs. JDK 8, not reproducible).
  • Case 3: A build differed only in file permissions (group write access), fixable by adjusting the environment (reproducible with a corrected recipe).

The quiz reinforced a key point: reproducibility requires precise environment matching, but tools like diffoscope make debugging straightforward.

Getting Started

Ready to make your Maven builds reproducible? Follow these steps:

  1. Clone reproducible-central and explore Hervé’s scripts and stats.
  2. Run mvn artifact:check-buildplan to identify and upgrade non-reproducible plugins.
  3. Set project.build.outputTimestamp in your POM file to fix JAR timestamps.
  4. Test locally with mvn verify and artifact:compare, specifying your repository if needed.
  5. Scale up using rebuild.sh and Docker for consistent environments, adapting to your internal repository.

Hervé encourages feedback to improve his tools, so if you hit issues, reach out via the project’s GitHub or Apache’s community channels.

Conclusion

Reproducible builds with Maven are not only achievable but transformative, offering security, trust, and operational benefits. Hervé Boutemy’s work demystifies the process, providing tools, scripts, and a clear roadmap to success. From preventing backdoors to catching configuration errors and sensitive data leaks, reproducible builds are a must-have for modern Java development.

Start small with artifact:check-buildplan, test locally, and scale with reproducible-central. As Hervé’s 3,500+ rebuilds show, the Java community is well on its way to making reproducibility the norm. Join the movement, and let’s build software we can trust!

Resources

PostHeaderIcon [Devoxx FR 2024] Instrumenting Java Applications with OpenTelemetry: A Comprehensive Guide


Introduction

At Devoxx France 2024, Bruce Bujon, an R&D Engineer at Datadog and an open-source developer, delivered an insightful talk on instrumenting Java applications with OpenTelemetry. This powerful observability framework is transforming how developers monitor and analyze application performance, infrastructure, and security. In this detailed post, we’ll explore the key concepts from Bruce’s presentation, breaking down OpenTelemetry, its components, and practical steps to implement it in Java applications.

What is OpenTelemetry?

OpenTelemetry is an open-source observability framework designed to collect, process, and export telemetry data in a vendor-agnostic manner. It captures data from various sources—such as virtual machines, databases, and applications—and exports it to observability backends for analysis. Importantly, OpenTelemetry focuses solely on data collection and management, leaving visualization and analysis to backend tools like Datadog, Jaeger, or Grafana.

The framework supports three primary signals:

  • Traces: These map the journey of requests through an application, highlighting the time taken by each component or microservice.
  • Logs: Timestamped events, such as user actions or system errors, familiar to most developers.
  • Metrics: Aggregated numerical data, like request rates, error counts, or CPU usage over time.

In his talk, Bruce focused on traces, which are particularly valuable for understanding performance bottlenecks in distributed systems.

Why Use OpenTelemetry for Java Applications?

For Java developers, OpenTelemetry offers a standardized way to instrument applications, ensuring compatibility with various observability backends. Its flexibility allows developers to collect telemetry data without being tied to a specific tool, making it ideal for diverse tech stacks. Bruce highlighted its growing adoption, noting that OpenTelemetry is the second most active project in the Cloud Native Computing Foundation (CNCF), behind only Kubernetes.

Instrumenting a Java Application: A Step-by-Step Guide

Bruce demonstrated three approaches to instrumenting Java applications with OpenTelemetry, using a simple example of two web services: an “Order” service and a “Storage” service. The goal was to trace a request from the Order service, which calls the Storage service to check stock levels for items like hats, bags, and socks.

Approach 1: Manual Instrumentation with OpenTelemetry API and SDK

The first approach involves manually instrumenting the application using the OpenTelemetry API and SDK. This method offers maximum control but requires significant development effort.

Steps:

  1. Add Dependencies: Include the OpenTelemetry Bill of Materials (BOM) to manage library versions, along with the API, SDK, OTLP exporter, and semantic conventions.
  2. Initialize the SDK: Set up a TracerProvider with a resource defining the service (e.g., “storage”) and attributes like service name and deployment environment.
  3. Create a Tracer: Use the Tracer to generate spans for specific operations, such as a web route or internal method.
  4. Instrument Routes: For each route or method, create a span using a SpanBuilder, set attributes (e.g., span kind as “server”), and mark the start and end of the span.
  5. Export Data: Configure the SDK to export spans to an OpenTelemetry Collector via the OTLP protocol.
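Steps 2–4 might look roughly like this for the Storage service (a hedged sketch: the class, route, and wiring are illustrative, and it assumes the OpenTelemetry API/SDK artifacts from step 1 are on the classpath):

```java
import io.opentelemetry.api.OpenTelemetry;
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.api.trace.SpanKind;
import io.opentelemetry.api.trace.Tracer;
import io.opentelemetry.context.Scope;

public class StorageInstrumentation {
    private final Tracer tracer;

    StorageInstrumentation(OpenTelemetry otel) {
        // One Tracer per instrumented component
        this.tracer = otel.getTracer("storage");
    }

    String checkStock(String item) {
        // One span per handled request, marked as a server-side span
        Span span = tracer.spanBuilder("GET /stock/{item}")
                .setSpanKind(SpanKind.SERVER)
                .setAttribute("http.request.method", "GET")
                .startSpan();
        try (Scope scope = span.makeCurrent()) {
            return lookup(item);   // the instrumented business logic
        } finally {
            span.end();            // always close the span
        }
    }

    private String lookup(String item) { return "in stock: " + item; }
}
```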

Example Output: Bruce showed a trace with two spans—one for the route and one for an internal method—displayed in Datadog’s APM view, with attributes like service name and HTTP method.

Pros: Fine-grained control over instrumentation.

Cons: Verbose and time-consuming, especially for large applications or libraries with private APIs.

Approach 2: Framework Support with Spring Boot

The second approach leverages framework-specific integrations, such as Spring Boot’s OpenTelemetry starter, to automate instrumentation.

Steps:

  1. Add Spring Boot Starter: Include the OpenTelemetry starter, which bundles the API, SDK, exporter, and autoconfigure dependencies.
  2. Configure Environment Variables: Set variables for the service name, OTLP endpoint, and other settings.
  3. Run the Application: The starter automatically instruments web routes, capturing HTTP methods, routes, and response codes.

Example Output: Bruce demonstrated a trace for the Order service, with spans automatically generated for routes and tagged with HTTP metadata.

Pros: Minimal code changes and good generic instrumentation.

Cons: Limited customization and varying support across frameworks (e.g., Spring Boot doesn’t support JDBC out of the box).

Approach 3: Auto-Instrumentation with JVM Agent

The third and most powerful approach uses the OpenTelemetry JVM agent for automatic instrumentation, requiring minimal code changes.

Steps:

  1. Add the JVM Agent: Attach the OpenTelemetry Java agent to the JVM using a command-line option (e.g., -javaagent:opentelemetry-javaagent.jar).
  2. Configure Environment Variables: Use autoconfigure variables (around 80 options) to customize the agent’s behavior.
  3. Remove Manual Instrumentation: Eliminate SDK, exporter, and framework dependencies, keeping only the API and semantic conventions for custom instrumentation.
  4. Run the Application: The agent instruments web servers, clients, and libraries (e.g., JDBC, Kafka) at runtime.
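A minimal launch might look like this (the service name and endpoint are illustrative; the variables are standard OpenTelemetry autoconfigure settings):

```shell
export OTEL_SERVICE_NAME=order
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317
java -javaagent:opentelemetry-javaagent.jar -jar order-service.jar
```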

Example Output: Bruce showcased a complete distributed trace, including spans for both services, web clients, and servers, with context propagation handled automatically.

Pros: Comprehensive instrumentation with minimal effort, supporting over 100 libraries.

Cons: Potential conflicts with other JVM agents (e.g., security tools) and limited support for native images (e.g., Quarkus).

Context Propagation: Linking Traces Across Services

A critical aspect of distributed tracing is context propagation, ensuring that spans from different services are linked within a single trace. Bruce explained that without propagation, the Order and Storage services generated separate traces.

To address this, OpenTelemetry uses HTTP headers (e.g., W3C’s traceparent and tracestate) to carry tracing context. In the manual approach, Bruce implemented a RestTemplate interceptor in Spring to inject headers and a Quarkus filter to extract them. The JVM agent, however, handles this automatically, simplifying the process.
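Concretely, a propagated request carries a header such as the following (the value is the worked example from the W3C Trace Context specification):

```text
traceparent: 00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01
```

The four fields are the version, the 16-byte trace ID shared by every span in the trace, the 8-byte ID of the calling span, and trace flags (01 = sampled).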

Additional Considerations

  • Baggage: In response to an audience question, Bruce clarified that OpenTelemetry’s baggage feature allows propagating business-specific metadata across services, complementing tracing context.
  • Cloud-Native Support: While cloud providers like AWS Lambda have proprietary monitoring solutions, their native support for OpenTelemetry varies. Bruce suggested further exploration for specific use cases like batch jobs or serverless functions.
  • Performance: The JVM agent modifies bytecode at runtime, which may impact startup time but generally has negligible runtime overhead.

Conclusion

OpenTelemetry is a game-changer for Java developers seeking to enhance application observability. As Bruce demonstrated, it offers three flexible approaches—manual instrumentation, framework support, and auto-instrumentation—catering to different needs and expertise levels. The JVM agent stands out for its ease of use and comprehensive coverage, making it an excellent starting point for teams new to OpenTelemetry.

To get started, add the OpenTelemetry Java agent to your application with a single command-line option and configure it via environment variables. This minimal setup allows you to immediately observe your application’s behavior and assess OpenTelemetry’s value for your team.

The code and slides from Bruce’s presentation are available on GitHub, providing a practical reference for implementing OpenTelemetry in your projects. Whether you’re monitoring microservices or monoliths, OpenTelemetry empowers you to gain deep insights into your applications’ performance and behavior.

Resources

PostHeaderIcon [KotlinConf2023] Java and Kotlin: A Mutual Evolution

At KotlinConf2023, John Pampuch, Google’s production languages lead, delivered a history lesson on Java and Kotlin’s intertwined journeys. Battling jet lag with humor, John traced nearly three decades of Java and twelve years of Kotlin, emphasizing their complementary strengths. From Java’s robust ecosystem to Kotlin’s pragmatic innovation, the languages have shaped each other, accelerating progress. John’s talk, rooted in his experience since Java’s 1996 debut, explored design goals, feature cross-pollination, and future implications, urging developers to leverage Kotlin’s developer-friendly features while appreciating Java’s stability.

Design Philosophies: Pragmatism Meets Robustness

John opened by contrasting the languages’ origins. Java, launched in 1995, aimed for simplicity, security, and portability, aligning tightly with the JVM and JDK. Its ecosystem, bolstered by libraries and tooling, set a standard for enterprise development. Kotlin, announced in 2011 by JetBrains, prioritized pragmatism: concise syntax, interoperability with Java, and multiplatform flexibility. Unlike Java’s JVM dependency, Kotlin targets iOS, web, and beyond, enabling faster feature rollouts. John noted Kotlin’s design avoids Java’s rigidity, embracing object-oriented principles with practical tweaks like semicolon-free lines. Yet Java’s self-consistency, seen in its holistic lambda integration, complements Kotlin’s adaptability, creating a synergy where both thrive.

Feature Evolution: From Lambdas to Coroutines

The talk highlighted key milestones. Java’s 2014 release of JDK 8 introduced lambdas, default methods, and type inference, transforming APIs to support functional programming. Kotlin, with 1.0 in 2016, brought smart casts, string templates, and named arguments, prioritizing developer ease. By 2018, Kotlin’s coroutines revolutionized JVM asynchronous programming, offering a simpler mental model than Java’s threads. John praised coroutines as a potential game-changer, though Java’s 2023 virtual threads and structured concurrency aim to close the gap. Kotlin’s multiplatform support, cemented by Google’s 2017 Android endorsement, outpaces Java’s JVM-centric approach, but Java’s predictable six-month release cycle since 2017 ensures steady progress. These advancements reflect a race where each language pushes the other forward.
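The fine-grained concurrency John praised can be sketched in a few lines (assuming the kotlinx-coroutines-core dependency; the function name is ours, for illustration):

```kotlin
import kotlinx.coroutines.*
import java.util.concurrent.atomic.AtomicInteger

// Launch thousands of coroutines; delay() suspends each one without
// blocking a platform thread, so this is cheap even for large n.
fun countWithCoroutines(n: Int): Int = runBlocking {
    val counter = AtomicInteger()
    coroutineScope {                 // waits for every child to finish
        repeat(n) {
            launch {
                delay(1)             // suspends, freeing the thread
                counter.incrementAndGet()
            }
        }
    }
    counter.get()
}
```

Doing the same with one platform thread per task would exhaust memory long before reaching the counts coroutines handle comfortably.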

Mutual Influences: Sealed Classes and Beyond

John emphasized cross-pollination. Java’s 2021 records, inspired by frameworks like Lombok, mirror Kotlin’s data classes, though Kotlin’s named parameters reduce boilerplate further. Sealed classes, introduced in Java 17 and Kotlin 1.5 around 2021, emerged concurrently, suggesting shared inspiration. Kotlin’s string templates, a staple since its early days, influenced Java’s 2024 preview of flexible string templates, which John hopes Kotlin might adopt for localization. Java’s exploration of nullability annotations, potentially aligning with Kotlin’s robust null safety, shows ongoing convergence. John speculated that community demand could push Java toward features like named arguments, though JVM changes remain a hurdle. This mutual learning, fueled by competition with languages like Go and Rust, drives excitement and innovation.
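A minimal Kotlin sketch of the features being mirrored — data classes (the counterpart of Java’s records) and sealed hierarchies (finalized in Java 17 and Kotlin 1.5):

```kotlin
// Data classes generate equals/hashCode/toString/copy, like Java records.
data class Point(val x: Int, val y: Int)

// A sealed hierarchy: all subtypes are known at compile time.
sealed interface Shape
data class Circle(val radius: Double) : Shape
data class Square(val side: Double) : Shape

// Sealed types make `when` exhaustive without an else branch —
// the same benefit Java gets from sealed classes in switch patterns.
fun area(s: Shape): Double = when (s) {
    is Circle -> Math.PI * s.radius * s.radius
    is Square -> s.side * s.side
}
```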

Looking Ahead: Pragmatism and Compatibility

John concluded with a call to action: embrace Kotlin’s compact, readable features while valuing Java’s compile-time speed and ecosystem. Kotlin’s faster feature delivery and multiplatform prowess contrast with Java’s backwards compatibility and predictability. Yet both share a commitment to pragmatic evolution, avoiding breaks in millions of applications. Questions from the audience probed Java’s nullability and virtual threads, with John optimistic about eventual alignment but cautious about timelines. His talk underscored that Java and Kotlin’s competition isn’t zero-sum—it’s a catalyst for better tools, ideas, and developer experiences, ensuring both languages remain vital.

Hashtags: #Java #Kotlin

PostHeaderIcon [AWS Summit Berlin 2023] Go-to-Market with Your Startup: Tips and Best Practices from VC Investors

At AWS Summit Berlin 2023, David Roldán, Head of Startup Business Development for EMEA at AWS, led a 48-minute panel, available on YouTube, featuring VC investors and operators: Constantine, CTO and co-founder of PlanRadar, Jasper, partner at Cherry Ventures, and Gloria, founder of Beyond Capital. This post, targeting startup founders, explores go-to-market (GTM) strategies for B2B SaaS, emphasizing product-segment fit, iterative processes, and avoiding premature scaling in a competitive landscape with over 100,000 independent software vendors.

Defining Product-Segment Fit

Jasper introduced the concept of product-segment fit, arguing it’s more precise than product-market fit for early-stage startups. He emphasized that founders should target a specific customer segment where the product resonates strongly, rather than chasing universal appeal. For example, PlanRadar, serving the construction industry, found success by focusing on old-fashioned outbound sales to reach decision-makers in a niche vertical. Gloria reinforced this, noting that chasing a single “killer feature” often distracts from solving core use cases. Instead, founders should iterate based on customer feedback, ensuring the product delivers immediate value to a well-defined audience, avoiding dilution of focus across disparate segments.

Iterative GTM Strategies

Constantine shared PlanRadar’s journey, highlighting the iterative nature of GTM. With a five-founder team spanning commercial, industry, and tech expertise, PlanRadar prioritized early customer feedback over polished features. He advised launching minimum viable products to test assumptions, even if imperfect, to refine offerings rapidly. Gloria added that data infrastructure, like a well-structured CRM, is critical before Series A to track sales cycles and conversion stages. However, Jasper cautioned against over-rationalizing early GTM with tools like Salesforce, which can burden seed-stage startups. Instead, founders should stay hands-on, engaging directly with customers to build velocity in the sales pipeline.

Avoiding Premature Scaling

Gloria and Constantine stressed the dangers of premature scaling, particularly in hiring. Gloria advised against hiring product managers too early, recommending product engineers who can own the roadmap alongside founders until post-Series A. Constantine echoed this, noting PlanRadar delayed building a product management team until after Series A due to workload and complexity, hiring an ex-founder after a year-long search. Jasper highlighted that premature hires, like sales managers craving predictability, can push startups to scale in the wrong segment, leading to misaligned products. The panel agreed that founders must retain product vision, avoiding delegation to non-founders who lack the same long-term perspective.

Customer Success and Retention

Retention emerged as a key GTM metric, but its priority depends on stage. Gloria argued that early churn is acceptable to refine product-segment fit, but post-product-market fit, net retention becomes critical, reflecting customer love through renewals and upsells. Constantine detailed PlanRadar’s post-Series A customer success team, which segments customers (gold, silver, bronze) using usage data to allocate scarce resources effectively. He noted charging for onboarding, common in Europe, boosts engagement by signaling value. Gloria emphasized three pillars: activation (fast onboarding), engagement (tracking feature usage), and renewals (modularizing products for cross-selling), ensuring startups maximize lifetime value as they scale.

PostHeaderIcon [DevoxxBE 2023] Introducing Flow: The Worst Software Development Approach in History

In a satirical yet insightful closing keynote at Devoxx Belgium 2023, Sander Hoogendoorn and Kim van Wilgen, seasoned software development experts, introduced “Flow,” a fictional methodology designed to expose the absurdities of overly complex software development practices. With humor and sharp critique, Sander and Kim drew from decades of experience to lampoon methodologies like Waterfall, Scrum, SAFe, and Spotify, blending real-world anecdotes with exaggerated principles to highlight what not to do. Their talk, laced with wit, ultimately transitioned to earnest advice, advocating for simplicity, autonomy, and human-centric development. This presentation offers a mirror to the industry, urging developers to critically evaluate methodologies and prioritize effective, enjoyable work.

The Misadventure of Methodologies

Sander kicked off with a historical detour, debunking the myth of Waterfall’s rigidity. Citing Winston Royce’s 1970 paper, he revealed that Waterfall was meant to be iterative, allowing developers to revisit phases—a concept ignored for decades, costing billions. This set the stage for Flow, a methodology born from a tongue-in-cheek desire to maximize project duration for consultancy profits. Kim explained how they cherry-picked the worst elements from existing frameworks: endless sprints from Scrum, gamification to curb autonomy, and an alphabet soup of roles from SAFe.

Their critique was grounded in real-world failures. Sander shared a Belgian project where misestimated sprints and 300 outsourced developers led to chaos, exacerbated by documentation in Dutch and French. Kim highlighted how methodologies like SAFe balloon roles, sidelining customers and adding complexity. By naming Flow with trendy buzzwords—Kaizen, continuous disappointment, and pointless—they mocked the industry’s obsession with jargon over substance.

The Flow Framework: A Recipe for Dysfunction

Flow’s principles, as Sander and Kim outlined, are deliberately counterproductive. Sprints, renamed “mini-Waterfalls,” ensure repeated failures, with burn charts (not burn-down charts) showing growing work without progress. Meetings, dubbed “Flow meetings,” are scheduled to disrupt developers’ focus, with random topics and high-placed interruptions—like a 2.5-meter-tall CEO bursting in. Kim emphasized gamification, stripping teams of real autonomy while offering trivial perks like workspace decoration, exemplified by a ball pit job interview at a Dutch e-commerce firm.

The Flow Manifesto, a parody of the Agile Manifesto, prioritizes “extensive certification over hands-on experience” and “meetings over focus.” Sander recounted a project in France with a 20-column board so confusing that even AI couldn’t decipher its French Post-its. Jira, mandatory in Flow, becomes a tool for obfuscation, with requirements buried in lengthy tickets. Open floor plans and Slack further stifle communication, with “pair slacking” replacing collaboration, ensuring developers remain distracted and disconnected.

Enterprise Flow: Scaling the Absurdity

In large organizations, Flow escalates into the Big Flow Framework (BFF), starting at version 3.0 to sound innovative. Kim critiqued the blind adoption of Spotify’s model, designed for 8x annual growth, which saddles banks with excessive managers—sometimes a 1:1 ratio with developers. Sander recounted a client renaming managers as “tech leads,” adding 118 unnecessary roles to a release train. Certifications, costing €10,000 per recertification, parody the industry’s profit-driven training schemes.

Flow’s tooling, like boards with incomprehensible columns and Jira’s dual Scrum-Kanban confusion, ensures clients remain baffled. Kim highlighted how Enterprise Flow thrives on copying trendy startups like Basecamp, debating irrelevant issues like banning TypeScript or leaving public clouds. Research, they noted, shows no methodology—including SAFe or LeSS—outperforms having none, underscoring Flow’s satirical point: complexity breeds failure.

A Serious Turn: Principles for Better Development

After the laughter, Sander and Kim pivoted to their true beliefs, advocating for a human-centric approach. Software, they stressed, is built by people, not tools or methodologies. Teams should evolve their own practices, using Scrum or Kanban as starting points but adapting to context. Face-to-face communication, trust, and psychological safety are paramount, as red sprints and silencing voices drive talent away.

Focus is sacred, requiring quiet spaces and flexible hours, as ideas often spark outside 9–5. Continuous learning, guarded by dedicating at least one day weekly, prevents stagnation. Autonomy, though initially uncomfortable, empowers teams to make decisions, as Sander’s experience with reluctant developers showed. Flat organizations with minimal hierarchy foster trust, while experienced developers, like those born in the ’60s and ’70s, mentor through code reviews rather than churning out code.

Conclusion: Simplicity and Joy in Development

Sander and Kim’s Flow is a cautionary tale, urging developers to reject bloated methodologies and embrace simplicity. By reducing complexity, as Albert Einstein suggested, teams can deliver value effectively. Above all, they reminded the audience to have fun, celebrating software development as the best industry to be in. Their talk, blending satire with wisdom, inspires developers to craft methodologies that empower people, foster collaboration, and make work enjoyable.

Hashtags: #SoftwareDevelopment #Agile #Flow #Methodologies #DevOps #SanderHoogendoorn #KimVanWilgen #SchubergPhilis #iBOOD #DevoxxBE2023

PostHeaderIcon [DevoxxBE 2023] The Great Divergence: Bridging the Gap Between Industry and University Java

At Devoxx Belgium 2023, Felipe Yanaga, a teaching assistant at the University of North Carolina at Chapel Hill and a Robertson Scholar, delivered a compelling presentation addressing the growing disconnect between the vibrant use of Java in industry and its outdated perception in academia. As a student with internships at Amazon and Google, and a fellow at UNC’s Computer Science Experience Lab, Felipe draws on his unique perspective to highlight how universities lag in teaching modern Java practices. His talk explores the reasons behind this divergence, the negative perceptions students hold about Java, and actionable steps to revitalize its presence in academic settings.

Java’s Strength in Industry

Felipe begins by emphasizing Java’s enduring relevance in the professional world. Far from the “Java is dead” narrative that periodically surfaces online, the language thrives in industry, powered by innovations like Quarkus, GraalVM, and a rapid six-month release cycle. Companies sponsoring Devoxx, such as Red Hat and Oracle, exemplify Java’s robust ecosystem, leveraging frameworks and tools that enhance developer productivity. For instance, Felipe references the keynote by Brian Goetz, which outlined Java’s roadmap, showcasing its adaptability to modern development needs by drawing inspiration from other languages. This continuous evolution ensures Java remains a cornerstone for enterprise applications, from microservices to large-scale systems.

However, Felipe points out a troubling trend: despite its industry strength, Java’s popularity is declining in metrics like GitHub’s language rankings and the TIOBE Index. While JavaScript and Python have surged, Java’s share of relevant Google searches has dropped from 26% in 2002 to under 10% by 2023. Felipe attributes this partly to a shift in academic settings, where the foundation for programming passion is often laid. The disconnect between industry innovation and university curricula is a critical issue that needs addressing to sustain Java’s future.

The Academic Lag: Java’s Outdated Image

In universities, Java’s reputation suffers from outdated teaching practices. Felipe notes that many institutions, including top U.S. universities, have shifted introductory courses from Java to Python, citing Java’s perceived complexity and age. A 2017 quote from a Stanford professor illustrates this sentiment, claiming Java “shows its age” and prompting a move to Python for introductory courses. Surveys of 70 leading U.S. universities confirm this trend, with Python now dominating as the primary teaching language, while Java is relegated to data structures or object-oriented programming courses.

Felipe’s own experience at UNC-Chapel Hill reflects this shift. A decade ago, Java dominated the curriculum, but by 2023, Python had overtaken introductory and database courses. This transition reinforces a perception among students that Java is verbose, bloated, and outdated. Felipe conducted a survey among 181 students in a software engineering course, revealing stark insights: 42% believed Python was in highest industry demand, 67% preferred Python for building REST APIs, and terms like “tedious,” “boring,” and “outdated” dominated a word cloud describing Java. One student even remarked that Java is suitable only for maintaining legacy code, a sentiment that underscores the stigma Felipe aims to dismantle.

The On-Ramp Challenge: Simplifying Java’s Introduction

A significant barrier to Java’s adoption in academia is its steep learning curve for beginners. Felipe contrasts Python’s straightforward “hello world” with Java’s intimidating boilerplate code, such as public static void main. This complexity overwhelms novices, who grapple with concepts like classes and static methods without clear explanations. Instructors often dismiss these as “magic,” which disengages students and fosters a negative perception. Felipe highlights Java’s JEP 445, which introduces unnamed classes and instance main methods to reduce boilerplate, as a promising step to make Java more accessible. By simplifying the initial experience, such innovations could align Java’s on-ramp with Python’s ease, engaging students early and encouraging exploration.

Beyond the language itself, the Java ecosystem poses additional challenges. Installing Java is daunting for beginners, with multiple Oracle websites offering conflicting instructions. Felipe recounts his own struggle as a student, only navigating this thanks to his father’s guidance. Tools like SDKMan and JBang simplify installation and scripting, but these are often unknown to students outside the Java community. Similarly, choosing an IDE—IntelliJ, Eclipse, or VS Code—adds another layer of complexity. Felipe advocates for clear, standardized guidance, such as recommending SDKMan and IntelliJ, to streamline the learning process and make Java’s ecosystem more approachable.
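A possible quick-start along the lines Felipe suggests (the JDK identifier below is illustrative — `sdk list java` shows the real choices on your machine):

```shell
# Install SDKMan, then use it to manage JDKs and tools from one place.
curl -s "https://get.sdkman.io" | bash
sdk install java 21-tem      # a Temurin JDK; pick any listed identifier
sdk install jbang            # JBang: run single-file Java like a script

# Scaffold and run a one-file Java program, no build tool required.
jbang init hello.java
jbang hello.java
```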

Bridging the Divide: Community and Mentorship

To reverse the declining trend in academia, Felipe proposes actionable steps centered on community engagement. He emphasizes the need for industry professionals to connect with universities, citing examples like Tom from Info Support, who collaborates with local schools to demonstrate Java’s real-world applications. By mentoring students and updating professors on modern tools like Maven, Gradle, and Quarkus, industry can reshape Java’s image. Felipe also encourages inviting students to Java User Groups (JUGs), where they can interact with professionals and discover tools that enhance Java development. These initiatives, he argues, plant seeds of enthusiasm that students will share with peers, amplifying Java’s appeal.

Felipe stresses that small actions, like a 10-minute conversation with a student, can make a significant impact. By demystifying stereotypes—such as Java being slow or bloated—and showcasing frameworks like Quarkus with hot reload capabilities, professionals can counter misconceptions. He also addresses the lack of Java-focused workshops compared to Python and JavaScript, urging the community to actively reach out to students. This collective effort, Felipe believes, is crucial to ensuring the next generation of developers sees Java as a vibrant, modern language, not a relic of the past.

Links:

  • University of North Carolina at Chapel Hill

  • Duke University

Hashtags: #Java #SoftwareDevelopment #Education #Quarkus #GraalVM #UNCChapelHill #DukeUniversity #FelipeYanaga

PostHeaderIcon [KotlinConf’2023] Coroutines and Loom: A Deep Dive into Goals and Implementations

The advent of OpenJDK’s Project Loom and its virtual threads has sparked considerable discussion within the Java and Kotlin communities, particularly regarding its relationship with Kotlin Coroutines. Roman Elizarov, Project Lead for Kotlin at JetBrains, addressed this topic head-on at KotlinConf’23 in his talk, “Coroutines and Loom behind the scenes”. His goal was not just to answer whether Loom would make coroutines obsolete (the answer being a clear “no”), but to delve into the distinct design goals, implementations, and trade-offs of each, clarifying how they can coexist and even complement each other. Information about Project Loom can often be found via OpenJDK resources or articles like those on Baeldung.

Roman began by noting that Project Loom, introducing virtual threads to the JVM, was nearing stability, targeted for Java 21 (late 2023). He emphasized that understanding the goals behind each technology is crucial, as these goals heavily influence their design and optimal use cases.

Project Loom: Simplifying Server-Side Concurrency

Project Loom’s primary design goal, as Roman Elizarov explained, is to preserve the thread-per-request programming style prevalent in many existing Java server-side applications, while dramatically increasing scalability. Traditionally, assigning one platform thread per incoming request becomes a bottleneck due to the high cost of platform threads. Virtual threads aim to solve this by providing lightweight, JVM-managed threads that can run existing synchronous, blocking Java code with minimal or no changes. This allows legacy applications to scale much better without requiring a rewrite to asynchronous or reactive patterns.

Loom achieves this by “unmounting” a virtual thread from its carrier (platform) thread when it encounters a blocking operation (like I/O) that has been integrated with Loom. The carrier thread is then free to run other virtual threads. When the blocking operation completes, the virtual thread is “remounted” on a carrier thread to continue execution. This mechanism is largely transparent to the application code. However, Roman pointed out a potential pitfall: if blocking operations occur within synchronized blocks or native JNI calls that haven’t been adapted for Loom, the carrier thread can get “pinned,” preventing unmounting and potentially negating some of Loom’s benefits in those specific scenarios.
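A small sketch of that behavior, assuming JDK 21+ (the function is ours, for illustration):

```kotlin
import java.util.concurrent.ConcurrentLinkedQueue

// Each virtual thread runs plain blocking code; during Thread.sleep the
// JVM unmounts it from its carrier thread, so n can be large and cheap.
fun runOnVirtualThreads(n: Int): Int {
    val results = ConcurrentLinkedQueue<Int>()
    val threads = (1..n).map { i ->
        Thread.ofVirtual().start {
            Thread.sleep(1)      // blocking call: the virtual thread unmounts
            results.add(i)
        }
    }
    threads.forEach { it.join() }
    return results.size
    // Caveat from the talk: the same sleep inside a synchronized block
    // can pin the virtual thread to its carrier, losing this benefit.
}
```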

Kotlin Coroutines: Fine-Grained, Structured Concurrency

In contrast, Kotlin Coroutines were designed with different primary goals:

  1. Enable fine-grained concurrency: Allowing developers to easily launch tens of thousands or even millions of concurrent tasks without performance issues, suitable for highly concurrent applications like UI event handling or complex data processing pipelines.
  2. Provide structured concurrency: Ensuring that the lifecycle of coroutines is managed within scopes, simplifying cancellation and preventing resource leaks. This is particularly critical for UI applications where tasks need to be cancelled when UI components are destroyed.

Kotlin Coroutines achieve this through suspendable functions (suspend fun) and a compiler-based transformation. When a coroutine suspends, it doesn’t block its underlying thread; instead, its state is saved, and the thread is released to do other work. This is fundamentally different from Loom’s approach, which aims to make blocking calls non-problematic for virtual threads. Coroutines explicitly distinguish between suspending and non-suspending code, a design choice that enables features like structured concurrency but requires a different programming model than traditional blocking code.
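A minimal illustration of the suspend model (requires kotlinx-coroutines-core; the names are ours):

```kotlin
import kotlinx.coroutines.*

// A suspending function: delay() suspends the coroutine without blocking
// the underlying thread, which is free to run other coroutines meanwhile.
suspend fun fetchGreeting(name: String): String {
    delay(10)                    // stands in for a non-blocking I/O call
    return "Hello, $name"
}

fun main() = runBlocking {
    // Both coroutines share runBlocking's single thread, yet their delays
    // overlap, because suspension releases the thread between steps.
    val a = async { fetchGreeting("Kotlin") }
    val b = async { fetchGreeting("Loom") }
    println(a.await() + " / " + b.await())
}
```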

Comparing Trade-offs and Performance

Roman Elizarov presented a detailed comparison:

  • Programming Model: Loom aims for compatibility with existing blocking code. Coroutines introduce a new model with suspend functions, which is more verbose for simple blocking calls but enables powerful features like structured concurrency and explicit cancellation. Forcing blocking calls into a coroutine world requires wrappers like withContext(Dispatchers.IO), while Loom handles blocking calls transparently on virtual threads.
  • Cost of Operations:
    • Launching: Launching a coroutine is significantly cheaper than starting even a virtual thread, as coroutines are lighter weight objects.
    • Yielding/Suspending: Suspending a coroutine is generally cheaper than a virtual thread yielding (unmounting/remounting), due to compiler optimizations in Kotlin for state machine management. Roman showed benchmarks indicating lower memory allocation and faster execution for coroutine suspension compared to virtual thread context switching in preview builds of Loom.
  • Error Handling & Cancellation: Coroutines have built-in, robust support for structured cancellation. Loom’s virtual threads rely on Java’s traditional thread interruption mechanisms, which are less integrated into the programming model for cooperative cancellation.
  • Debugging: Loom’s virtual threads offer a debugging experience very similar to traditional threads, with understandable stack traces. Coroutines, due to their state-machine nature, can sometimes have more complex stack traces, though IDE support has improved this.
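The withContext(Dispatchers.IO) bridging mentioned in the comparison can be sketched as follows (requires kotlinx-coroutines-core; readConfig is a made-up example):

```kotlin
import kotlinx.coroutines.*

// Bridging blocking code into the coroutine world: the blocking call runs
// on Dispatchers.IO, a pool sized for I/O, so the caller's dispatcher
// threads stay free. Loom needs no such wrapper on virtual threads.
suspend fun readConfig(): String = withContext(Dispatchers.IO) {
    Thread.sleep(5)              // stands in for a blocking file read
    "config-loaded"
}
```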

Coexistence and Future Synergies

Roman Elizarov concluded that Loom and coroutines are designed for different primary use cases and will coexist effectively.

  • Loom excels for existing Java applications using the thread-per-request model that need to scale without major rewrites.
  • Coroutines excel for applications requiring fine-grained, highly concurrent operations, structured concurrency, and explicit cancellation management, often seen in UI applications or complex backend services with many interacting components.

He also highlighted a potential future synergy: Kotlin Coroutines could leverage Loom’s virtual threads for their Dispatchers.IO (or a similar dispatcher) when running on newer JVMs. This could allow blocking calls within coroutines (those wrapped in withContext(Dispatchers.IO)) to benefit from Loom’s efficient handling of blocking operations, potentially eliminating the need for a large, separate thread pool for I/O-bound tasks in coroutines. This would combine the benefits of both: coroutines for structured, fine-grained concurrency and Loom for efficient handling of any unavoidable blocking calls.
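A sketch of that synergy, assuming JDK 21+ and kotlinx-coroutines (the dispatcher wiring below is our illustration of the idea, not an announced API arrangement):

```kotlin
import kotlinx.coroutines.*
import java.util.concurrent.Executors

// Back a coroutine dispatcher with Loom's virtual threads: blocking calls
// inside coroutines then benefit from virtual-thread unmounting instead of
// occupying a large platform-thread pool.
val LoomDispatcher = Executors.newVirtualThreadPerTaskExecutor()
    .asCoroutineDispatcher()

fun main() = runBlocking {
    val result = withContext(LoomDispatcher) {
        Thread.sleep(5)          // blocking, but cheap on a virtual thread
        "done"
    }
    println(result)
}
```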

Hashtags: #Kotlin #Coroutines #ProjectLoom #Java #JVM #Concurrency #AsynchronousProgramming #RomanElizarov #JetBrains

PostHeaderIcon [KotlinConf’23] The Future of Kotlin is Bright and Multiplatform

KotlinConf’23 kicked off with an energizing keynote, marking a highly anticipated return to an in-person format in Amsterdam. Hosted by Hadi Hariri from JetBrains, the session brought together key figures from both JetBrains and Google, including Roman Elizarov, Svetlana Isakova, Egor Tolstoy, and Grace Kloba (VP of Engineering for Android Developer Experience at Google), to share exciting updates and future directions for the Kotlin language and its ecosystem. The conference also boasted a global reach with KotlinConf Global events held across 41 countries. For those unable to attend, the key announcements from the keynote are also available in a comprehensive blog post on the official Kotlin blog.

The keynote began by celebrating Kotlin’s impressive growth, with compelling statistics underscoring its widespread adoption, particularly in Android development where it stands as the most popular language, utilized in over 95% of the top 1000 Android applications. A significant emphasis was placed on the forthcoming Kotlin 2.0, which is centered around the revolutionary new K2 compiler. This compiler promises significant performance improvements, enhanced stability, and a robust foundation for the language’s future evolution. The K2 compiler is nearing completion and is slated for release as Kotlin 2.0. Additionally, the IntelliJ IDEA plugin will also adopt the K2 frontend, ensuring alignment with IntelliJ releases and a consistent developer experience.

The Evolution of Kotlin: K2 Compiler and Language Features

The K2 compiler was a central theme of the keynote, signifying a major milestone for Kotlin. This re-architected compiler frontend, which also powers the IDE, is designed to be faster, more stable, and to enable quicker development of new language features and tooling capabilities. Kotlin 2.0, built upon the K2 compiler, is set to bring these profound benefits to all Kotlin developers, improving both compiler performance and IDE responsiveness.

Beyond the immediate horizon of Kotlin 2.0, the speakers provided a glimpse into potential future language features that are currently under consideration. These exciting prospects included:

Prospective Language Enhancements

  • Static Extensions: This feature aims to allow static resolution of extension functions, which could potentially improve performance and code clarity.
  • Collection Literals: The introduction of a more concise syntax for creating collections, such as using square brackets for lists, with efficient underlying implementations, is on the cards.
  • Name-Based Destructuring: Offering a more flexible way to destructure objects based on property names rather than simply their positional order.
  • Context Receivers: A powerful capability designed to provide contextual information to functions in a more implicit and structured manner. This feature, however, is being approached with careful consideration to ensure it aligns well with Kotlin’s core principles and doesn’t introduce undue complexity.
  • Explicit Fields: This would provide developers with more direct control over the backing fields of properties, offering greater flexibility in certain scenarios.

The JetBrains team underscored a cautious and deliberate approach to language evolution, ensuring that any new features are meticulously designed and maintainable within the Kotlin ecosystem. Compiler plugins were also highlighted as a powerful mechanism for extending Kotlin’s capabilities without altering its core.

Kotlin in the Ecosystem: Google’s Investment and Multiplatform Growth

Grace Kloba from Google took the stage to reiterate Google’s strong and unwavering commitment to Kotlin. She shared insights into Google’s substantial investments in the Kotlin ecosystem, including the development of Kotlin Symbol Processing (KSP) and the continuous emphasis on Kotlin as the default choice for Android development. Google officially championed Kotlin for Android development as early as 2017, a pivotal moment for the language’s widespread adoption. Furthermore, the Kotlin DSL is now the default for Gradle build scripts within Android Studio, significantly enhancing the developer experience with features such as semantic syntax highlighting and advanced code completion. Google also actively contributes to the Kotlin Foundation and encourages community participation through initiatives like the Kotlin Foundation Grants Program, which specifically focuses on supporting multiplatform libraries and frameworks.

Kotlin Multiplatform (KMP) emerged as another major highlight of the keynote, emphasizing its increasing maturity and widespread adoption. The overarching vision for KMP is to empower developers to share code across a diverse range of platforms—Android, iOS, desktop, web, and server-side—while retaining the crucial ability to write platform-specific code when necessary for optimal integration and performance. The keynote celebrated the burgeoning number of multiplatform libraries and tools, including KMM Bridge, which are simplifying KMP development workflows. The future of KMP appears exceptionally promising, with ongoing efforts to further enhance the developer experience and expand its capabilities across even more platforms.

Compose Multiplatform and Emerging Technologies

The keynote also featured significant advancements in Compose Multiplatform, JetBrains’ declarative UI framework for building cross-platform user interfaces. A particularly impactful announcement was the alpha release of Compose Multiplatform for iOS. This groundbreaking development allows developers to write their UI code once in Kotlin and deploy it seamlessly across Android and iOS, and even to desktop and web targets. This opens up entirely new avenues for code sharing and promises accelerated development cycles for mobile applications, breaking down traditional platform barriers.

Finally, the JetBrains team touched upon Kotlin’s expansion into truly emerging technologies, such as WebAssembly (Wasm). JetBrains is actively developing a new compiler backend for Kotlin specifically targeting WebAssembly, coupled with its own garbage collection proposal. This ambitious effort aims to deliver high-performance Kotlin code directly within the browser environment. Experiments involving the execution of Compose applications within the browser using WebAssembly were also mentioned, hinting at a future where Kotlin could offer a unified development experience across an even broader spectrum of platforms. The keynote concluded with an enthusiastic invitation to the community to delve deeper into these subjects during the conference sessions and to continue contributing to Kotlin’s vibrant and ever-expanding ecosystem.

Hashtags: #Keynote #JetBrains #Google #K2Compiler #Kotlin2 #Multiplatform #ComposeMultiplatform #WebAssembly