
Posts Tagged ‘SoftwareDevelopment’

[NDCMelbourne2025] TDD & DDD from the Ground Up – Chris Simon

Chris Simon, a seasoned developer and co-organizer of Domain-Driven Design Australia, presents a compelling live-coding session at NDC Melbourne 2025, demonstrating how Test-Driven Development (TDD) and Domain-Driven Design (DDD) can create maintainable, scalable software. Through a university enrollment system example, Chris illustrates how TDD’s iterative red-green-refactor cycle and DDD’s focus on ubiquitous language and domain modeling can evolve a simple CRUD application into a robust solution. His approach highlights the power of combining these methodologies to adapt to changing requirements without compromising code quality.

Starting with TDD: The Red-Green-Refactor Cycle

Chris kicks off by introducing TDD’s core phases: writing a failing test (red), making it pass with minimal code (green), and refactoring to improve structure. Using a .NET-based university enrollment system, he begins with a basic test to register a student, ensuring a created status response. Each step is deliberately small, balancing test and implementation to minimize risk. This disciplined approach, Chris explains, builds a safety net of tests, allowing confident code evolution as complexity increases.
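The first green step of such a cycle can be sketched as follows. This is a Go translation for brevity (the talk itself uses .NET), and names like RegisterStudent are illustrative stand-ins, not Chris Simon's actual code:

```go
package main

import (
	"errors"
	"fmt"
)

// Student is the domain entity the first test drives out.
type Student struct {
	Name string
}

// RegisterStudent is just enough implementation to turn the first
// failing test green; richer rules arrive in later red-green cycles.
func RegisterStudent(name string) (*Student, error) {
	if name == "" {
		return nil, errors.New("a student must have a name")
	}
	return &Student{Name: name}, nil
}

func main() {
	// The behavior the "red" test would assert: registering a
	// student succeeds and returns the created student.
	s, err := RegisterStudent("Ada")
	if err != nil {
		panic(err)
	}
	fmt.Println(s.Name) // prints Ada
}
```

The point is the deliberately small step: the function does no more than the current test demands.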

Incorporating DDD: Ubiquitous Language and Domain Logic

As the system grows, Chris introduces DDD principles, particularly the concept of ubiquitous language. He renames methods to reflect business intent, such as “register” instead of “create” for students, and uses a static factory method to encapsulate logic. His IDE extension, Contextive, further supports this by providing domain term definitions across languages, ensuring consistency. By moving validation logic, like checking room availability, into domain models, Chris ensures business rules are encapsulated, reducing controller complexity and enhancing maintainability.
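The encapsulation idea can be sketched like this, again in Go rather than the talk's .NET, with hypothetical names: a factory that carries the business verb and unexported fields that keep every state change inside the domain model:

```go
package main

import (
	"errors"
	"fmt"
)

// Course keeps its state unexported, so callers cannot bypass the
// business rules; all changes go through domain methods.
type Course struct {
	name     string
	capacity int
	enrolled int
}

// OpenCourse is a factory method named after the business action,
// mirroring the talk's preference for "register" over "create".
func OpenCourse(name string, capacity int) *Course {
	return &Course{name: name, capacity: capacity}
}

// Enroll owns the availability rule, so controllers never
// re-implement it.
func (c *Course) Enroll() error {
	if c.enrolled >= c.capacity {
		return errors.New("no places left in " + c.name)
	}
	c.enrolled++
	return nil
}

func main() {
	course := OpenCourse("Algebra", 1)
	fmt.Println(course.Enroll()) // succeeds: <nil>
	fmt.Println(course.Enroll()) // fails: no places left in Algebra
}
```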

Handling Complexity: Refactoring for Scalability

As requirements evolve, such as preventing course over-enrollment, Chris encounters a race condition in the initial implementation. He demonstrates how TDD’s tests catch this issue, allowing safe refactoring. Through event storming, he rethinks the domain model, delaying room allocation until course popularity is known. This shift, informed by domain expert collaboration, optimizes resource utilization and eliminates unnecessary constraints, showcasing DDD’s ability to align code with business needs.

Balancing Testing Strategies

Chris explores the trade-offs between API-level and unit-level testing. While API tests protect the public contract, unit tests for complex scheduling algorithms allow faster, more efficient test setup. By testing a scheduler that matches courses to rooms based on enrollment counts, he ensures robust logic without overcomplicating API tests. This strategic balance, he argues, maintains refactorability while addressing intricate business rules, a key takeaway for developers navigating complex domains.
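A scheduler like the one described is a good candidate for unit-level testing precisely because it can be a pure function. The sketch below is hypothetical (a simple greedy pairing of the most popular courses with the largest rooms), not the talk's algorithm:

```go
package main

import (
	"fmt"
	"sort"
)

type Course struct {
	Name     string
	Enrolled int
}

type Room struct {
	Name     string
	Capacity int
}

// Schedule pairs courses with rooms, most-enrolled to largest first.
// As a pure function it can be unit-tested in microseconds, with no
// API setup at all.
func Schedule(courses []Course, rooms []Room) map[string]string {
	cs := append([]Course(nil), courses...)
	rs := append([]Room(nil), rooms...)
	sort.Slice(cs, func(i, j int) bool { return cs[i].Enrolled > cs[j].Enrolled })
	sort.Slice(rs, func(i, j int) bool { return rs[i].Capacity > rs[j].Capacity })
	plan := map[string]string{}
	for i := 0; i < len(cs) && i < len(rs); i++ {
		plan[cs[i].Name] = rs[i].Name
	}
	return plan
}

func main() {
	plan := Schedule(
		[]Course{{"Algebra", 120}, {"Poetry", 15}},
		[]Room{{"Lab", 30}, {"Hall", 200}},
	)
	fmt.Println(plan["Algebra"]) // prints Hall
	fmt.Println(plan["Poetry"])  // prints Lab
}
```

The API tests, by contrast, would only assert that some valid schedule is produced, leaving the pairing logic to fast unit tests.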

Adapting to Change with Confidence

The session culminates in a significant refactor, removing the over-enrollment check after realizing it’s applied at the wrong stage. Chris’s tests provide the confidence to make this change, ensuring no unintended regressions. By making domain model setters private, he confirms the system adheres to DDD principles, encapsulating business logic effectively. This adaptability, driven by TDD and DDD, underscores the value of iterative development and domain collaboration in building resilient software.

[KotlinConf2025] Blueprints for Scale: What AWS Learned Building a Massive Multiplatform Project

Ian Botsford and Matis Lazdins from Amazon Web Services (AWS) shared their experiences and insights from developing the AWS SDK for Kotlin, a truly massive multiplatform project. This session provided a practical blueprint for managing the complexities of a large-scale Kotlin Multiplatform (KMP) project, offering firsthand lessons on design, development, and scaling. The speakers detailed the strategies they adopted to maintain sanity while dealing with a codebase that spans over 300 services and targets eight distinct platforms.

Architectural and Development Strategies

Botsford and Lazdins began by breaking down the project’s immense scale, explaining that it is distributed across four different repositories and consists of nearly 500 Gradle projects. They emphasized the importance of a well-defined project structure and the strategic use of Gradle to manage dependencies and build processes. A key lesson they shared was the necessity of designing for Kotlin Multiplatform from the very beginning, rather than attempting to retrofit it later. They also highlighted the critical role of maintaining backward compatibility, a practice that is essential for a project with such a large user base. The speakers explained the various design trade-offs they had to make and how these decisions ultimately shaped the project’s architecture and long-term sustainability.

The Maintainer Experience

The discussion moved beyond technical architecture to focus on the human element of maintaining such a vast project. Lazdins spoke about the importance of automating repetitive and mundane processes to free up maintainers’ time for more complex tasks. He detailed the implementation of broad checks to catch issues before they are merged, a proactive approach that prevents regressions and ensures code quality. These checks are designed to be highly informative while remaining overridable, giving developers the autonomy to make informed decisions. The presenters stressed that a positive maintainer experience is crucial for the health of any large open-source project, as it encourages contributions and fosters a collaborative environment.

Lessons for the Community

In their concluding remarks, Botsford and Lazdins offered a summary of the most valuable lessons they learned. They reiterated the importance of owning your own dependencies, structuring projects for scale, and designing for KMP from the outset. By sharing their experiences with a real-world, large-scale project, they provided the Kotlin community with actionable insights that can be applied to projects of any size. The session served as a powerful testament to the capabilities of Kotlin Multiplatform and the importance of a thoughtful, strategic approach to software development at scale.

[KotlinConf2025] Build your Kotlin and Android apps with Buck2

Sergei Rybalkin, a software engineer from Meta, introduced the audience to Buck2, an open-source build system that is now equipped to support Kotlin and Android applications. In a concise and informative presentation, Rybalkin detailed how Buck2, a successor to the original Buck system, addresses the need for a fast and scalable build solution, particularly for large-scale projects like those at Meta. The talk centered on Buck2’s core principles and its capabilities for accelerating development cycles and ensuring consistent, reliable builds.

The Power of a Scalable Build System

Rybalkin began by outlining the motivation behind Buck2. He explained that as projects grow in size and complexity, traditional build systems often struggle to keep up, leading to slow incremental iterations and hindered developer productivity. Buck2 was designed to overcome these challenges by focusing on key areas such as parallelism and a highly opinionated approach to optimization. The talk revealed that Buck2’s architecture allows it to execute build tasks with remarkable efficiency, a crucial factor for Meta’s own internal development processes. Rybalkin also touched on advanced capabilities like Remote Execution and the Build Tools API, which further enhance the system’s performance and flexibility.

A Blueprint for Optimization

The presentation also shed light on Buck2’s philosophy of “opinionated optimization.” Rybalkin clarified that this means the system takes a firm stance on how things should be done to achieve the best results. For example, if a particular feature or integration does not perform well, the Buck2 team may choose to drop support for it entirely, rather than provide a subpar experience. This selective approach ensures that the build system remains fast and reliable, even as it handles a multitude of dependencies and complex configurations. Rybalkin underscored that the open-source version of Buck2 is almost identical to the internal solution used at Meta, offering the community the same powerful tools and optimizations that drive one of the world’s largest development teams. He concluded by encouraging the audience to try Buck2 and provide feedback, a nod to the collaborative nature of open-source development.

[NDCMelbourne2025] DIY Usability Testing When You Have No Time and No Budget – Bekah Rice

In an insightful presentation at NDC Melbourne 2025, Bekah Rice, a UX consultant from True Matter, delivers a practical guide to conducting usability testing without the luxury of extensive time or financial resources. Drawing from her experience at a South Carolina-based UX consultancy, Bekah outlines an eight-step process to gather meaningful qualitative data, enabling developers and designers to refine digital products effectively. Her approach, demonstrated through a live usability test, underscores the importance of observing real user interactions to uncover design flaws and enhance user experience, even with minimal resources.

Step One: Preparing the Test Material

Bekah begins by emphasizing the need for a testable artifact, which need not be a fully developed product. A simple sketch, paper prototype, or a digital mockup created in tools like Figma can suffice. The key is to ensure the prototype provides enough context to mimic real-world usage. For instance, Bekah shares her plan to test a 12-year-old hospital website, currently undergoing a redesign, to identify usability issues. This approach allows teams to evaluate user interactions early, even before development begins, ensuring the product aligns with user needs from the outset.

Crafting Effective Tasks

The second step involves designing realistic tasks that reflect the user’s typical interactions with the product. Bekah illustrates this with a scenario for the hospital website, where users are asked to make an appointment with a doctor for regular care after moving to a new town. By phrasing tasks as open-ended questions and avoiding UI-specific terminology, she ensures users are not inadvertently guided toward specific actions. This method, she explains, reveals genuine user behavior, including potential failures, which are critical for identifying design shortcomings.

Recruiting the Right Participants

Finding suitable testers is crucial, and Bekah advocates for a pragmatic approach when resources are scarce. Instead of recruiting strangers, she suggests leveraging colleagues from unrelated departments, friends, or family members who are unfamiliar with the product. For the hospital website test, she selects Adam, a 39-year-old artist and warehouse tech, as a representative user. Bekah warns against testing with stakeholders or developers, as their biases can skew results. Offering small incentives, like coffee or lunch, can encourage participation, making the process feasible even on a tight budget.

Setting Up and Conducting the Test

Creating a comfortable testing environment and using minimal equipment are central to Bekah’s DIY approach. A quiet space, such as a conference room or a coffee shop, can replicate the user’s typical context. During the live demo, Bekah uses Adam’s iPhone to conduct the test, highlighting that borrowed devices can work if they allow observation. She also stresses the importance of a note-taking “sidekick” to record patterns and insights, which proved invaluable when Adam repeatedly missed critical UI elements, revealing design flaws like unclear button labels and missing appointment options.

Analyzing and Reporting Findings

The final step involves translating observations into actionable insights. Bekah emphasizes documenting both successes and failures, as seen when Adam struggled with the hospital website’s navigation but eventually found a phone number as a fallback. Immediate reporting to the team ensures fresh insights drive improvements, such as adding a map to the interface or renaming buttons for clarity. By presenting findings in simple bullet lists or visually appealing reports, teams can effectively communicate changes to stakeholders, ensuring the product evolves to meet user needs.

[KotlinConf2025] LangChain4j with Quarkus

In a collaboration between Red Hat and Twilio, Max Rydahl Andersen and Konstantin Pavlov presented an illuminating session on the powerful combination of LangChain4j and Quarkus for building AI-driven applications with Kotlin. The talk addressed the burgeoning demand for integrating artificial intelligence into modern software and the common difficulties developers encounter, such as complex setups and performance bottlenecks. By merging Kotlin’s expressive power, Quarkus’s rapid runtime, and LangChain4j’s AI capabilities, the presenters demonstrated a streamlined and effective solution for creating cutting-edge applications.

A Synergistic Approach to AI Integration

The core of the session focused on the seamless synergy between the three technologies. Andersen and Pavlov detailed how Kotlin’s idiomatic features simplify the development of AI workflows. They presented a compelling case for using LangChain4j, a versatile framework for building language model-based applications, within the Quarkus ecosystem. Quarkus, with its fast startup times and low memory footprint, proved to be an ideal runtime for these resource-intensive applications. The presenters walked through practical code samples, illustrating how to set up the environment, manage dependencies, and orchestrate AI tools efficiently. They emphasized that this integrated approach significantly reduces the friction typically associated with AI development, allowing engineers to focus on business logic rather than infrastructural challenges.

Enhancing Performance and Productivity

The talk also addressed the critical aspect of performance. The presenters demonstrated how the combination of LangChain4j and Quarkus enables the creation of high-performing, AI-powered applications. They discussed the importance of leveraging Quarkus’s native compilation capabilities, which can lead to dramatic improvements in startup time and resource utilization. Additionally, they touched on the ongoing work to optimize the Kotlin compiler’s interaction with the Quarkus build system. Andersen noted that while the current process is efficient, there are continuous efforts to further reduce build times and enhance developer productivity. This commitment to performance underscores the value of this tech stack for developers who need to build scalable and responsive AI solutions.

The Path Forward

Looking ahead, Andersen and Pavlov outlined the future roadmap for LangChain4j and its integration with Quarkus. They highlighted upcoming features, such as the native asynchronous API, which will provide enhanced support for Kotlin coroutines. While acknowledging the importance of coroutines for certain use cases, they also reminded the audience that traditional blocking and virtual threads remain perfectly viable and often preferred for a majority of applications. They also extended an open invitation to the community to contribute to the project, emphasizing that the development of these tools is a collaborative effort. The session concluded with a powerful message: this technology stack is not just about building applications; it’s about empowering developers to confidently tackle the next generation of AI-driven projects.

[KotlinConf2025] Closing Panel

The concluding panel of KotlinConf2025 offered a vibrant and candid discussion, serving as the capstone to the conference. The diverse group of experts from JetBrains, Netflix, and Google engaged in a wide-ranging dialogue, reflecting on the state of Kotlin, its evolution, and the path forward. They provided a unique blend of perspectives, from language design and backend development to mobile application architecture and developer experience. The conversation was an unfiltered look into the challenges and opportunities facing the Kotlin community, touching on everything from compiler performance to the future of multiplatform development.

The Language and its Future

A central theme of the discussion was the ongoing development of the Kotlin language itself. The panel members, including Simon from the K2 compiler team and Michael from language design, shared insights into the rigorous process of evolving Kotlin. They addressed questions about new language features and the careful balance between adding functionality and maintaining simplicity. A notable point of contention and discussion was the topic of coroutines and the broader asynchronous programming landscape. The experts debated the best practices for managing concurrency and how Kotlin’s native features are designed to simplify these complex tasks. There was a consensus that while new features are exciting, the primary focus remains on stability, performance, and enhancing the developer experience.

The State of Multiplatform Development

The conversation naturally shifted to Kotlin Multiplatform (KMP), which has become a cornerstone of the Kotlin ecosystem. The panelists explored the challenges and successes of building applications that run seamlessly across different platforms. Representatives from companies like Netflix and AWS, who are using KMP for large-scale projects, shared their experiences. They discussed the complexities of managing shared codebases, ensuring consistent performance, and maintaining a robust build system. The experts emphasized that while KMP offers immense benefits in terms of code reuse, it also requires a thoughtful approach to architecture and toolchain management. The panel concluded that KMP is a powerful tool, but its success depends on careful planning and a deep understanding of the underlying platforms.

Community and Ecosystem

Beyond the technical discussions, the panel also reflected on the health and vibrancy of the Kotlin community. A developer advocate, SA, and others spoke about the importance of fostering an inclusive environment and the role of the community in shaping the language. They highlighted the value of feedback from developers and the critical role it plays in guiding the direction of the language and its tooling. The discussion also touched upon the broader ecosystem, including the various libraries and frameworks that have emerged to support Kotlin development. The panel’s enthusiasm for the community was palpable, and they expressed optimism about Kotlin’s continued growth and adoption in the years to come.

[DevoxxFR2025] Side Roads: Info Prof – An Experience Report

After years navigating the corporate landscape, particularly within the often-demanding environment of SSII (Sociétés de Services en Ingénierie Informatique, the French term for IT consulting companies), many professionals reach a point of questioning their career path. Seeking a different kind of fulfillment or a better alignment with personal values, some choose to take a “side road” – a deliberate shift in their professional journey. Jerome BATON shared his personal experience taking such a path: transitioning from the world of IT services to becoming an IT professor. His presentation offered a candid look at this career change, exploring the motivations behind it, the realities of teaching, and why the next generation of IT professionals needs the experience and passion of those who have worked in the field.

The Turning Point: Seeking a Different Path

Jerome began by describing the feeling of reaching a turning point in his career within the SSII environment. While such roles offer valuable experience and exposure to diverse projects, they can also involve long hours, constant pressure, and a focus on deliverables that sometimes overshadow personal growth or the opportunity to share knowledge more broadly. He articulated the motivations that led him to consider a change, such as a desire for a better work-life balance, a need for a stronger sense of purpose, or a calling to contribute to the development of future talent. The idea of taking a “side road” suggests a deviation from a conventional linear career progression, a conscious choice to explore an alternative path that aligns better with personal aspirations.

The Reality of Being an Info Prof

Becoming an IT professor involves a different set of challenges and rewards compared to working in the industry. Jerome shared his experience in this new role, discussing the realities of teaching computer science or related subjects. This included aspects like curriculum development, preparing and delivering lectures and practical sessions, evaluating student progress, and engaging with the academic environment. He touched upon the satisfaction of sharing his industry knowledge and experience with students, guiding their learning, and witnessing their growth. However, he might also have discussed the administrative aspects, the need to stay updated with rapidly evolving technologies to keep course content relevant, and the unique dynamics of working within an educational institution.

Why the Next Generation Needs Your Experience

A central message of Jerome’s presentation was the crucial role that experienced IT professionals can play in shaping the next generation. He argued that students benefit immensely from learning from individuals who have real-world experience, who understand the practical challenges of software development, and who can share insights beyond theoretical concepts. Industry professionals can provide valuable context, mentorship, and guidance, preparing students not just with technical skills but also with an understanding of industry best practices, teamwork, and problem-solving in real-world scenarios. Jerome’s own transition exemplified this, demonstrating how years of experience in IT services could be directly applied and leveraged in an educational setting to benefit aspiring developers. The talk served as a call to action, encouraging other experienced professionals to consider teaching or mentoring as a way to give back to the community and influence the future of the IT industry.

[NDCMelbourne2025] Preventing Emu Wars with Domain-Driven Design – Lee Dunkley

In an engaging and humorous presentation at NDC Melbourne 2025, Lee Dunkley explores how Domain-Driven Design (DDD) can prevent software projects from spiraling into chaotic, unmaintainable codebases—likening such failures to Australia’s infamous Emu War. By drawing parallels between historical missteps and common software development pitfalls, Lee illustrates how DDD practices, such as event storming and ubiquitous language, can steer teams toward solving the right problems, thereby enhancing maintainability and extensibility.

The Emu War: A Cautionary Tale for Coders

Lee begins with a whimsical analogy, recounting Australia’s 1932 Emu War, where soldiers armed with machine guns failed to curb an overwhelming emu population devastating crops. The emus’ agility and sheer numbers outmatched the military’s efforts, leading to a humbling defeat. Lee cleverly translates this to software development, where throwing endless code at a problem—akin to deploying infinite soldiers—often results in a complex, bug-ridden system. This sets the stage for his argument: without proper problem definition, developers risk creating their own unmanageable “emu wars.”

He illustrates this with a hypothetical coding scenario where a client demands a solution to “kill all the pesky emus.” Developers might churn out classes and methods, only to face mounting complexity and bugs, such as emus “upgrading to T-Rexes.” The lesson? Simply writing more code doesn’t address the root issue, much like the Emu War’s flawed strategy failed to protect farmers’ crops.

Modeling Smells in E-Commerce

Transitioning to a more practical domain, Lee applies the Emu War analogy to an e-commerce platform tasked with implementing an “update order” feature. Initially, the solution seems straightforward: create an endpoint to modify orders. However, as Lee demonstrates, this leads to bugs like customers receiving too many items, being undercharged, or getting empty boxes. These issues arise because the vague “update order” requirement invites a cascade of edge cases and race conditions.

By examining the system’s event timeline, Lee highlights how an “order updated” event disrupts critical processes like payment capture and stock reservation. This modeling smell—where a generic action undermines system integrity—mirrors the Emu War’s misaligned objectives. The real problem, Lee argues, lies in failing to define the business’s true needs, resulting in a codebase that’s hard to test and extend.

Refining with Domain-Driven Design

Here, Lee introduces DDD as a remedy, emphasizing techniques like event storming and the five whys to uncover the true problem space. Revisiting the Emu War, he applies the five whys to reveal that the goal wasn’t to kill emus but to secure employment for returning soldiers. Similarly, in the e-commerce case, the “update order” request masks specific needs: ensuring shoppers receive only desired items, adding forgotten items, and canceling orders.

By reframing these needs, Lee proposes targeted solutions, such as a “supplementary order” endpoint for adding items and a time-bound “order received” event to allow cancellations without disrupting the system. These solutions, rooted in DDD’s ubiquitous language, reduce complexity by aligning the code with business intent, avoiding the pitfalls of generic actions like “update.”

Simplicity Through Abstraction

Lee challenges the notion that complex problems demand complex solutions. Through DDD, he shows how elevating the level of abstraction—by focusing on precise business goals—eliminates unnecessary complexity. In the e-commerce example, replacing the problematic “update order” endpoint with simpler, purpose-specific endpoints demonstrates how DDD fosters maintainable, extensible code.

He acknowledges the challenges of implementing such changes in live systems, where breaking changes can be daunting. However, Lee argues that aligning solutions with the problem space is worth the effort, as it prevents the codebase from becoming a “Frankenstein’s monster” burdened by accidental complexity.

Conclusion: Avoiding Your Own Emu War

Lee wraps up by urging developers to wield their coding “superpower” wisely. Instead of burying problems under an avalanche of code, he advocates for DDD practices to ensure solutions reflect the business’s true needs. By employing event storming, refining ubiquitous language, and questioning requirements with the five whys, developers can avoid fighting futile, unmaintainable battles.

This talk serves as a compelling reminder that thoughtful problem definition is the cornerstone of effective software development. Lee’s blend of humor and practical insights makes a strong case for embracing DDD to create robust, adaptable systems.

[DevoxxFR2025] Dagger Modules: A Swiss Army Knife for Modern CI/CD Pipelines

Continuous Integration and Continuous Delivery (CI/CD) pipelines are the backbone of modern software development, automating the process of building, testing, and deploying applications. However, as these pipelines grow in complexity, they often become difficult to maintain, debug, and port across different execution platforms, frequently relying on verbose and platform-specific YAML configurations. Jean-Christophe Sirot, in his presentation, introduced Dagger as a revolutionary approach to CI/CD, allowing pipelines to be written as code, executable locally, testable, and portable. He explored Dagger Functions and Dagger Modules as key concepts for creating and sharing reusable, language-agnostic components for CI/CD workflows, positioning Dagger as a versatile “Swiss Army knife” for modernizing these critical pipelines.

The Pain Points of Traditional CI/CD

Jean-Christophe began by outlining the common frustrations associated with traditional CI/CD pipelines. Relying heavily on YAML or other declarative formats for defining pipelines can lead to complex, repetitive, and hard-to-read configurations, especially for intricate workflows. Debugging failures within these pipelines is often challenging, requiring pushing changes to a remote CI server and waiting for the pipeline to run. Furthermore, pipelines written for one CI platform (like GitHub Actions or GitLab CI) are often not easily transferable to another, creating vendor lock-in and hindering flexibility. This dependency on specific platforms and the difficulty in managing complex workflows manually are significant pain points for development and DevOps teams.

Dagger: CI/CD as Code

Dagger offers a fundamentally different approach by treating CI/CD pipelines as code. It allows developers to write their pipeline logic using familiar programming languages (like Go, Python, Java, or TypeScript) instead of platform-specific configuration languages. This brings the benefits of software development practices – such as code reusability, modularity, testing, and versioning – to CI/CD. Jean-Christophe explained that Dagger executes these pipelines using containers, ensuring consistency and portability across different environments. The Dagger engine runs the pipeline logic, orchestrates the necessary container operations, and manages dependencies. This allows developers to run and debug their CI/CD pipelines locally using the same code that will execute on the remote CI platform, significantly accelerating the debugging cycle.

Dagger Functions and Modules

Key to Dagger’s power are Dagger Functions and Dagger Modules. Jean-Christophe described Dagger Functions as the basic building blocks of a pipeline – functions written in a programming language that perform specific CI/CD tasks (e.g., building a Docker image, running tests, deploying an application). These functions interact with the Dagger engine to perform container operations. Dagger Modules are collections of related Dagger Functions that can be packaged and shared. Modules allow teams to create reusable components for common CI/CD patterns or specific technologies, effectively creating a library of CI/CD capabilities. For example, a team could create a “Java Build Module” containing functions for compiling Java code, running Maven or Gradle tasks, and building JAR or WAR files. These modules can be easily imported and used in different projects, promoting standardization and reducing duplication across an organization’s CI/CD workflows. Jean-Christophe demonstrated how to create and use Dagger Modules, illustrating their potential for building composable and maintainable pipelines. He highlighted that Dagger’s language independence means that modules can be written in one language (e.g., Python) and used in a pipeline defined in another (e.g., Java), fostering collaboration between teams with different language preferences.

The Benefits: Composable, Maintainable, Portable

By adopting Dagger, teams can create CI/CD pipelines that are:
Composable: Pipelines can be built by combining smaller, reusable Dagger Modules and Functions.
Maintainable: Pipelines written as code are easier to read, understand, and refactor using standard development tools and practices.
Portable: Pipelines can run on any platform that supports Dagger and containers, eliminating vendor lock-in.
Testable: Individual Dagger Functions and modules can be unit tested, and the entire pipeline can be run and debugged locally.

Jean-Christophe’s presentation positioned Dagger as a versatile tool that modernizes CI/CD by bringing the best practices of software development to pipeline automation. The ability to write pipelines in code, leverage reusable modules, and execute locally makes Dagger a powerful “Swiss Army knife” for developers and DevOps engineers seeking more efficient, reliable, and maintainable CI/CD workflows.

Links:

PostHeaderIcon [DevoxxFR2025] Go Without Frills: When the Standard Library Suffices

Go, the programming language designed by Google, has gained significant popularity for its simplicity, efficiency, and strong support for concurrent programming. A core philosophy of Go is its minimalist design and emphasis on a robust standard library, encouraging developers to “do a lot with a little.” Nathan Castelein, in his presentation, championed this philosophy, demonstrating how a significant portion of modern applications can be built effectively using only Go’s standard library, without resorting to numerous third-party dependencies. He explored various native packages and compared their functionalities to well-known third-party alternatives, showcasing why and how returning to the fundamentals can lead to simpler, more maintainable, and often equally performant Go applications.

The Go Standard Library: A Powerful Foundation

Nathan highlighted the richness and capability of Go’s standard library. Unlike some languages where the standard library is minimal, Go provides a comprehensive set of packages covering a wide range of functionalities, from networking and HTTP to encoding/decoding, cryptography, and testing. He emphasized that these standard packages are well-designed, thoroughly tested, and actively maintained, making them a reliable choice for building production-ready applications. Focusing on the standard library reduces the number of external dependencies, which simplifies project management, minimizes potential security vulnerabilities introduced by third-party code, and avoids the complexities of managing version conflicts. It also encourages developers to gain a deeper understanding of the language’s built-in capabilities.

Comparing Standard Packages to Third-Party Libraries

The core of Nathan’s talk involved comparing functionalities provided by standard Go packages with those offered by popular third-party libraries. He showcased examples in areas such as:
Web Development: Demonstrating how to build web servers and handle HTTP requests using the net/http package, contrasting it with frameworks like Gin, Echo, or Fiber, and showing that for many common web tasks the standard library provides sufficient features.
Logging: Illustrating the capabilities of the log/slog package (introduced in Go 1.21) for structured logging, comparing it to libraries like Logrus or Zerolog and highlighting how log/slog provides modern logging features natively.
Testing: Exploring the testing package for writing unit and integration tests, noting that for many common assertion scenarios it can be used effectively without resorting to assertion libraries like Testify.
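As a minimal sketch of the web-development point (the route and JSON payload are invented for illustration), a JSON endpoint built and exercised entirely with net/http and httptest, with no framework involved:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/http/httptest"
)

// newMux wires a small JSON endpoint using only net/http,
// the kind of route often handed to Gin, Echo, or Fiber.
func newMux() *http.ServeMux {
	mux := http.NewServeMux()
	mux.HandleFunc("/hello", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(map[string]string{"message": "hello"})
	})
	return mux
}

// fetchHello exercises the handler in-process via httptest,
// so no real port or external client is needed.
func fetchHello() (int, string) {
	srv := httptest.NewServer(newMux())
	defer srv.Close()

	resp, err := http.Get(srv.URL + "/hello")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var body map[string]string
	if err := json.NewDecoder(resp.Body).Decode(&body); err != nil {
		panic(err)
	}
	return resp.StatusCode, body["message"]
}

func main() {
	status, msg := fetchHello()
	fmt.Println(status, msg) // 200 hello
}
```

Routing, JSON encoding, and in-process testing are all covered by standard packages here; a framework only becomes interesting once middleware chains or path-parameter ergonomics start to matter.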

The comparison aimed to show that while third-party libraries often provide convenience or specialized features, the standard library has evolved to incorporate many commonly needed functionalities, often in a simpler and more idiomatic Go way.
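The log/slog package is a good instance of that evolution. A small sketch (field names invented for illustration) showing structured JSON logging and level filtering with the standard library alone:

```go
package main

import (
	"bytes"
	"fmt"
	"log/slog"
	"strings"
)

// logAtInfo emits one structured record through log/slog
// (standard since Go 1.21) and returns the rendered JSON
// plus the number of records actually written.
func logAtInfo() (string, int) {
	var buf bytes.Buffer
	logger := slog.New(slog.NewJSONHandler(&buf, &slog.HandlerOptions{
		Level: slog.LevelInfo, // records below Info are dropped
	}))

	// Structured attributes, the role often filled by Logrus or Zerolog.
	logger.Info("user registered",
		slog.String("user", "alice"),
		slog.Int("attempt", 1),
	)
	// This one is below the configured level, so it is suppressed.
	logger.Debug("suppressed at info level")

	out := buf.String()
	return out, strings.Count(out, "\n")
}

func main() {
	out, n := logAtInfo()
	fmt.Print(out)
	fmt.Println("records:", n)
}
```

Swapping `NewJSONHandler` for `NewTextHandler` switches the output format without touching any call sites, which is the kind of flexibility that previously motivated a third-party dependency.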

The Benefits of a Minimalist Approach

Nathan articulated the benefits of embracing a “Go without frills” approach. Using the standard library more extensively leads to:
Reduced Complexity: Fewer dependencies mean a simpler project structure and fewer moving parts to understand and manage.
Improved Maintainability: Code relying on standard libraries is often easier to maintain over time, as the dependencies are stable and well-documented.
Enhanced Performance: Standard library implementations are often highly optimized and integrated with the Go runtime.
Faster Compilation: Fewer dependencies can lead to quicker build times.
Smaller Binaries: Avoiding large third-party libraries can result in smaller executable files.

He acknowledged that there are still valid use cases for third-party libraries, especially for highly specialized tasks or when a library provides significant productivity gains. However, the key takeaway was to evaluate the necessity of adding a dependency and to leverage the powerful standard library whenever it suffices. The talk encouraged developers to revisit the fundamentals and appreciate the elegance and capability of Go’s built-in tools for building robust and efficient applications.

Links: