
Posts Tagged ‘SoftwareArchitecture’

[DevoxxBE2025] Your Code Base as a Crime Scene

Lecturer

Scott Sosna is a seasoned technologist with diverse experience in software architecture and backend development. Currently an individual contributor at a SaaS firm, he mentors emerging engineers and writes about code quality and organizational dynamics.

Abstract

This talk likens a codebase to a crime scene, identifying organizational triggers of quality degradation such as misaligned incentives, political maneuvering, and procedural lapses. Framed around career progression, it examines strategies for self-protection, ally cultivation, and continuous improvement. Through anecdotes about common pitfalls, it weighs the implications for maintainability, team morale, and professional resilience, advocating proactive strategies for surviving dysfunctional environments.

Organizational Triggers and Code Degradation

Codebases often devolve due to systemic issues rather than individual failings, akin to unsolved mysteries where the clues point to broader culprits. Sales commitments override engineering feasibility, imposing unrealistic timelines that foster shortcuts. In one anecdote, features promised without consulting engineering led to hastily patched legacy systems, producing unmaintainable hybrids.

Politics exacerbate this: non-technical leaders dictate architectures, as when a director mandated a shift to NoSQL without offering a rationale, yielding mismatched solutions. Procedural gaps, like absent code reviews, allow unchecked merges, propagating errors. These failures stem from misaligned incentives: sales bonuses reward closed deals over sustainability, while engineers bear the long-term burden.

Implications include accrued technical debt, manifesting as fragile systems prone to outages. Analysis reveals patterns: unchecked merges correlate with higher defect rates, underscoring review necessities.

Interpersonal Dynamics and Blame Cultures

Blame cultures stifle innovation, where finger-pointing overshadows resolution. Anecdotes illustrate managers evading accountability, redirecting faults to teams. This erodes trust, prompting defensive coding over optimal solutions.

Methodologically, fostering psychological safety counters this: encouraging open post-mortems focuses on processes, not persons. In dysfunctional settings, documentation becomes armor—recording decisions shields against retroactive critiques.

Implications affect morale: persistent blame accelerates burnout, increasing turnover. Analysis suggests ally networks mitigate this, amplifying voices in adversarial environments.

Strategies for Professional Resilience

Resilience demands proactive measures: continual self-improvement via external learning equips engineers for advocacy. Cultivating allies—trusted colleagues who endorse approaches—extends influence, socializing best practices.

Experience tempers reactions: seasoned professionals pick their battles, conserving energy for impactful changes. Exit strategies, whether role shifts or departures, preserve well-being when reforms falter.

Implications foster longevity: adaptive engineers thrive, contributing sustainably. Analysis emphasizes balance—technical excellence paired with soft skills navigates organizational complexities.

Pathways to Improvement and Exit Considerations

Improvement pathways include feedback loops: rating systems in tools like conference apps aggregate insights, informing enhancements. External perspectives, like articles on engineering misconceptions, offer fresh viewpoints.

When conflicts prove irreconcilable, an exit, whether an internal move or a departure, can rejuvenate a career. Market challenges notwithstanding, skill diversification bolsters options.

In conclusion, viewing codebases as crime scenes unveils systemic flaws, empowering engineers with strategies for navigation and reform, ensuring professional fulfillment amid adversities.

Links:

  • Lecture video: https://www.youtube.com/watch?v=-iKd__Lzt7w
  • Scott Sosna on LinkedIn: https://www.linkedin.com/in/scott-sosna-839b4a1/

[DevoxxUK2024] How We Decide by Andrew Harmel-Law

Andrew Harmel-Law, a Tech Principal at Thoughtworks, delivered a profound session at DevoxxUK2024, dissecting the art and science of decision-making in software development. Drawing from his experience as a consultant and his work on a forthcoming book about software architecture, Andrew argues that decisions, both conscious and unconscious, form the backbone of software systems. His talk explores various decision-making approaches, their implications for modern, decentralized teams, and introduces the advice process as a novel framework for balancing speed, decentralization, and accountability.

The Anatomy of Decision-Making

Andrew begins by framing software architecture as the cumulative result of myriad decisions, from coding minutiae to strategic architectural choices. He introduces a refined model of decision-making comprising three stages: option making, decision taking, and decision sharing. Option making involves generating possible solutions, drawing on patterns, stakeholder needs, and past experiences. Decision taking, often the most scrutinized phase, requires selecting one option, inherently rejecting others, which Andrew describes as a “wicked problem” due to its complexity and lack of a perfect solution. Decision sharing ensures effective communication to implementers, a step frequently fumbled when architects and developers are disconnected.

Centralized Decision-Making Approaches

Andrew outlines three centralized decision-making models: autocratic, delegated, and consultative. In the autocratic approach, a single individual—often a chief architect—handles all stages, enabling rapid decisions but risking bottlenecks and poor sharing. Delegation involves the autocrat assigning decision-making to others, potentially improving outcomes by leveraging specialized expertise, though it remains centralized. The consultative approach sees the decision-maker seeking input from others but retaining ultimate authority, which can enhance decision quality but slows the process. Andrew emphasizes that while these methods can be swift, they concentrate power, limiting scalability in large organizations.

Decentralized Decision-Making Models

Transitioning to decentralized approaches, Andrew discusses consent, democratic, and consensus models. The consent model allows a single decision-maker to propose options, subject to veto by affected parties, shifting some power outward but risking gridlock. The democratic model, akin to Athenian direct democracy, involves voting on options, reducing the veto power of individuals but potentially marginalizing minority concerns. Consensus seeks universal agreement, maximizing inclusion but often stalling due to the pursuit of perfection. Andrew notes that decentralized models distribute power more widely, enhancing collaboration but sacrificing speed, particularly in consensus-driven processes.

The Advice Process: A Balanced Approach

To address the trade-offs between speed and decentralization, Andrew introduces the advice process, a framework where anyone can initiate and make decisions, provided they seek advice from affected parties and experts. Unlike permission, advice is non-binding, preserving the decision-maker’s autonomy while fostering trust and collaboration. This approach aligns with modern autonomous teams, allowing decisions to emerge organically without relying on a fixed authority. Andrew cites the Open Agile Architecture Framework, which supports this model by emphasizing documented accountability, such as through Architecture Decision Records (ADRs). The advice process minimizes unnecessary sharing, ensuring efficiency while empowering teams.
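
To make that documented accountability concrete, here is a minimal ADR sketch in the widely used format popularized by Michael Nygard; the added "Advice" section, recording who was consulted, is an illustration of how the advice process might extend the format, not part of any standard:

```
# ADR 0042: Adopt an event log for order state changes

Status: Accepted
Context: The fulfillment team needs an auditable history of order
         transitions; the current mutable status column loses
         intermediate states.
Decision: Persist every state change as an append-only event record.
Advice: Payments team (affected) and DBA group (experts) consulted;
         concerns about table growth addressed via monthly archiving.
Consequences: Current state requires a projection; storage grows with
         order volume.
```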

Navigating Power and Accountability

A recurring theme in Andrew’s talk is the distribution of power and accountability. He challenges the assumption that a single individual must always be accountable, advocating for a culture where teams can initiate decisions relevant to their context. By involving the right people at the right time, the advice process mitigates risks associated with uninformed decisions while avoiding the bottlenecks of centralized models. Andrew’s narrative underscores the need for explicit decision-making processes, encouraging organizations to cultivate trust and transparency to navigate the complexities of modern software development.

[DevoxxGR2024] Socio-Technical Smells: How Technical Problems Cause Organizational Friction by Adam Tornhill

At Devoxx Greece 2024, Adam Tornhill delivered a compelling session on socio-technical smells, emphasizing how technical issues in codebases create organizational friction. Using behavioral code analysis, which combines code metrics with team interaction data, Adam demonstrated how to identify and mitigate five common challenges: architectural coordination bottlenecks, implicit team dependencies, knowledge risks, scaling issues tied to Brooks’s Law, and the impact of bad code on morale and attrition. Through real-world examples from codebases like Facebook’s Folly, Hibernate, ASP.NET Core, and Telegram for Android, he showcased practical techniques to align technical and organizational design, reducing waste and improving team efficiency.

Overcrowded Systems and Brooks’s Law

Adam introduced the concept of overcrowded systems with a story from his past, where a product company’s subsystem, developed by 25 people over two years, faced critical deadlines. After analysis, Adam’s team recommended scrapping the code and rewriting it with just five developers, delivering in two and a half months instead of three. This success highlighted Brooks’s Law (from The Mythical Man-Month, 1975), which states that adding people to a late project increases coordination overhead, delaying delivery. A visualization showed that beyond a certain team size, communication costs outweigh productivity gains. Solutions include shrinking teams to match work modularity or redesigning systems for higher modularity to support parallel work.
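
The arithmetic behind Brooks's Law is straightforward: a team of n people has n(n-1)/2 potential communication channels, so the original 25-person team carried up to 25 × 24 / 2 = 300 channels, while the five-developer rewrite needed at most 5 × 4 / 2 = 10.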

Coordination Bottlenecks in Code

Using behavioral code analysis on git logs, Adam identified coordination bottlenecks where multiple developers edit the same files. Visualizations of Facebook’s Folly C++ library revealed a file modified by 58 developers in a year, indicating a “god class” with low cohesion. Code smells like complex if-statements, lengthy comments, and nested logic confirmed this. Similarly, Hibernate’s AbstractEntityPersister class, with over 5,000 lines and 380 methods, showed poor cohesion. By extracting methods into cohesive classes (e.g., lifecycle or proxy), developers can reduce coordination needs, creating natural team boundaries.
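
As a hedged illustration of that refactoring, the sketch below, with invented names rather than Hibernate's actual API, pulls each cohesive cluster of methods out of a god class so teams can change lifecycle and proxy logic independently:

```java
// Each cohesive cluster of methods becomes its own class.
class EntityLifecycle {
    void insert(Object entity) { /* persist a new entity */ }
    void delete(Object entity) { /* remove an entity */ }
}

class ProxyFactory {
    Object createProxy(String entityName, Object id) {
        return null; // placeholder: build a lazy-loading proxy here
    }
}

// The former god class shrinks to a thin facade over cohesive collaborators,
// giving each concern a natural team boundary.
class EntityPersister {
    private final EntityLifecycle lifecycle = new EntityLifecycle();
    private final ProxyFactory proxies = new ProxyFactory();

    void insert(Object entity) { lifecycle.insert(entity); }
    Object proxyFor(String entityName, Object id) {
        return proxies.createProxy(entityName, id);
    }
}
```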

Implicit Dependencies and Change Coupling

Adam explored inter-module dependencies using change coupling, a technique that analyzes git commit patterns to find files that co-evolve, revealing logical dependencies not visible in static code. In ASP.NET Core, integration tests showed high cohesion within a package, but an end-to-end Razor Page test coupled with four packages indicated low cohesion and high change costs. In Telegram for Android, a god class (ChatActivity) was a change coupling hub, requiring modifications for nearly every feature. Adam recommended aligning architecture with the problem domain to minimize cross-team dependencies and avoid “shotgun surgery,” where changes scatter across multiple services.
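
Change coupling is easy to approximate yourself. The sketch below is a minimal version of the idea, not CodeScene's actual algorithm: it counts how often pairs of files appear in the same commit, assuming a log file produced with `git log --pretty=format:--- --name-only`, and reports one simple coupling degree (shared commits divided by the less frequently changed file's commit count):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ChangeCoupling {

    public static void main(String[] args) throws IOException {
        // Expects a file produced by: git log --pretty=format:--- --name-only
        List<String> lines = Files.readAllLines(Path.of(args[0]));
        Map<String, Integer> pairCounts = new HashMap<>();
        Map<String, Integer> fileCounts = new HashMap<>();
        List<String> commit = new ArrayList<>();
        for (String line : lines) {
            if (line.equals("---")) {          // commit boundary
                tally(commit, pairCounts, fileCounts);
                commit.clear();
            } else if (!line.isBlank()) {
                commit.add(line);              // a file touched in this commit
            }
        }
        tally(commit, pairCounts, fileCounts); // last commit

        // Report the ten most co-changed file pairs.
        pairCounts.entrySet().stream()
                .sorted(Map.Entry.<String, Integer>comparingByValue().reversed())
                .limit(10)
                .forEach(e -> {
                    String[] f = e.getKey().split("\\|");
                    int min = Math.min(fileCounts.get(f[0]), fileCounts.get(f[1]));
                    System.out.printf("%s <-> %s: %d shared commits (%.0f%%)%n",
                            f[0], f[1], e.getValue(), 100.0 * e.getValue() / min);
                });
    }

    private static void tally(List<String> commit,
                              Map<String, Integer> pairs,
                              Map<String, Integer> files) {
        for (String f : commit) files.merge(f, 1, Integer::sum);
        for (int i = 0; i < commit.size(); i++) {
            for (int j = i + 1; j < commit.size(); j++) {
                String a = commit.get(i), b = commit.get(j);
                String key = a.compareTo(b) < 0 ? a + "|" + b : b + "|" + a;
                pairs.merge(key, 1, Integer::sum);
            }
        }
    }
}
```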

Knowledge Risks and Truck Factor

Adam discussed knowledge risks using the truck factor—the number of developers who can leave before a codebase becomes unmaintainable. In React, with 1,500 contributors, the truck factor is two, meaning 50% of knowledge is lost if two key developers leave. Vue.js has a truck factor of one, risking 70% knowledge loss. Visualizations highlighted files with low truck factors, poor code health, and high activity as onboarding risks. Adam advised prioritizing refactoring of such code to reduce key-person dependencies and ease onboarding, as unfamiliarity often masquerades as complexity.
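
A simplified truck-factor computation can be sketched as follows. Real tools use a richer degree-of-authorship model, so treat this as an approximation in which each file is assigned to the single author who touched it most:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class TruckFactor {

    // fileOwner maps each file to its dominant author, precomputed from git history.
    static int truckFactor(Map<String, String> fileOwner) {
        int totalFiles = fileOwner.size();
        Map<String, Long> filesPerAuthor = new HashMap<>();
        fileOwner.values().forEach(a -> filesPerAuthor.merge(a, 1L, Long::sum));

        // Greedily remove the author owning the most files until more than
        // half of all files have lost their dominant author.
        List<Map.Entry<String, Long>> ranked = new ArrayList<>(filesPerAuthor.entrySet());
        ranked.sort(Map.Entry.<String, Long>comparingByValue().reversed());

        long orphaned = 0;
        int removed = 0;
        for (Map.Entry<String, Long> author : ranked) {
            if (orphaned * 2 > totalFiles) break;  // already past 50 percent
            orphaned += author.getValue();
            removed++;
        }
        return removed;
    }

    public static void main(String[] args) {
        Map<String, String> owners = Map.of(
                "core/A.java", "alice", "core/B.java", "alice",
                "web/C.java", "bob", "web/D.java", "carol");
        System.out.println(truckFactor(owners)); // prints 2: alice and bob own >50%
    }
}
```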

Bad Code’s Organizational Impact

A study showed that changes to “red” (low-quality) code take up to 10 times longer than to “green” (high-quality) code, with unfamiliar developers needing 50% more time for small tasks and double for larger ones. A story about a German team perceiving an inherited codebase as a “mess” revealed that its issues stemmed from poor onboarding, not technical debt. Adam emphasized addressing root causes—training and onboarding—over premature refactoring. Bad code also lowers morale, increases attrition, and amplifies organizational problems, making socio-technical alignment critical.

Practical Takeaways

Adam’s techniques, supported by tools like CodeScene and research in his book Your Code as a Crime Scene, offer actionable insights:
  • Use behavioral code analysis: leverage git logs to detect coordination bottlenecks and change coupling.
  • Increase cohesion: refactor god classes and align architecture with domains to reduce team dependencies.
  • Mitigate knowledge risks: prioritize refactoring high-risk code with low truck factors to ease onboarding.
  • Address root causes: invest in onboarding to avoid mistaking unfamiliarity for complexity.
  • Visualize patterns: use tools to highlight socio-technical smells, enabling data-driven decisions.

[SpringIO2023] Anatomy of a Spring Boot App with Clean Architecture: Steve Pember

In a thought-provoking session at Spring I/O 2023, Steve Pember, a seasoned developer from Boston-based startup Stavvy, explored the principles of Clean Architecture and their application within Spring Boot applications. Drawing from Robert Martin’s influential book, Steve demonstrated how Clean Architecture, inspired by patterns like Ports and Adapters and Hexagonal Architecture, fosters readable, flexible, and maintainable codebases. Through a reference application and practical insights, he provided a roadmap for developers to structure Spring Boot apps that remain resilient to change and scalable for large teams.

The Case for Software Architecture

Steve began by addressing the often-misunderstood role of software architecture, challenging the stereotype of architects as mere whiteboard enthusiasts. He likened software architects to their building counterparts, who design every detail from high-level structures to minute specifics. Without proper architecture, Steve warned, systems devolve into unmaintainable “big balls of mud,” slowing development and hindering competitiveness. He highlighted the benefits of well-architected systems—separation of concerns, modularity, testability, and maintainability—arguing that these guardrails enable teams to maintain velocity over time, even if they initially slow development.

Principles of Clean Architecture

Delving into Clean Architecture, Steve outlined its core concepts: SOLID principles, component design, boundaries, and dependency rules. He clarified SOLID principles, such as single responsibility (a class should answer to a single type of user, or actor) and dependency inversion (depending on interfaces rather than concrete implementations), as foundational to clean code. Components, he explained, should be independently developable and loosely coupled, aligning with domain-driven design or microservices. The defining feature of Clean Architecture is its layered structure, where dependencies point inward to a core of business logic, encapsulated by interfaces that shield it from external details like databases or third-party services. This ensures the core remains agnostic, enhancing flexibility and testability.

Implementing Clean Architecture in Spring Boot

Steve demonstrated how to apply Clean Architecture in Spring Boot using a reference shoe store application. He proposed a multi-module structure with three components: core (housing business logic, entities, and services), details (containing database and third-party integrations), and app (where Spring configuration and integration occur). By using interfaces for repositories and gateways, the core remains independent of external systems, allowing seamless swaps, such as replacing a PostgreSQL repository with DynamoDB. Steve emphasized minimal controllers and service classes, advocating for specific, single-responsibility services like CustomerOrderQueryService. He also highlighted the importance of integration tests, using tools like Testcontainers to validate interactions with external systems.
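
A hedged sketch of that structure, using invented names rather than code from Steve’s reference application, shows the dependency rule: the core owns the repository interface, and the details module merely implements it:

```java
// --- core module: business logic, no framework or database imports ---
interface OrderRepository {                        // port owned by the core
    Order findById(String id);
    void save(Order order);
}

record Order(String id, String customerId, long totalCents) {}

// A small, single-responsibility service, in the spirit of CustomerOrderQueryService.
class CustomerOrderQueryService {
    private final OrderRepository orders;
    CustomerOrderQueryService(OrderRepository orders) { this.orders = orders; }
    Order orderFor(String orderId) { return orders.findById(orderId); }
}

// --- details module: one adapter per storage technology ---
class PostgresOrderRepository implements OrderRepository {
    public Order findById(String id) { return null; /* placeholder: JDBC lookup */ }
    public void save(Order order) { /* placeholder: JDBC insert or update */ }
}
```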

Treating Details as Deferrable

A key takeaway was Steve’s mantra that “everything not in core is a detail.” Databases, environments, input mechanisms, and even Spring itself should be treated as implementation details, deferrable until necessary. He cautioned against premature database schema design, urging developers to prioritize business logic over storage concerns. By encapsulating details behind interfaces, applications become adaptable to changes, such as switching databases or input methods (e.g., HTTP to Kafka). Steve’s demo showcased this flexibility, swapping a PostgreSQL order repository for DynamoDB with minimal code changes, proving the power of Clean Architecture’s plug-in approach.

[DevoxxBE2012] Architecture All the Way Down

Kirk Knoernschild, a software developer passionate about modular systems and author of “Java Application Architecture,” explored the pervasive nature of architecture in software. Drawing from his book on OSGi patterns, Kirk challenged traditional views, arguing that architecture permeates all levels, from high-level designs down to code.

He invoked the “turtles all the way down” anecdote to illustrate architecture’s recursive essence: decisions at every layer impact flexibility. Kirk critiqued ivory-tower approaches, advocating collaborative, iterative practices aligning business and technology.

Paradoxically, architecture must both resist change and remain adaptable. Temporal dimensions, meaning how long decisions endure, shape modularity: stable elements form foundations, while volatile ones stay flexible.

Kirk linked SOA’s service granularity to modularity, noting services as deployable units fostering reuse. He emphasized patterns ensuring evolvability without rigidity.

Demystifying Architectural Paradoxes

Kirk elaborated on architecture’s dual goals: stability against volatility. He used examples where over-design stifles agility, advocating minimal upfront planning with evolutionary refinement.

Temporal hierarchies classify decisions by change frequency: strategic (years), tactical (months), operational (days). This guides layering: stable cores support variable extensions.

Granularity and Modularity Principles

Discussing granularity, Kirk warned against extremes: monolithic systems hinder reuse, while overly fine-grained modules increase complexity. Modularity patterns and dependency injection promote loose coupling.

He showcased OSGi’s runtime modularity, enforcing boundaries via exports/imports, preventing spaghetti code.

Linking Design to Temporal Decisions

Kirk connected design principles—SOLID—to temporal aspects: single responsibility minimizes change impact; open-closed enables extension without modification.

He illustrated with code: classes serve as the smallest modules, packages as mid-level modules, and OSGi bundles as deployable units.

SOA and Modular Synergies

In SOA, services mirror modules: autonomous, composable. Kirk advocated aligning service boundaries with business domains, using modularity patterns for internal structure.

He critiqued layered architectures fostering silos, preferring vertical slices for cohesion.

Practical Implementation and Tools

Kirk recommended modular frameworks like OSGi or Jigsaw but stressed design paradigms over tools; his catalog of patterns aids in designing evolvable systems.

He concluded: multiple communication levels—classes to services—enhance understanding, urging focus on modularity for adaptive software.

Kirk’s insights reframed architecture as holistic, from code to enterprise, essential for enduring systems.

[DevoxxBE2012] When Geek Leaks

Neal Ford, a software architect at ThoughtWorks and author known for his work on enterprise applications, delivered a keynote exploring “geek leaking”—the spillover of deep expertise from one domain into another, fostering innovation. Neal, an international speaker with insights into design and delivery, tied this concept to his book “Presentation Patterns,” but expanded it to broader intellectual pursuits.

He defined “geek” as an enthusiast whose passion in one area influences others, creating synergies. Neal illustrated with examples like Richard Feynman’s interdisciplinary contributions, from physics to biology, showing how questioning fundamentals drives breakthroughs.

Neal connected this to software, urging developers to apply scientific methods—hypothesis, experimentation, analysis—to projects. He critiqued over-reliance on authority, advocating first-principles thinking to challenge assumptions.

Drawing from history, Neal discussed how paradigm shifts, like Galileo’s heliocentrism, exemplify geek leaking by integrating new evidence across fields.

In technology, he highlighted tools enabling this, such as domain-specific languages blending syntaxes for efficiency.

Origins of Intellectual Cross-Pollination

Neal traced geek leaking to Feynman’s life, where physics informed lock-picking and bongo playing, emphasizing curiosity over rote knowledge. He paralleled this to software, where patterns from one language inspire another.

He referenced Thomas Kuhn’s “Structure of Scientific Revolutions,” explaining how anomalies lead to paradigm shifts, akin to evolving tech stacks.

Applying Scientific Rigor in Development

Neal advocated embracing hypotheses in coding, testing ideas empirically rather than debating theoretically. He cited examples like performance tuning, where measurements debunk intuitions.

He introduced the “jeweler’s hammer”—gentle taps revealing flaws—urging subtle probes in designs to uncover weaknesses early.

Historical Lessons and Modern Tools

Discussing the Challenger disaster, Neal showed how Feynman’s simple ice-water demonstration exposed the O-ring flaw, stressing clarity in communication.

He critiqued poor presentations, linking to Edward Tufte’s analysis of the Columbia shuttle slides, where critical details buried in dense slides contributed to the tragedy.

Neal promoted tools like DSLs for expressive code, and polyglot programming to borrow strengths across languages.

Fostering Innovation Through Curiosity

Encouraging geek leaking, Neal suggested exploring adjacent fields, like biology informing algorithms (genetic programming).

He emphasized self-skepticism, quoting Feynman on fooling oneself, and applying scientific method to validate ideas.

Neal concluded by urging first-principles reevaluation, ensuring solutions align with core problems, not outdated assumptions.

His keynote inspired developers to let expertise leak, driving creative, robust solutions.

[DevoxxFR2012] There Is No Good Business Model: Rethinking Domain Modeling for Service-Oriented Design and Implementation

Lecturer

Grégory Weinbach has cultivated more than twenty years of experience in software development, spanning a diverse spectrum of responsibilities that range from sophisticated tooling and code generation frameworks to agile domain modeling and the practical application of Domain Driven Design principles. His professional journey reflects a deliberate pursuit of versatility, enabling him to operate fluidly across the entire software development lifecycle—from gathering nuanced user requirements to implementing robust, low-level solutions. Grégory maintains a discerning and critical perspective on prevailing methodologies, whether they manifest as Agile practices, Model-Driven Architecture, Service-Oriented Architecture, or the contemporary Software Craftsmanship movement, always prioritizing the fundamental question of “why” before addressing the mechanics of “how.” He is a frequent speaker at various technical forums, including the Paris Java User Group and all five editions of the MDDay conference, and regularly conducts in-depth seminars for enterprise clients on pragmatic modeling techniques that balance theoretical rigor with real-world applicability.

Abstract

Grégory Weinbach delivers a provocative and intellectually rigorous examination of a widely held misconception in software design: the notion that a “good” domain model must faithfully mirror the intricacies of the underlying business reality. He argues persuasively that software systems do not replicate the business world in its entirety but rather operationalize specific, value-delivering services within constrained computational contexts. Through a series of meticulously constructed case studies, comparative analyses, and conceptual diagrams, Grégory demonstrates how attempts to create comprehensive, “truthful” business models inevitably lead to bloated, inflexible codebases that become increasingly difficult to maintain and evolve. In contrast, he advocates for a service-oriented modeling approach where domain models are deliberately scoped, context-bound artifacts designed to support concrete use cases and implementation requirements. The presentation delves deeply into the critical distinction between business models and domain models, the strategic use of bounded contexts as defined in Domain Driven Design, and practical techniques for aligning technical architectures with organizational service boundaries. The implications of this paradigm shift are profound, encompassing reduced developer cognitive load, enhanced system evolvability, accelerated delivery cycles, and the cultivation of sustainable software architectures that remain resilient in the face of changing business requirements.

The Fallacy of Universal Truth: Why Business Reality Cannot Be Fully Encapsulated in Code

Grégory Weinbach commences his discourse with a bold and counterintuitive assertion: the persistent belief that effective software modeling requires a direct, isomorphic mapping between code structures and real-world business concepts represents a fundamental and pervasive error in software engineering practice. He elucidates that while business models—typically expressed through process diagrams, organizational charts, and natural language descriptions—serve to communicate and analyze human activities within an enterprise, domain models in software exist for an entirely different purpose: to enable the reliable, efficient, and maintainable execution of specific computational tasks. Attempting to construct a single, monolithic model that captures the full complexity of a business domain inevitably results in an unwieldy artifact that attempts to reconcile inherently contradictory perspectives, leading to what Weinbach terms “model schizophrenia.” He illustrates this phenomenon through a detailed examination of a retail enterprise scenario, where a unified model encompassing inventory management, customer relationship management, financial accounting, and regulatory compliance creates a labyrinthine network of interdependent entities. A modification to inventory valuation rules, for instance, might inadvertently cascade through customer segmentation logic and tax calculation modules, introducing subtle bugs and requiring extensive regression testing across unrelated functional areas.

Bounded Contexts as Cognitive and Architectural Boundaries: The Domain Driven Design Solution

Building upon Eric Evans’ foundational concepts in Domain Driven Design, Weinbach introduces bounded contexts as the primary mechanism for resolving the contradictions inherent in universal modeling approaches. A bounded context defines a specific semantic boundary within which a particular model and its associated ubiquitous language hold true without ambiguity. He argues that each bounded context deserves its own dedicated model, even when multiple contexts reference conceptually similar entities. For example, the notion of a “customer” within a marketing analytics context—characterized by behavioral attributes, segmentation tags, and lifetime value calculations—bears little structural or behavioral resemblance to the “customer” entity in a legal compliance context, which must maintain immutable audit trails, contractual obligations, and regulatory identifiers. Weinbach presents a visual representation of these distinct contexts, showing how the marketing model might employ lightweight, denormalized structures optimized for analytical queries, while the compliance model enforces strict normalization, versioning, and cryptographic signing. This deliberate separation not only prevents the contamination of precise business rules but also enables independent evolution of each model in response to domain-specific changes, dramatically reducing the blast radius of modifications.

Service-Oriented Modeling: From Abstract Nouns to Deliverable Verbs

Weinbach pivots from theoretical critique to practical prescription by advocating a service-oriented lens for domain modeling, where the primary organizing principle is not the static structure of business entities but the dynamic delivery of specific, value-adding services. He contends that traditional approaches often fall into the trap of “noun-centric” modeling, where developers attempt to create comprehensive representations of business objects loaded with every conceivable attribute and behavior, resulting in god classes that violate the single responsibility principle and become impossible to test or modify. Instead, he proposes that models should be constructed around concrete service verbs—”process payment,” “generate invoice,” “validate shipment”—with each model containing only the minimal set of concepts required to fulfill that service’s contract. Through a logistics case study, Weinbach demonstrates how modeling the “track shipment” service yields a streamlined aggregate consisting of a shipment identifier, a sequence of timestamped status events, and a current location, purposefully omitting unrelated concerns such as inventory levels or billing details. This focused approach not only produces cleaner, more maintainable code but also naturally aligns technical boundaries with organizational responsibilities, facilitating clearer communication between development teams and business stakeholders.
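
The “track shipment” aggregate he describes might be sketched like this, with illustrative names:

```java
import java.time.Instant;
import java.util.List;

// Only what "track shipment" needs: an identifier, the status history,
// and a derived current location. No inventory, no billing.
public record ShipmentTracking(String shipmentId, List<StatusEvent> history) {

    public record StatusEvent(Instant at, String status, String location) {}

    public String currentLocation() {
        return history.isEmpty() ? "unknown" : history.get(history.size() - 1).location();
    }
}
```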

The Human Factor: Reducing Cognitive Load and Enhancing Team Autonomy

One of the most compelling arguments Weinbach advances concerns the human dimension of software development. Universal models, by their very nature, require developers to maintain a vast mental map of interrelationships and invariants across the entire system, dramatically increasing cognitive load and the likelihood of errors. Service-oriented, context-bound models, conversely, allow developers to focus their attention on a well-defined subset of the domain, mastering a smaller, more coherent set of concepts and rules. This reduction in cognitive complexity translates directly into improved productivity, fewer defects, and greater job satisfaction. Moreover, when technical boundaries mirror organizational boundaries—such as when the team responsible for order fulfillment owns the order processing context—they gain true autonomy to evolve their domain model in response to business needs without coordinating with unrelated teams, accelerating delivery cycles and fostering a sense of ownership and accountability.

Practical Implementation Strategies: From Analysis to Code

Weinbach concludes his presentation with a comprehensive set of practical guidelines for implementing service-oriented modeling in real-world projects. He recommends beginning with event storming workshops that identify key business events and the services that produce or consume them, rather than starting with entity relationship diagrams. From these events, teams can derive bounded contexts and their associated models, using techniques such as context mapping to document integration patterns between contexts. He demonstrates code examples showing how anti-corruption layers translate between context-specific models when integration is required, preserving the integrity of each bounded context while enabling necessary data flow. Finally, Weinbach addresses the challenging task of communicating these principles to non-technical stakeholders, who may initially resist the idea of “duplicating” data across models. He explains that while information duplication is indeed undesirable, data duplication across different representational contexts is not only acceptable but necessary when those contexts serve fundamentally different purposes.
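
An anti-corruption layer of the kind he demonstrates can be as small as a translator that maps a neighboring context’s model into the local one; all types below are illustrative:

```java
// The fulfillment context accepts marketing's customer representation at the
// boundary and immediately translates it, so marketing concepts never leak inward.
record MarketingCustomerDto(String customerId, String segment,
                            double lifetimeValue, String shippingAddress) {}

record FulfillmentCustomer(String customerId, String shippingAddress) {}

class MarketingCustomerTranslator {
    FulfillmentCustomer toFulfillment(MarketingCustomerDto source) {
        // Keep only what fulfillment needs; drop segmentation and lifetime value.
        return new FulfillmentCustomer(source.customerId(), source.shippingAddress());
    }
}
```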
