Posts Tagged ‘SoftwareArchitecture’
[DevoxxUK2024] How We Decide by Andrew Harmel-Law
Andrew Harmel-Law, a Tech Principal at Thoughtworks, delivered a profound session at DevoxxUK2024, dissecting the art and science of decision-making in software development. Drawing on his experience as a consultant and his work on a forthcoming book about software architecture, Andrew argues that decisions, both conscious and unconscious, form the backbone of software systems. His talk explores various decision-making approaches and their implications for modern, decentralized teams, and introduces the advice process as a novel framework for balancing speed, decentralization, and accountability.
The Anatomy of Decision-Making
Andrew begins by framing software architecture as the cumulative result of myriad decisions, from coding minutiae to strategic architectural choices. He introduces a refined model of decision-making comprising three stages: option making, decision taking, and decision sharing. Option making involves generating possible solutions, drawing on patterns, stakeholder needs, and past experiences. Decision taking, often the most scrutinized phase, requires selecting one option, inherently rejecting others, which Andrew describes as a “wicked problem” due to its complexity and lack of a perfect solution. Decision sharing ensures effective communication to implementers, a step frequently fumbled when architects and developers are disconnected.
Centralized Decision-Making Approaches
Andrew outlines three centralized decision-making models: autocratic, delegated, and consultative. In the autocratic approach, a single individual—often a chief architect—handles all stages, enabling rapid decisions but risking bottlenecks and poor sharing. Delegation involves the autocrat assigning decision-making to others, potentially improving outcomes by leveraging specialized expertise, though it remains centralized. The consultative approach sees the decision-maker seeking input from others but retaining ultimate authority, which can enhance decision quality but slows the process. Andrew emphasizes that while these methods can be swift, they concentrate power, limiting scalability in large organizations.
Decentralized Decision-Making Models
Transitioning to decentralized approaches, Andrew discusses consent, democratic, and consensus models. The consent model allows a single decision-maker to propose options, subject to veto by affected parties, shifting some power outward but risking gridlock. The democratic model, akin to Athenian direct democracy, involves voting on options, reducing the veto power of individuals but potentially marginalizing minority concerns. Consensus seeks universal agreement, maximizing inclusion but often stalling due to the pursuit of perfection. Andrew notes that decentralized models distribute power more widely, enhancing collaboration but sacrificing speed, particularly in consensus-driven processes.
The Advice Process: A Balanced Approach
To address the trade-offs between speed and decentralization, Andrew introduces the advice process, a framework where anyone can initiate and make decisions, provided they seek advice from affected parties and experts. Unlike permission, advice is non-binding, preserving the decision-maker’s autonomy while fostering trust and collaboration. This approach aligns with modern autonomous teams, allowing decisions to emerge organically without relying on a fixed authority. Andrew cites the Open Agile Architecture Framework, which supports this model by emphasizing documented accountability, such as through Architecture Decision Records (ADRs). The advice process minimizes unnecessary sharing, ensuring efficiency while empowering teams.
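Andrew's mention of ADRs suggests what that documented accountability can look like in practice. Below is a minimal sketch in the widely used Nygard-style format; the decision, date, and team names are hypothetical, invented purely for illustration:

```markdown
# ADR 0007: Use PostgreSQL for the orders service

## Status
Accepted (2024-05-12)

## Context
The orders team needs transactional consistency across order lines.
Following the advice process, advice was sought from the payments team
(an affected party) and the platform database specialists (the experts).

## Decision
Store orders in PostgreSQL rather than the shared document store.

## Consequences
The orders team owns the schema migrations; the payments team's
reporting queries move to a read replica.
```

Recording who gave advice, and what they said, is what lets the decision-maker keep their autonomy while remaining visibly accountable.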
Navigating Power and Accountability
A recurring theme in Andrew’s talk is the distribution of power and accountability. He challenges the assumption that a single individual must always be accountable, advocating for a culture where teams can initiate decisions relevant to their context. By involving the right people at the right time, the advice process mitigates risks associated with uninformed decisions while avoiding the bottlenecks of centralized models. Andrew’s narrative underscores the need for explicit decision-making processes, encouraging organizations to cultivate trust and transparency to navigate the complexities of modern software development.
[DevoxxGR2024] Socio-Technical Smells: How Technical Problems Cause Organizational Friction by Adam Tornhill
At Devoxx Greece 2024, Adam Tornhill delivered a compelling session on socio-technical smells, emphasizing how technical issues in codebases create organizational friction. Using behavioral code analysis, which combines code metrics with team interaction data, Adam demonstrated how to identify and mitigate five common challenges: architectural coordination bottlenecks, implicit team dependencies, knowledge risks, scaling issues tied to Brooks’s Law, and the impact of bad code on morale and attrition. Through real-world examples from codebases like Facebook’s Folly, Hibernate, ASP.NET Core, and Telegram for Android, he showcased practical techniques to align technical and organizational design, reducing waste and improving team efficiency.
Overcrowded Systems and Brooks’s Law
Adam introduced the concept of overcrowded systems with a story from his past, where a product company's subsystem, developed by 25 people over two years, faced critical deadlines. After analyzing the codebase, Adam's team recommended scrapping the code and rewriting it with just five developers, who delivered in two and a half months rather than the projected three. This success illustrated Brooks's Law (from The Mythical Man-Month, 1975): adding people to a late project increases coordination overhead and delays delivery further. A visualization showed that beyond a certain team size, communication costs outweigh productivity gains. Solutions include shrinking teams to match the modularity of the work, or redesigning systems for higher modularity to support parallel work.
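The arithmetic behind that visualization is worth making explicit: Brooks attributes the overhead to pairwise communication channels, which grow quadratically with headcount.

$$\text{channels}(n) = \frac{n(n-1)}{2}, \qquad \text{channels}(5) = 10, \qquad \text{channels}(25) = 300$$

By this measure, the original 25-person team carried thirty times the coordination surface of the five-person rewrite team.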
Coordination Bottlenecks in Code
Using behavioral code analysis on git logs, Adam identified coordination bottlenecks where multiple developers edit the same files. Visualizations of Facebook’s Folly C++ library revealed a file modified by 58 developers in a year, indicating a “god class” with low cohesion. Code smells like complex if-statements, lengthy comments, and nested logic confirmed this. Similarly, Hibernate’s AbstractEntityPersister class, with over 5,000 lines and 380 methods, showed poor cohesion. By extracting methods into cohesive classes (e.g., lifecycle or proxy), developers can reduce coordination needs, creating natural team boundaries.
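As a rough illustration of how such an analysis works, the sketch below counts distinct authors per file straight from the git log. It is a minimal sketch, not CodeScene's implementation: it assumes a local clone, Python's standard library, and file paths without unusual characters.

```python
import subprocess
from collections import defaultdict

def authors_per_file(repo: str, since: str = "1 year ago"):
    """Count distinct commit authors per file from the git log."""
    log = subprocess.run(
        ["git", "-C", repo, "log", f"--since={since}",
         "--format=--%an", "--name-only"],
        capture_output=True, text=True, check=True,
    ).stdout
    authors = defaultdict(set)
    author = None
    for line in log.splitlines():
        if line.startswith("--"):
            author = line[2:]          # commit author marker
        elif line.strip() and author:
            authors[line].add(author)  # file changed in that commit
    return sorted(authors.items(), key=lambda kv: -len(kv[1]))

# Files touched by the most developers are coordination-bottleneck candidates.
for path, devs in authors_per_file(".")[:10]:
    print(f"{len(devs):3} developers  {path}")
```

A file near the top of this list that also shows low cohesion, like the Folly example, is a strong refactoring candidate.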
Implicit Dependencies and Change Coupling
Adam explored inter-module dependencies using change coupling, a technique that analyzes git commit patterns to find files that co-evolve, revealing logical dependencies invisible in static code. In ASP.NET Core, integration tests showed high cohesion within a package, but an end-to-end Razor Page test coupled to four separate packages, indicating low cohesion and high change costs. In Telegram for Android, a god class (ChatActivity) was a change-coupling hub, requiring modifications for nearly every feature. Adam recommended aligning architecture with the problem domain to minimize cross-team dependencies and avoid "shotgun surgery," where a single change scatters across multiple services.
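Change coupling can be mined from the same git history: count how often two files change in the same commit, relative to how often the less-changed of the two changes at all. A simplified sketch under the same assumptions as before (CodeScene's actual algorithm is more refined):

```python
import subprocess
from collections import Counter
from itertools import combinations

def change_coupling(repo: str, min_shared: int = 5):
    """Yield file pairs that tend to change in the same commits."""
    log = subprocess.run(
        ["git", "-C", repo, "log", "--format=@@", "--name-only"],
        capture_output=True, text=True, check=True,
    ).stdout
    revisions, shared = Counter(), Counter()
    for entry in log.split("@@"):
        files = sorted(set(entry.split()))  # assumes paths without spaces
        revisions.update(files)
        shared.update(combinations(files, 2))
    for (a, b), n in shared.most_common():
        if n < min_shared:
            break
        degree = n / min(revisions[a], revisions[b])
        yield a, b, n, round(100 * degree)

for a, b, n, pct in change_coupling("."):
    print(f"{pct:3}% coupling ({n} shared commits)  {a} <-> {b}")
```

Pairs with high coupling but no static dependency between them are exactly the implicit, logical dependencies Adam describes.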
Knowledge Risks and Truck Factor
Adam discussed knowledge risks using the truck factor—the number of developers who can leave before a codebase becomes unmaintainable. In React, with 1,500 contributors, the truck factor is two, meaning 50% of knowledge is lost if two key developers leave. Vue.js has a truck factor of one, risking 70% knowledge loss. Visualizations highlighted files with low truck factors, poor code health, and high activity as onboarding risks. Adam advised prioritizing refactoring of such code to reduce key-person dependencies and ease onboarding, as unfamiliarity often masquerades as complexity.
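A crude truck-factor estimate falls out of the same data: attribute each file to its main author, then count how many of the biggest "owners" must leave before a given share of files is orphaned. The sketch below is deliberately naive; real analyses weight contributions more carefully.

```python
import subprocess
from collections import Counter, defaultdict

def truck_factor(repo: str, threshold: float = 0.5) -> int:
    """Smallest number of main authors whose loss orphans `threshold` of files."""
    log = subprocess.run(
        ["git", "-C", repo, "log", "--format=--%an", "--name-only"],
        capture_output=True, text=True, check=True,
    ).stdout
    touches = defaultdict(Counter)
    author = None
    for line in log.splitlines():
        if line.startswith("--"):
            author = line[2:]
        elif line.strip() and author:
            touches[line][author] += 1
    # Main author = whoever changed the file most often.
    owners = Counter(c.most_common(1)[0][0] for c in touches.values())
    lost = factor = 0
    for _, owned in owners.most_common():
        if lost / len(touches) >= threshold:
            break
        lost += owned
        factor += 1
    return factor

print("estimated truck factor:", truck_factor("."))
```

Cross-referencing the result with code health and activity, as Adam does, turns a bare number into an onboarding-risk map.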
Bad Code’s Organizational Impact
A study showed that changes to “red” (low-quality) code take up to 10 times longer than to “green” (high-quality) code, with unfamiliar developers needing 50% more time for small tasks and double for larger ones. A story about a German team perceiving an inherited codebase as a “mess” revealed that its issues stemmed from poor onboarding, not technical debt. Adam emphasized addressing root causes—training and onboarding—over premature refactoring. Bad code also lowers morale, increases attrition, and amplifies organizational problems, making socio-technical alignment critical.
Practical Takeaways
Adam’s techniques, supported by tools like CodeScene and research in his book Your Code as a Crime Scene, offer actionable insights:
– Use Behavioral Code Analysis: Leverage git logs to detect coordination bottlenecks and change coupling.
– Increase Cohesion: Refactor god classes and align architecture with domains to reduce team dependencies.
– Mitigate Knowledge Risks: Prioritize refactoring high-risk code with low truck factors to ease onboarding.
– Address Root Causes: Invest in onboarding to avoid mistaking unfamiliarity for complexity.
– Visualize Patterns: Use tools to highlight socio-technical smells, enabling data-driven decisions.
[DevoxxBE2012] Architecture All the Way Down
Kirk Knoernschild, a software developer passionate about modular systems and author of "Java Application Architecture," explored the pervasive nature of architecture in software. Drawing from his book on OSGi patterns, Kirk challenged traditional views, arguing that architecture permeates every level, from high-level designs down to code.
He invoked the "turtles all the way down" anecdote to illustrate architecture's recursive essence: decisions at every layer impact flexibility. Kirk critiqued ivory-tower approaches, advocating collaborative, iterative practices that align business and technology.
Architecture, paradoxically, must resist change yet remain adaptable. The temporal dimension of decisions, how long they are expected to hold, affects modularity: stable elements form foundations, while volatile ones remain flexible.
Kirk linked SOA's service granularity to modularity, noting that services act as deployable units that foster reuse. He emphasized patterns that ensure evolvability without rigidity.
Demystifying Architectural Paradoxes
Kirk elaborated on architecture's dual goals of stability and adaptability in the face of volatility. He cited examples where over-design stifled agility, advocating minimal upfront planning followed by evolutionary refinement.
Temporal hierarchies classify decisions by change frequency: strategic (years), tactical (months), operational (days). This guides layering: stable cores support variable extensions.
Granularity and Modularity Principles
Discussing granularity, Kirk warned against both extremes: monolithic systems hinder reuse, while overly fine-grained ones increase complexity. Patterns such as dependency injection promote loose coupling.
He showcased OSGi's runtime modularity, which enforces module boundaries via package exports and imports, preventing spaghetti dependencies.
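In OSGi those boundaries live in each bundle's manifest: only packages listed under Export-Package are visible to other bundles, and everything else stays internal. A minimal illustrative manifest (the bundle and package names are hypothetical):

```
Bundle-ManifestVersion: 2
Bundle-SymbolicName: com.example.billing
Bundle-Version: 1.2.0
Export-Package: com.example.billing.api;version="1.2.0"
Import-Package: org.osgi.framework;version="[1.8,2.0)"
```

A hypothetical com.example.billing.internal package, absent from Export-Package, simply cannot be referenced by another bundle; that runtime enforcement is what distinguishes OSGi boundaries from ordinary classpath visibility.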
Linking Design to Temporal Decisions
Kirk connected the SOLID design principles to temporal aspects: single responsibility minimizes the impact of a change; open-closed enables extension without modification.
He illustrated this with code: classes act as fine-grained modules, packages as mid-level modules, and OSGi bundles as deployable units.
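A generic sketch of the open-closed end of that mapping, written in Python for brevity (not Kirk's own example): new behaviour arrives as a new class, so stable code is extended without being edited.

```python
import json
from abc import ABC, abstractmethod

class ExportFormat(ABC):
    """Stable abstraction: closed for modification."""
    @abstractmethod
    def render(self, record: dict) -> str: ...

class CsvExport(ExportFormat):
    def render(self, record: dict) -> str:
        return ",".join(str(v) for v in record.values())

class JsonExport(ExportFormat):
    """Open for extension: JSON support is a new class; CsvExport
    and export() below are untouched by its arrival."""
    def render(self, record: dict) -> str:
        return json.dumps(record)

def export(records: list, fmt: ExportFormat) -> str:
    return "\n".join(fmt.render(r) for r in records)

print(export([{"id": 1, "total": 9.5}], JsonExport()))
```

In Kirk's temporal hierarchy, the abstract class is the stable, long-lived element; the concrete formats are the volatile ones.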
SOA and Modular Synergies
In SOA, services mirror modules: they are autonomous and composable. Kirk advocated aligning service boundaries with business domains, using modularity patterns for their internal structure.
He critiqued layered architectures that foster silos, preferring vertical slices for cohesion.
Practical Implementation and Tools
Kirk recommended modular frameworks like OSGi or Jigsaw, but stressed design paradigms over tools. His patterns catalog aids in designing evolvable systems.
He concluded that communicating design at multiple levels, from classes to services, enhances understanding, and urged a focus on modularity as the key to adaptive software.
Kirk’s insights reframed architecture as holistic, from code to enterprise, essential for enduring systems.
[DevoxxBE2012] When Geek Leaks
Neal Ford, a software architect at ThoughtWorks and author known for his work on enterprise applications, delivered a keynote exploring “geek leaking”—the spillover of deep expertise from one domain into another, fostering innovation. Neal, an international speaker with insights into design and delivery, tied this concept to his book “Presentation Patterns,” but expanded it to broader intellectual pursuits.
He defined “geek” as an enthusiast whose passion in one area influences others, creating synergies. Neal illustrated with examples like Richard Feynman’s interdisciplinary contributions, from physics to biology, showing how questioning fundamentals drives breakthroughs.
Neal connected this to software, urging developers to apply scientific methods—hypothesis, experimentation, analysis—to projects. He critiqued over-reliance on authority, advocating first-principles thinking to challenge assumptions.
Drawing from history, Neal discussed how paradigm shifts, like Galileo's defense of heliocentrism, exemplify geek leaking by integrating new evidence across fields.
In technology, he highlighted tools that enable this, such as domain-specific languages that blend syntaxes for efficiency.
Origins of Intellectual Cross-Pollination
Neal traced geek leaking to Feynman’s life, where physics informed lock-picking and bongo playing, emphasizing curiosity over rote knowledge. He paralleled this to software, where patterns from one language inspire another.
He referenced Thomas Kuhn's "The Structure of Scientific Revolutions," explaining how anomalies lead to paradigm shifts, akin to evolving tech stacks.
Applying Scientific Rigor in Development
Neal advocated embracing hypotheses in coding, testing ideas empirically rather than debating theoretically. He cited examples like performance tuning, where measurements debunk intuitions.
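A small sketch of that empirical habit using Python's standard timeit module; the competing implementations are illustrative stand-ins for any performance hypothesis:

```python
import timeit

# Hypothesis: building a string by repeated += is slower than str.join.
setup = "words = ['x'] * 10_000"
candidates = {
    "concat": "s = ''\nfor w in words: s += w",
    "join": "s = ''.join(words)",
}

for name, stmt in candidates.items():
    # Take the best of several runs to reduce noise from the machine.
    best = min(timeit.repeat(stmt, setup=setup, number=100, repeat=5))
    print(f"{name}: {best:.4f}s")
```

Whatever the numbers say on a given machine settles the argument; that is the point of measuring rather than debating.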
He introduced the “jeweler’s hammer”—gentle taps revealing flaws—urging subtle probes in designs to uncover weaknesses early.
Historical Lessons and Modern Tools
Discussing the Challenger disaster, Neal showed how Feynman's simple demonstration, dunking an O-ring in ice water, exposed the engineering flaw, stressing clarity in communication.
He critiqued poor presentations, linking them to Edward Tufte's analysis of the Columbia shuttle slides, where critical details buried in dense bullet points contributed to the tragedy.
Neal promoted tools like DSLs for expressive code, and polyglot programming to borrow strengths across languages.
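As a generic illustration of that expressiveness (not an example from the talk), an internal DSL built on method chaining can read close to the domain language it models, sketched here in Python:

```python
class Order:
    """A tiny internal DSL for building orders fluently."""

    def __init__(self):
        self.lines, self.discount = [], 0.0

    def add(self, item: str, qty: int, price: float) -> "Order":
        self.lines.append((item, qty, price))
        return self  # returning self is what makes the chaining fluent

    def with_discount(self, pct: float) -> "Order":
        self.discount = pct
        return self

    def total(self) -> float:
        subtotal = sum(q * p for _, q, p in self.lines)
        return subtotal * (1 - self.discount / 100)

order = Order().add("widget", 3, 9.99).add("gadget", 1, 24.50).with_discount(10)
print(f"{order.total():.2f}")
```

Borrowing such idioms across languages, fluent builders from Java, pipelines from functional languages, is geek leaking in miniature.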
Fostering Innovation Through Curiosity
Encouraging geek leaking, Neal suggested exploring adjacent fields, like biology informing algorithms (genetic programming).
He emphasized self-skepticism, quoting Feynman's maxim that the first principle is not to fool yourself, and urged applying the scientific method to validate ideas.
Neal concluded by urging first-principles reevaluation, ensuring solutions align with core problems, not outdated assumptions.
His keynote inspired developers to let expertise leak, driving creative, robust solutions.