Posts Tagged ‘CodeQuality’
[KotlinConf2025] Code Quality at Scale: Future Proof Your Android Codebase with KtLint and Detekt
Managing a large, multi-team codebase is a monumental task, especially when it has evolved over many years. Introducing architectural changes and maintaining consistency across autonomous teams adds another layer of complexity. Tristan Hamilton of the HubSpot team presented a strategic approach to future-proofing Android codebases by leveraging static analysis tools like KtLint and Detekt.
Tristan began by framing the challenges inherent in a codebase that has grown and changed for over eight years. He emphasized that without robust systems, technical debt can accumulate, and architectural principles can erode as different teams introduce their own patterns. The solution, he proposed, lies in integrating automated guardrails directly into the continuous integration (CI) pipeline. This proactive approach ensures a consistent level of code quality and helps prevent the introduction of new technical debt.
He then delved into the specifics of two powerful static analysis tools: KtLint and Detekt. KtLint, as a code linter, focuses on enforcing consistent formatting and style, ensuring that the codebase adheres to a single, readable standard. Detekt, on the other hand, is a more powerful static analysis tool that goes beyond simple style checks. Tristan highlighted its ability to perform advanced analysis, including type resolution, which allows it to enforce architectural patterns and detect complex code smells that a simple linter might miss. He shared practical examples of how Detekt can be used to identify and refactor anti-patterns, such as excessive class size or complex methods, thereby improving the overall health of the codebase.
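Neither tool requires much ceremony to adopt. As a rough sketch of how both typically enter a Gradle module (the plugin IDs refer to the widely used community Gradle plugins, and the versions are illustrative rather than taken from the talk), the wiring can look like this:

```kotlin
// build.gradle.kts: a minimal sketch of adding KtLint and Detekt to one module.
plugins {
    kotlin("jvm") version "2.0.0"
    id("org.jlleitschuh.gradle.ktlint") version "12.1.1"   // KtLint Gradle wrapper
    id("io.gitlab.arturbosch.detekt") version "1.23.6"     // Detekt
}

detekt {
    buildUponDefaultConfig = true                  // start from the bundled rule set
    config.setFrom("config/detekt/detekt.yml")     // layer team-specific overrides on top
    // Type-resolution-aware rules run through the detektMain/detektTest tasks.
}
```

Running the resulting ktlintCheck and detekt tasks from CI is what turns the two tools into the automated guardrails described above.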
A significant part of the talk was dedicated to a specific, and crucial, application of these tools: safely enabling R8, the code shrinker and optimizer, in a multi-module Android application. The process is notoriously difficult and can often lead to runtime crashes if not handled correctly. Tristan showcased how custom Detekt rules could be created to enforce specific architectural principles at build time. For instance, a custom rule could ensure that certain classes are not obfuscated or that specific dependencies are correctly handled, effectively creating automated safety nets. This approach allowed the HubSpot team to gain confidence in their R8 configuration and ship with greater speed and reliability.
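The exact rules HubSpot relies on were not published with the talk, but a hedged sketch of what such a guardrail can look like with Detekt’s 1.x rule API is shown below. The convention it encodes is hypothetical: any class the team reads via reflection must carry a @Keep-style annotation, so the matching R8 keep rule is never forgotten.

```kotlin
import io.gitlab.arturbosch.detekt.api.CodeSmell
import io.gitlab.arturbosch.detekt.api.Config
import io.gitlab.arturbosch.detekt.api.Debt
import io.gitlab.arturbosch.detekt.api.Entity
import io.gitlab.arturbosch.detekt.api.Issue
import io.gitlab.arturbosch.detekt.api.Rule
import io.gitlab.arturbosch.detekt.api.Severity
import org.jetbrains.kotlin.psi.KtClass

// Hypothetical rule: reflection-accessed classes must be annotated with @Keep
// so R8 cannot strip or rename them at shrink time.
class MissingKeepAnnotation(config: Config) : Rule(config) {

    override val issue = Issue(
        id = "MissingKeepAnnotation",
        severity = Severity.Defect,
        description = "Classes read via reflection must be annotated with @Keep.",
        debt = Debt.TEN_MINS,
    )

    override fun visitClass(klass: KtClass) {
        super.visitClass(klass)
        val annotations = klass.annotationEntries.mapNotNull { it.shortName?.asString() }
        // "Serializable" stands in for whatever marker the team uses for reflective access.
        if ("Serializable" in annotations && "Keep" !in annotations) {
            report(CodeSmell(issue, Entity.from(klass), "Add @Keep to ${klass.name}."))
        }
    }
}
```

Packaged behind a RuleSetProvider and run in CI, a rule like this fails the build long before a stripped or renamed class can crash at runtime.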
Tristan concluded by offering a set of key takeaways for developers and teams. He underscored the importance of moving beyond traditional static analysis and embracing tools that can codify architectural patterns. By automating the enforcement of these patterns, teams can ensure the integrity of their codebase, even as it grows and evolves. This strategy not only reduces technical debt but also prepares the codebase for future changes, including the integration of new technologies and methodologies, such as Large Language Model (LLM) generated code. It is a powerful method for building robust, maintainable, and future-ready software.
[DevoxxUK2025] Software Excellence in Large Orgs through Technical Coaching
Emily Bache, a seasoned technical coach, shared her expertise at DevoxxUK2025 on fostering software excellence in large organizations through technical coaching. Drawing on DORA research, which correlates high-quality code with faster delivery and better organizational outcomes, Emily emphasized practices like test-driven development (TDD) and refactoring to maintain code quality. She introduced technical coaching as a vital role, involving short, interactive learning hours and ensemble programming to build developer skills. Her talk, enriched with a refactoring demo and insights from Hartman’s proficiency taxonomy, offered a roadmap for organizations to reduce technical debt and enhance team performance.
The Importance of Code Quality
Emily began by referencing DORA research, which highlights capabilities like test automation, code maintainability, and small-batch development as predictors of high-performing teams. She cited a study by Adam Tornhill and Markus Borg showing that changes to low-quality code take on average 124% longer, with worst-case tasks taking up to nine times longer. Technical debt, or “cruft,” slows feature delivery and makes schedules unpredictable. Practices like TDD, refactoring, pair programming, and clean architecture are essential to maintain code quality, ensuring software remains flexible and cost-effective to modify over time.
Technical Coaching as a Solution
In large organizations, Emily noted a gap in technical leadership, with architects often focused on high-level design and teams lacking dedicated tech leads. Technical coaches bridge this gap, working part-time across teams to teach skills and foster a quality culture. Unlike code reviews, which reinforce existing knowledge, coaching proactively builds skills through hands-on training. Emily’s approach involves collaborating with architects and tech leads, aligning with organizational goals while addressing low-level design practices like TDD and refactoring, which are often neglected but critical for maintainable code.
Learning Hours for Skill Development
Emily’s learning hours are short, interactive sessions inspired by Sharon Bowman’s training techniques. Developers work in pairs on exercises, such as refactoring katas (e.g., Tennis Refactoring Kata), to practice skills like extracting methods and naming conventions. A demo showcased decomposing a complex method into readable, well-named functions, emphasizing deterministic refactoring tools over AI assistants, which excel at writing new code but struggle with refactoring. These sessions teach vocabulary for discussing code quality and provide checklists for applying skills, ensuring developers can immediately use what they learn.
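The demo itself is not reproduced here, but a small Kotlin illustration of the extract-method move (the receipt example and all names are hypothetical, not code from the session) shows the flavor of the exercise:

```kotlin
// Before: one function mixes parsing, validation, and formatting.
fun receipt(orderLine: String): String {
    val parts = orderLine.split(";")
    val name = parts[0].trim()
    val quantity = parts[1].trim().toInt()
    val unitPrice = parts[2].trim().toDouble()
    require(quantity > 0) { "quantity must be positive" }
    return "%s x%d = %.2f".format(name, quantity, quantity * unitPrice)
}

// After: each step is extracted into a small, intention-revealing function.
data class OrderLine(val name: String, val quantity: Int, val unitPrice: Double)

fun receiptRefactored(orderLine: String): String {
    val line = parse(orderLine)
    validate(line)
    return format(line)
}

private fun parse(raw: String): OrderLine {
    val (name, quantity, price) = raw.split(";").map { it.trim() }
    return OrderLine(name, quantity.toInt(), price.toDouble())
}

private fun validate(line: OrderLine) =
    require(line.quantity > 0) { "quantity must be positive" }

private fun format(line: OrderLine) =
    "%s x%d = %.2f".format(line.name, line.quantity, line.quantity * line.unitPrice)
```

Each extraction is a mechanical, behavior-preserving step, which is exactly why the demo favored deterministic IDE refactorings over AI suggestions for this kind of work.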
Ensemble Programming for Real-World Application
Ensemble programming brings teams together to work on production code under a coach’s guidance. Unlike toy exercises, these sessions tackle real, complex problems, allowing developers to apply TDD and refactoring in context. Emily highlighted the collaborative nature of ensembles, where senior developers mentor juniors, fostering team learning. By addressing production code, coaches ensure skills translate to actual work, bridging the gap between training and practice. This approach helps teams internalize techniques like small-batch development and clean design, improving code quality incrementally.
Hartman’s Proficiency Taxonomy
Emily introduced Hartman’s proficiency taxonomy to explain skill acquisition, contrasting it with Bloom’s thinking-focused taxonomy. The stages—familiarity, comprehension, conscious effort, conscious action, proficiency, and expertise—map the journey from knowing a skill exists to applying it fluently in production. Learning hours help developers move from familiarity to conscious effort with exercises and feedback, while ensembles push them toward proficiency by applying skills to real code. Coaches tailor interventions based on a team’s proficiency level, ensuring steady progress toward mastery.
Getting Started with Technical Coaching
Emily encouraged organizations to adopt technical coaching, ideally led by tech leads with management support to allocate time for mentoring. She shared resources from her Samman Coaching website, including kata descriptions and learning hour guides, available through her nonprofit society for technical coaches. For mixed-experience teams, she pairs senior developers with juniors to foster mentoring, turning diversity into a strength. Her book, Samman Technical Coaching, and monthly online meetups provide further support for aspiring coaches, aiming to spread best practices and elevate code quality across organizations.
[DevoxxBE2024] AI and Code Quality: Building a Synergy with Human Intelligence by Arthur Magne
In a session at Devoxx Belgium 2024, Arthur Magne, CPO and co-founder of Packmind, explored how AI can enhance code quality when guided by human expertise. Addressing the rapid rise of AI-generated code through tools like GitHub Copilot, Arthur highlighted the risks of amplifying poor practices and the importance of aligning AI outputs with team standards. His talk showcased Packmind’s approach to integrating AI with human-defined best practices, enabling teams to maintain high-quality, maintainable code while leveraging AI’s potential to accelerate learning and enforce standards.
The Double-Edged Sword of AI in Software Development
Arthur opened with Marc Andreessen’s 2011 phrase, “Software is eating the world,” updating it to reflect AI’s current dominance in code generation. Tools like GitHub Copilot and Codium produce vast amounts of code, but their outputs reflect the quality of their training data, which is often outdated or flawed, as noted by Veracode’s Chris Wysopal. A 2024 Uplevel study found 41% more bugs in AI-assisted code among 800 developers, and GitClear’s analysis of 2023 data showed a 100% increase in code churn since AI’s rise in 2022, indicating potential technical debt. Arthur argued that while AI boosts individual productivity (88% of developers feel faster, per GitHub), team-level benefits are limited without proper guidance, as code reviews and bug fixes offset time savings.
The Role of Human Guidance in AI-Driven Development
AI lacks context about team-specific practices, such as security, accessibility, or architecture preferences, leading to generic or suboptimal code. Arthur emphasized the need for human guidance to steer AI outputs. By explicitly defining best practices—covering frameworks like Spring, security protocols, or testing strategies—teams can ensure AI generates code aligned with their standards. However, outdated documentation, like neglected Confluence pages, can mislead AI, amplifying hidden issues. Arthur advocated for a critical human-in-the-loop approach, where developers validate AI suggestions and integrate company context to produce high-quality, maintainable code.
Packmind’s Solution: AI as a Technical Coach
Packmind, a tool developed over four years, acts as an IDE and browser extension for platforms like GitHub and GitLab, enabling teams to define and share best practices. Arthur demonstrated how Packmind identifies practices during code reviews, such as preferring flatMap over for loops with concatenation in TypeScript or Java for performance. Developers can flag negative examples (e.g., inefficient loops) or positive ones (e.g., standardized loggers) and create structured practice descriptions with AI assistance, including “what,” “why,” and “how to fix” sections. These practices are validated through team discussions or communities of practice, ensuring consensus before enforcement. Packmind’s AI suggests improvements, generates guidelines, and integrates with tools like GitHub Copilot to produce code adhering to team standards.
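Arthur’s examples used TypeScript and Java; rendered in Kotlin purely for illustration (the Post type and function names are hypothetical), the flagged and preferred forms of that practice look like this:

```kotlin
data class Post(val title: String, val tags: List<String>)

// Negative example: rebuilds the accumulated list on every iteration.
fun allTagsWithLoop(posts: List<Post>): List<String> {
    var tags = listOf<String>()
    for (post in posts) {
        tags = tags + post.tags   // copies the list grown so far on each pass
    }
    return tags
}

// Positive example: a single pass that produces one result collection.
fun allTags(posts: List<Post>): List<String> =
    posts.flatMap { it.tags }
```

Capturing both the negative and the positive form, along with the “why,” is what turns a one-off review comment into a reusable, enforceable practice.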
Enforcing Standards and Upskilling Teams
Once validated, practices are enforced via Packmind’s IDE extension, which flags violations and suggests fixes tailored to team conventions, akin to a customized SonarQube. For example, a team preferring TestNG over JUnit can configure AI to generate compliant test cases. Arthur highlighted Packmind’s role in upskilling, allowing junior developers to propose practices and learn from team feedback. AI-driven practice reviews, conducted biweekly or monthly, foster collaboration and spread expertise across organizations. Studies cited by Arthur suggest that teams using AI without understanding underlying practices struggle to maintain code post-project, underscoring the need for AI to augment, not replace, human expertise.
Balancing Productivity and Long-Term Quality
Quoting Kent Beck, Arthur noted that AI automates 80% of repetitive tasks, freeing developers to focus on high-value expertise. Packmind’s process ensures AI amplifies team knowledge rather than generic patterns, reducing code review overhead and technical debt. By pushing validated practices to tools like GitHub Copilot, teams achieve consistent, high-quality code. Arthur concluded by stressing the importance of explicit standards and critical evaluation to harness AI’s potential, inviting attendees to discuss further at Packmind’s booth. His talk underscored a synergy where AI accelerates development while human intelligence ensures lasting quality.
[PHPForumParis2021] Automatic Type Inference in PHP – Damien Seguy
Damien Seguy, a veteran of the PHP community and a key figure in AFUP’s early days, delivered an insightful presentation at Forum PHP 2021 on the transformative potential of automatic type inference in PHP. With extensive experience in code quality, Damien explored how static analysis tools can enhance PHP’s type system, reducing errors and improving maintainability. His talk, grounded in practical examples, offered a compelling case for leveraging automation to strengthen PHP applications. This post examines four key themes: the evolution of PHP typing, benefits of static analysis, transforming arrays into objects, and practical implementation strategies.
The Evolution of PHP Typing
Damien Seguy opened by tracing the journey of PHP’s type system, from its loosely typed origins to the robust features introduced in recent versions. He highlighted how PHP’s gradual typing, with features like scalar type hints and return types, has improved code reliability. Damien emphasized that automatic type inference, supported by tools like PHPStan and Psalm, takes this further by detecting types without explicit declarations. This evolution, informed by his work at Exakat, enables developers to write safer, more predictable code.
Benefits of Static Analysis
A core focus of Damien’s talk was the power of static analysis in catching errors early. By analyzing code before execution, tools like PHPStan can identify type mismatches, undefined variables, and other issues that might only surface at runtime. Damien shared examples where static analysis prevented bugs in complex projects, enhancing code quality without requiring extensive manual type annotations. This approach, he argued, reduces debugging time and fosters confidence in large-scale PHP applications, aligning with modern development practices.
Transforming Arrays into Objects
Damien advocated for converting arrays into objects to enhance semantic clarity and type safety. He explained that arrays, often used for lists, lack the structural guarantees of objects. By defining classes with named properties, developers can leverage static analysis to catch errors like misspelled keys early. Drawing from his experience, Damien demonstrated how this transformation adds value to codebases, making them more maintainable and less prone to runtime errors, particularly in projects with complex data structures.
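Damien’s examples were in PHP; as a language-neutral sketch of the same transformation (shown here in Kotlin with a hypothetical Order type), replacing a keyed array with a typed object turns silent runtime mistakes into errors a static analyzer or compiler can report:

```kotlin
// An order represented as a loosely keyed map: nothing guards the keys or types.
fun shippingLabelFromMap(order: Map<String, Any?>): String =
    // A misspelled key ("adress") or a missing entry only fails at runtime.
    "${order["customer"]}, ${order["address"]}"

// The same data as a typed object with named properties.
data class Order(val customer: String, val address: String)

fun shippingLabel(order: Order): String =
    // Misspelled properties or wrong types are now caught before execution.
    "${order.customer}, ${order.address}"
```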
Practical Implementation Strategies
Concluding his presentation, Damien shared practical strategies for integrating type inference into PHP workflows. He recommended starting with simple static analysis checks and gradually adopting stricter rules as teams gain confidence. By using tools like Exakat, developers can automate type inference across legacy and new codebases. Damien’s insights emphasized incremental adoption, ensuring that teams can improve code quality without overwhelming refactoring efforts, making type inference accessible to all PHP developers.