Posts Tagged ‘SoftwareQuality’
[DevoxxUK2025] Cracking the Code Review
Paco van Beckhoven, a senior software engineer at Hexagon’s HXDR division, delivered a comprehensive session at DevoxxUK2025 on improving code reviews to enhance code quality and team collaboration. Drawing from his experience with a cloud-based platform for 3D scans, Paco outlined strategies to streamline pull requests, provide constructive feedback, and leverage automated tools. Citing the staggering $316 billion spent fixing bugs in 2013, he framed code reviews as a critical defense against defects. His practical tactics, from crafting concise pull requests to automating style checks, aim to reduce friction, foster learning, and elevate software quality, making code reviews a collaborative and productive process.
Streamlining Pull Requests
Paco stressed the importance of concise, well-documented pull requests to facilitate reviews. He advocated for descriptive titles, inspired by conventional commits, that include ticket numbers and context, such as “Fix null pointer in payment service.” Descriptions should outline the change, link related tickets or PRs, and explain design decisions to preempt reviewer questions. Templates with checklists ensure consistency, reminding developers to update documentation or verify tests. Paco also recommended self-reviewing PRs after a break to catch errors like unused code or typos, adding comments to clarify intent and reduce reviewer effort, ultimately speeding up the process.
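A pull-request template can encode these reminders directly in the repository. The checklist items below are an illustrative sketch, not the exact template from the talk:

```markdown
## What and why
<!-- Summarize the change, include the ticket number (e.g. PROJ-123), and give context -->

## Design decisions
<!-- Explain non-obvious choices up front to preempt reviewer questions -->

## Checklist
- [ ] Self-reviewed the diff after a break (unused code, typos, leftover debug output)
- [ ] Tests added or updated
- [ ] Documentation updated
- [ ] Related tickets and PRs linked
```

Platforms such as GitHub pick up a file like `.github/pull_request_template.md` automatically, so every new PR starts from the same structure.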
Effective Feedback and Collaboration
Delivering constructive feedback is key to effective code reviews, Paco noted. He advised reviewers to start with the PR’s description and existing comments to understand context before diving into code. Reviews should prioritize design and functionality over minor style issues, ensuring tests are thoroughly checked for completeness. To foster collaboration, Paco suggested using “we” instead of “you” in comments to emphasize teamwork, posing questions rather than statements, and providing specific, actionable suggestions. Highlighting positive aspects, especially for junior developers, boosts confidence and encourages participation, creating a supportive review culture.
Leveraging Automated Tools
To reduce noise from trivial issues like code style, Paco showcased tools like Error Prone, OpenRewrite, Spotless, Checkstyle, and ArchUnit. Error Prone catches common mistakes and suggests fixes, while OpenRewrite automates migrations, such as JUnit 4 to 5. Spotless enforces consistent formatting across languages like Java and SQL, and Checkstyle ensures adherence to coding standards. ArchUnit enforces architectural rules, like preventing direct controller-to-persistence calls. Paco advised introducing these tools incrementally, involving the team in rule selection, and centralizing configurations in a parent POM to maintain consistency and minimize manual review efforts.
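Centralizing such tooling in a parent POM, as Paco recommended, might look like the following sketch. The plugin versions are placeholders to pin in practice, and the formatter choice (google-java-format) is an illustrative assumption:

```xml
<!-- Parent POM fragment: one formatting and style configuration shared by all modules -->
<build>
  <plugins>
    <plugin>
      <groupId>com.diffplug.spotless</groupId>
      <artifactId>spotless-maven-plugin</artifactId>
      <version>2.44.0</version> <!-- placeholder version -->
      <configuration>
        <java>
          <googleJavaFormat/> <!-- a single formatter for every module -->
        </java>
      </configuration>
    </plugin>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-checkstyle-plugin</artifactId>
      <version>3.6.0</version> <!-- placeholder version -->
      <configuration>
        <configLocation>checkstyle.xml</configLocation> <!-- team-agreed rules -->
        <failOnViolation>true</failOnViolation>
      </configuration>
    </plugin>
  </plugins>
</build>
```

Because child modules inherit this configuration, style debates are settled once at the parent level rather than relitigated in every code review.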
Links:
[DevoxxFR2014] PIT: Assessing Test Effectiveness Through Mutation Testing
Lecturer
Alexandre Victoor is a Java developer with nearly 15 years of experience, currently serving as an architect at Société Générale. His expertise spans software development, testing practices, and integration of tools for code quality assurance.
Abstract
This article examines the limitations of traditional code coverage metrics and introduces PIT as a mutation testing tool to evaluate the true effectiveness of unit tests. It analyzes how PIT injects faults into code to verify if tests detect them, discusses integration with build tools and SonarQube, and explores performance considerations, providing a deeper understanding of enhancing test suites in software engineering.
Challenges in Traditional Testing Metrics
In software development, particularly when practicing Test-Driven Development (TDD), the emphasis is often on writing tests before implementing functionality. This approach, originally termed “test first,” underscores the critical role of tests as a specification that could theoretically allow recreation of production code if lost. However, assessing the quality of these tests remains challenging.
Common metrics like line coverage and branch coverage indicate which parts of the code are executed during testing but fail to reveal if tests adequately detect defects. For instance, consider a simple function calculating a client price by applying a margin to a market price. Achieving 100% line coverage with a test for a zero-margin scenario does not guarantee detection of errors, such as changing an addition to a subtraction, as the test might still pass.
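The margin example can be made concrete with a small sketch (the class and method names here are hypothetical, not from the talk). The zero-margin test executes every line, yet a mutant that flips the addition to a subtraction would still pass it; only a non-zero-margin case kills that mutant:

```java
// Sketch of the talk's pricing example: line coverage vs. defect detection.
public class Pricing {

    // Client price is the market price plus a margin.
    public static double clientPrice(double marketPrice, double margin) {
        return marketPrice + margin;
    }

    public static void main(String[] args) {
        // This test alone yields 100% line coverage...
        if (clientPrice(100.0, 0.0) != 100.0) throw new AssertionError();

        // ...but it would ALSO pass if '+' were mutated to '-',
        // because 100 - 0 == 100. This second case kills that mutant:
        if (clientPrice(100.0, 5.0) != 105.0) throw new AssertionError();

        System.out.println("ok");
    }
}
```

This is exactly the weakness mutation testing surfaces: both versions of the suite report full coverage, but only the second detects the injected fault.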
Complicating matters further, when introducing conditional logic or external dependencies mocked with frameworks like Mockito, 100% branch coverage can be attained without robust error detection. Default mock behaviors might always return zero, masking issues in conditional expressions. Thus, coverage metrics primarily highlight untested code but do not affirm the protective value of existing tests.
This gap necessitates advanced techniques to validate test efficacy, ensuring that modifications or bugs trigger failures. Mutation testing emerges as a solution, systematically introducing faults—termed mutants—into the code and observing if the test suite identifies them.
Implementing Mutation Testing with PIT
PIT, an open-source Java tool, operationalizes mutation testing by generating mutants and rerunning tests against each. If a test fails, the mutant is “killed,” indicating effective detection; if tests pass, the mutant “survives,” signaling a weakness in the test suite.
Integration into continuous integration pipelines is straightforward. After standard compilation and testing, PIT analyzes specified packages for code under test and corresponding test classes. It focuses on unit tests due to their speed and lack of side effects, avoiding interactions with databases or file systems that could complicate results.
PIT’s report details line-by-line coverage and mutation survival, highlighting areas where code executes but faults go undetected. Configuration options address common pitfalls: excluding logging statements to prevent false positives, as frameworks like Log4j or SLF4J calls do not impact functional outcomes; timeouts for mutants creating infinite loops; and parallel execution on multi-core machines to mitigate performance overhead from repeated test runs.
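The configuration options described above map onto the pitest Maven plugin roughly as follows. This is a sketch: the version is a placeholder and the package names are hypothetical, though the parameter names follow the plugin's documented configuration:

```xml
<!-- Sketch: wiring PIT into a Maven build -->
<plugin>
  <groupId>org.pitest</groupId>
  <artifactId>pitest-maven</artifactId>
  <version>1.17.0</version> <!-- placeholder version -->
  <configuration>
    <targetClasses><param>com.example.pricing.*</param></targetClasses>
    <targetTests><param>com.example.pricing.*Test</param></targetTests>
    <threads>4</threads> <!-- parallel execution on multi-core machines -->
    <avoidCallsTo> <!-- ignore mutations to logging calls -->
      <avoidCallsTo>org.slf4j</avoidCallsTo>
      <avoidCallsTo>org.apache.logging.log4j</avoidCallsTo>
    </avoidCallsTo>
    <timeoutConstant>4000</timeoutConstant> <!-- abort mutants that loop forever -->
  </configuration>
</plugin>
```

Running `mvn org.pitest:pitest-maven:mutationCoverage` after the regular build then produces the per-line report of killed and surviving mutants.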
Optimizations include leveraging line coverage to run only relevant tests per mutant and incremental analysis to focus on changed code since the last run. These features make PIT viable for nightly builds, though not yet for every commit in fast-paced environments.
A SonarQube plugin extends PIT’s utility by creating violations for lines covered but not protected against mutants and introducing a “mutation coverage” metric. This represents the percentage of mutants killed; for example, 70% mutation coverage implies a 70% chance of detecting introduced anomalies.
Practical Implications and Recommendations
Adopting PIT requires team maturity in testing practices; starting with mutation testing without established TDD might be premature. For teams with solid unit tests, PIT reveals subtle deficiencies, encouraging refinements that bolster code reliability.
In real projects, code developed with disciplined TDD typically shows high mutation coverage, and the commonly cited 70-80% line coverage thresholds remain reasonable complementary benchmarks. Performance tuning, such as multi-threading and incremental modes, addresses scalability concerns.
Ultimately, PIT transforms testing from a coverage-focused exercise to one emphasizing defect detection, fostering more resilient software. Its ease of use—via command line, Ant, Gradle, or Maven—democratizes advanced quality assurance, urging developers to integrate it for comprehensive test validation.
Links:
[DevoxxFR2012] Toward Sustainable Software Development – Quality, Productivity, and Longevity in Software Engineering
Frédéric Dubois brings ten years of experience in JEE architecture, agile practices, and software quality. A pragmatist at heart, he focuses on continuous improvement, knowledge sharing, and sustainable delivery over rigid processes.
This article expands Frédéric Dubois’s 2012 talk into a manifesto for sustainable software development. Rejecting the idea that quality is expensive, he argues that technical excellence drives long-term productivity. A three-year-old application should not be unmaintainable, yet many teams face escalating costs with each new feature. Dubois challenged the audience: productivity is not about delivering more features faster today, but about maintaining velocity tomorrow, next year, and five years from now.
The True Cost of Technical Debt
Quality and productivity are intimately linked, but not in the way most assume. High quality reduces defects, simplifies evolution, and prevents technical debt. Low quality creates a vicious cycle of bugs, rework, and frustration. Dubois shared a case study: a banking application delivered on time but with poor design. Two years later, a simple change required three months of work. The same team, using TDD and refactoring, built a similar system in half the time with one-tenth the defects.
Agile Practices for Long-Term Velocity
Agile practices, when applied pragmatically, enable sustainability. Short feedback loops, automated tests, and collective ownership prevent knowledge silos. Fixed-price contracts and outsourcing often incentivize cutting corners. Transparency, shared metrics, and demo-driven development align business and technical goals.
Links:
Relevant links include the original video at YouTube: Toward Sustainable Development.