Archive for the ‘en-US’ Category
Renovate/Dependabot: How to Take Control of Dependency Updates
At Devoxx France 2024, held in April at the Palais des Congrès in Paris, Jean-Philippe Baconnais and Lise Quesnel, consultants at Zenika, presented a 30-minute talk titled Renovate/Dependabot, ou comment reprendre le contrôle sur la mise à jour de ses dépendances. The session explored how tools like Dependabot and Renovate automate dependency updates, reducing the tedious and error-prone manual process. Through a demo and lessons from open-source and client projects, they shared practical tips for implementing Renovate, highlighting its benefits and pitfalls. 🚀
The Pain of Dependency Updates
The talk opened with a relatable skit: Lise, working on a side project (a simple Angular 6 app showcasing women in tech), admitted to neglecting updates due to the effort involved. Jean-Philippe emphasized that this is a common issue across projects, especially in microservice architectures with numerous components. Updating dependencies is critical for:
- Security: Applying patches to reduce exploitable vulnerabilities.
- Features: Accessing new functionalities.
- Bug Fixes: Benefiting from the latest corrections.
- Performance: Leveraging optimizations.
- Attractiveness: Using modern tech stacks (e.g., Node 20 vs. Node 8) to appeal to developers.
However, the process is tedious, repetitive, and complex due to transitive dependencies (e.g., a median of 683 for NPM projects) and cascading updates, where one update triggers others.
Automating with Dependabot and Renovate
Dependabot (acquired by GitHub) and Renovate (from Mend) address this by scanning project files (e.g., package.json, Maven POM, Dockerfiles) and opening pull requests (PRs) or merge requests (MRs) for available updates. These tools:
- Check registries (NPM, Maven Central, Docker Hub) for new versions.
- Provide visibility into dependency status.
- Save time by automating version checks, especially in microservice setups.
- Enhance reactivity, critical for applying security patches quickly.
Setting Up the Tools
Dependabot: Configured via a dependabot.yml file, specifying ecosystems (e.g., NPM), directories, and update schedules (e.g., weekly). On GitHub, it integrates natively via project settings. GitLab users can use a similar approach.
```yaml
# dependabot.yml
version: 2
updates:
  - package-ecosystem: "npm"
    directory: "/"
    schedule:
      interval: "weekly"
```
Renovate: Configured via a renovate.json file, extending default presets. It supports GitHub and GitLab via apps or CI/CD pipelines (e.g., GitLab CI with a Docker image). For self-hosted setups, Renovate can run as a Docker container or Kubernetes CronJob.
```json
// renovate.json
{
  "extends": ["config:recommended"]
}
```
In their demo, Jean-Philippe and Lise showcased Renovate on a GitLab project, using a .gitlab-ci.yml pipeline to run Renovate on a schedule, creating MRs for updates like rxjs (from 6.3.2 to 6.6.7).
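A scheduled pipeline along these lines reproduces that setup; this is a minimal sketch, not the exact pipeline from the demo, assuming the official renovate/renovate image and a RENOVATE_TOKEN access token stored as a masked CI/CD variable:

```yaml
# .gitlab-ci.yml — scheduled Renovate job (sketch)
renovate:
  image: renovate/renovate:latest
  rules:
    - if: $CI_PIPELINE_SOURCE == "schedule"   # only run from a pipeline schedule
  script:
    # RENOVATE_TOKEN is picked up from the environment;
    # $CI_PROJECT_PATH limits the run to this project
    - renovate --platform=gitlab $CI_PROJECT_PATH
```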
Customizing Renovate
Renovate’s strength lies in its flexibility through presets and custom configurations:
- Presets: Predefined rules (e.g., npm:unpublishSafe waits 3 days before proposing updates). Presets can extend others, forming a hierarchy (e.g., config:recommended extends base presets).
- Custom Presets: Organizations can define reusable configs in a dedicated repository (e.g., renovate-config) and apply them across projects.
```json
// renovate-config/default.json
{
  "extends": ["config:recommended", ":npm"]
}
```
- Grouping Updates: Combine related updates (e.g., all ESLint packages) using packageRules or presets like group:recommendedLinters to reduce PR noise.
{ "packageRules": [ { "matchPackagePatterns": ["^eslint"], "groupName": "eslint packages" } ] }
- Dependency Dashboard: An issue tracking open, rate-limited, or ignored MRs, activated via the dependencyDashboard field or preset, as sketched below.
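Enabling the dashboard is a one-line addition; a minimal sketch:

```json
{
  "extends": ["config:recommended"],
  "dependencyDashboard": true
}
```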
Going Further: Automerge and Beyond
To streamline updates, Renovate supports automerge, automatically merging MRs if the pipeline passes, relying on robust tests. Options include:
- automerge: true for all updates.
- automergeType: "pr" or automergeStrategy for finer control over how merges happen.
- Presets like :automergePatch for patch updates only.
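Put together, a cautious configuration might look like the following sketch; restricting automerge to minor and patch updates is our assumption here, not a recommendation from the talk:

```json
{
  "extends": ["config:recommended"],
  "packageRules": [
    {
      "matchUpdateTypes": ["minor", "patch"],
      "automerge": true
    }
  ]
}
```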
The demo showed an automerged rxjs update, triggering a new release (v1.2.1) via semantic-release, tagged, and deployed to Google Cloud Run. A failed Angular update (due to a major version gap) demonstrated how failing tests block automerge, ensuring safety.
Renovate can also update itself and its configuration (e.g., deprecated fields) via the config:migration preset, creating MRs for self-updates.
Lessons Learned and Recommendations
From their experiences, Jean-Philippe and Lise shared key tips:
- Manage PR Overload: Limit concurrent PRs (e.g., prConcurrentLimit: 5) and group related updates to reduce noise (see the sketch after this list).
- Use Schedules: Run Renovate at off-peak times (e.g., nightly) to avoid overloading CI runners and impacting production deployments.
- Ensure Robust Tests: Automerge relies on trustworthy tests; weak test coverage can lead to broken builds.
- Balance Frequency: Frequent runs catch updates quickly but risk conflicts; infrequent runs may miss critical patches.
- Monitor Resource Usage: Excessive pipelines can strain runners and increase costs in autoscaling environments (e.g., cloud platforms).
- Handle Transitive Dependencies: Renovate manages them like direct dependencies, but cascading updates require careful review.
- Support Diverse Ecosystems: Renovate works well with Java (e.g., Spring Boot, Quarkus), Scala, and NPM, with grouping to manage high-dependency ecosystems like NPM.
- Internal Repositories: Configure Renovate to scan private registries by specifying URLs.
- Major Updates: Use presets to stage major updates incrementally, avoiding risky automerge for breaking changes.
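A sketch combining several of these tips; the values are illustrative, not figures from the talk:

```json
{
  "extends": ["config:recommended"],
  "prConcurrentLimit": 5,
  "schedule": ["after 10pm every weekday", "every weekend"],
  "packageRules": [
    {
      "matchPackagePatterns": ["^eslint"],
      "groupName": "eslint packages"
    }
  ]
}
```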
Takeaways
Jean-Philippe and Lise’s talk highlighted how Dependabot and Renovate transform dependency management from a chore to a streamlined process. Their demo and practical advice showed how Renovate’s flexibility—via presets, automerge, and dashboards—empowers teams to stay secure and up-to-date, especially in complex microservice environments. However, success requires careful configuration, robust testing, and resource management to avoid overwhelming teams or infrastructure. 🌟
[DevoxxUK2024] Project Leyden: Capturing Lightning in a Bottle by Per Minborg
Per Minborg, a seasoned member of Oracle’s Core Library team, delivered an insightful session at DevoxxUK2024, unveiling the ambitions of Project Leyden, a transformative initiative to enhance Java application performance. Focused on slashing startup time, accelerating warmup, and reducing memory footprint, Per’s talk explores how Java can evolve to meet modern demands while preserving its dynamic nature. By strategically shifting computations to optimize execution, Project Leyden introduces innovative techniques like condensers and enhanced Class Data Sharing (CDS). This session provides a roadmap for developers seeking to harness Java’s potential in high-performance environments, balancing flexibility with efficiency.
The Vision of Project Leyden
Per begins by outlining the core objectives of Project Leyden: improving startup time, warmup time, and memory footprint. Startup time, the duration from launching an application to its first meaningful output (e.g., a “Hello World” or serving a web request), is critical for user experience. Warmup time, the period until an application reaches peak performance through JIT compilation, can hinder responsiveness in dynamic systems. Footprint, encompassing memory and storage use, impacts scalability, especially in cloud environments. Per emphasizes that the best approach is to eliminate unnecessary computations, but when that’s not feasible, shifting them temporally—either earlier to compile time or later to runtime—can yield significant gains. This philosophy underpins Leyden’s strategy to refine Java’s execution model.
Shifting Computations for Efficiency
A cornerstone of Project Leyden is the concept of temporal computation shifting. Per explains that Java’s dynamic nature—encompassing dynamic class loading, JIT compilation, and runtime optimizations—enables expressive programming but can inflate startup and warmup times. By moving computations to build time, such as through constant folding or ahead-of-time (AOT) compilation, Leyden reduces runtime overhead. Alternatively, lazy evaluation postpones non-critical tasks, streamlining startup. Per introduces condensers, a novel mechanism that transforms program representations by shifting computations earlier, adding metadata, or imposing constraints on dynamism. Condensers are composable, meaning-preserving, and selectable, allowing developers to tailor optimizations based on application needs. For instance, a condenser might precompile lambda expressions into bytecode at build time, slashing runtime costs.
Enhancing Class Data Sharing (CDS)
Per delves into Class Data Sharing (CDS), a long-standing Java feature that Project Leyden enhances to achieve dramatic performance boosts. CDS allows pre-initialized JDK classes to be stored in a file, bypassing costly class loading during startup. With CDS++, Leyden extends this to include application classes, compiled code, and resolved constant pool references. Per shares compelling benchmarks: a test compiling 100 small Java files achieved a 2x startup improvement, while an XML parsing workload saw an 8x boost. For the Spring Pet Clinic benchmark, Leyden’s optimizations, including early class loading and cached compiled code, yielded up to 4x faster startup. These gains stem from a training run approach, where a representative execution gathers profiling data to inform optimizations, ensuring compatibility across platforms.
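While CDS++ itself is part of Leyden's ongoing work, the train-then-reuse pattern it extends can be tried with plain application CDS in any recent JDK; a sketch using the stock flags, with the jar and main class as placeholders:

```bash
# Training run: execute the app once, archiving every class it loads
java -XX:ArchiveClassesAtExit=app.jsa -cp app.jar com.example.Main

# Subsequent runs: start from the archive, skipping repeated class-loading work
java -XX:SharedArchiveFile=app.jsa -cp app.jar com.example.Main
```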
Balancing Dynamism and Performance
Java’s dynamism—encompassing dynamic typing, class loading, and reflection—empowers developers but complicates optimization. Per proposes selective constraints to balance this trade-off. For example, developers can restrict dynamic class loading for specific modules, enabling aggressive optimizations without sacrificing Java’s flexibility. The stable value feature, initially part of Leyden but now a standalone JEP, allows delayed initialization of final fields while maintaining performance akin to compile-time constants. Per illustrates this with a Fibonacci computation example, where memoization using stable values drastically reduces recursive overhead. By offering a “mixer board” of concessions, Leyden empowers developers to fine-tune performance, ensuring compatibility and preserving program semantics across diverse use cases.
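Since the stable value API is still a preview JEP, here is the memoized Fibonacci idea only conceptually, with a plain array standing in for stable values; the real feature adds at-most-once initialization guarantees that the JIT can treat like compile-time constants:

```java
import java.math.BigInteger;

// Conceptual sketch: each slot is computed at most once, then reused
// like a constant, which is the behavior stable values are designed to provide.
public final class Fib {
    private static final BigInteger[] CACHE = new BigInteger[1_000];

    static BigInteger fib(int n) {
        if (n < 2) return BigInteger.valueOf(n);
        if (CACHE[n] != null) return CACHE[n];   // already initialized: reuse
        BigInteger value = fib(n - 1).add(fib(n - 2));
        CACHE[n] = value;                        // initialize exactly once
        return value;
    }

    public static void main(String[] args) {
        System.out.println(fib(90));             // fast, despite naive recursion
    }
}
```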
[DevoxxGR2024] Small Steps Are the Fastest Way Forward: Navigating Chaos in Software Development
Sander Hoogendoorn, CTO at iBOOD, delivered an engaging and dynamic talk at Devoxx Greece 2024, addressing the challenges of software development in a rapidly changing world. Drawing from his extensive experience as a programmer, architect, and leader, Sander explored how organizations can overcome technical debt and the innovator’s dilemma by embracing continuous experimentation, small teams, and short delivery cycles. His narrative, peppered with real-world anecdotes, offered practical strategies for navigating complexity and fostering innovation in a post-agile landscape.
Understanding Technical Debt and Quality
Sander opened by tackling the elusive concept of software quality, contrasting it with tangible products like coffee or cars, where higher quality correlates with higher cost. In software, quality—encompassing maintainability, testability, and reliability—is harder to quantify and often lacks a direct price relationship. He introduced Ward Cunningham’s concept of technical debt, where initial shortcuts accelerate development but, if unaddressed, can cripple organizations. Sander shared an example from an insurance company with 18 million lines of COBOL and 12 million lines of Java, where outdated code and retiring developers created a maintenance nightmare. Similarly, at iBOOD, a patchwork of systems led to “technical death,” where maintenance consumed all resources, stifling innovation.
To mitigate technical debt, Sander advocated for continuous refactoring as part of daily work, rather than a separate task requiring approval. He emphasized finding a balance between quality and cost, tailored to the organization’s goals—whether building a quick mobile app or a long-lasting banking system.
The Innovator’s Dilemma and Continuous Renovation
Sander introduced the innovator’s dilemma, where successful products reach a saturation point, and new entrants with innovative technologies disrupt the market. He recounted his experience at a company that pioneered smart thermostats but failed to reinvent itself, leading to its acquisition and dissolution. To avoid this fate, organizations must operate in “continuous renovation mode,” maintaining existing systems while incrementally building new features. This approach, inspired by John Gall’s law—that complex systems evolve from simple, working ones—requires small, iterative steps rather than large-scale rebuilds.
At iBOOD, Sander implemented this by allocating 70% of resources to innovation and 30% to maintenance, ensuring the “shop stays open” while progressing toward strategic goals. He emphasized the importance of defining a clear “dot on the horizon,” such as iBOOD’s ambition to become Europe’s leading deal site, to guide these efforts.
Navigating Complexity with the Cynefin Framework
To navigate the chaotic and complex nature of modern software development, Sander introduced the Cynefin framework, which categorizes problems into clear, complicated, complex, and chaotic zones. Most software projects reside in the complex zone, where no best practices exist and experimentation is essential. He cautioned against treating complex problems as merely complicated, citing the insurance client mentioned earlier and its failed attempts to rebuild its systems from scratch. Instead, organizations should run small experiments, accepting the risk of failure as a path to learning.
Sander illustrated this with iBOOD’s decision-making process, where a cross-functional team evaluates ideas based on their alignment with strategic goals, feasibility, and size. Ideas too large are broken into smaller pieces, ensuring manageable experiments that deliver quick feedback.
Delivering Features in Short Cycles
Sander argued that traditional project-based approaches and even Scrum’s sprint model are outdated in a world demanding rapid iteration. He advocated for continuous delivery, where features are deployed multiple times daily, minimizing dependencies and enabling immediate feedback. At iBOOD, features are released in basic versions, refined based on business input, and prioritized over less critical tasks. This approach, supported by automated CI/CD pipelines and extensive testing, ensures quality is built into the process, reducing reliance on manual inspections.
He shared iBOOD’s pipeline, which includes unit tests, static code analysis, and production testing, allowing developers to code with confidence. By breaking features into small, independent services, iBOOD achieves flexibility and resilience, avoiding the pitfalls of monolithic systems.
Empowering Autonomous Micro-Teams
Finally, Sander addressed the human element of software development, arguing that the team, not the individual, is the smallest unit of delivery. He advocated for autonomous “micro-teams” that self-organize around tasks, drawing an analogy to jazz ensembles where musicians form sub-groups based on skills. At iBOOD, developers choose their tasks and collaborators, fostering learning and flexibility. This autonomy, while initially uncomfortable for some, encourages ownership and innovation.
Sander emphasized minimizing rules to promote critical thinking, citing an Amsterdam experiment where removing traffic signs improved road safety through communication. By eliminating Scrum rituals like sprints and retrospectives, iBOOD’s teams focus on solving one problem daily, enhancing efficiency and morale.
Conclusion
Sander Hoogendoorn’s talk at Devoxx Greece 2024 offered a refreshing perspective on thriving in software development’s chaotic landscape. By addressing technical debt, embracing the innovator’s dilemma, and leveraging the Cynefin framework, organizations can navigate complexity through small, experimental steps. Continuous delivery and autonomous micro-teams further empower teams to innovate rapidly and sustainably. Sander’s practical insights, grounded in his leadership at iBOOD, provide a compelling blueprint for organizations seeking to evolve in a post-agile world.
[DevoxxFR 2024] Debugging Your Salary: Winning Strategies for Successful Negotiation
At Devoxx France 2024, Shirley Almosni Chiche, an independent IT recruiter and career agent, delivered a dynamic session titled “Debuggez votre salaire ! Mes stratégies gagnantes pour réussir sa négociation salariale.” With over a decade of recruitment experience, Shirley unpacked the complexities of salary negotiation, offering actionable strategies to overcome common obstacles. Through humor, personas, and real-world insights, she empowered developers to approach salary discussions with confidence and preparation, transforming a daunting process into a strategic opportunity.
Navigating the Salary Minefield
Shirley opened with a candid acknowledgment: salary discussions are fraught with tension, myths, and frustrations. Drawing from her role at Build RH, her recruitment firm, she likened salary negotiation to a high-stakes race, where candidates endure lengthy recruitment processes only to face disappointing offers. Common employer excuses—“we must follow the salary grid,” “we can’t pay more than existing staff,” or “the budget is tight”—often derail negotiations, leaving candidates feeling undervalued.
To frame her approach, Shirley introduced six “bugs” that justify low salaries, each paired with a persona representing typical employer archetypes. These included the rigid “Big Corp” manager enforcing salary grids, the team-focused “Didier Deschamps” avoiding pay disparities, and the budget-conscious “François Damiens” citing financial constraints. Other personas, like the overly technical “Elon” scrutinizing code, the relentless negotiator “Patrick,” and the discriminatory “Hubert,” highlighted diverse challenges candidates face.
Shirley shared market insights, noting a 2023–2024 tech slowdown with 200,000 global layoffs, reduced venture funding, and a shift toward cost-conscious industries like banking and retail. This context, she argued, demands strategic preparation to secure fair compensation.
Countering the Bugs: Tactical Responses
For each bug, Shirley offered counter-arguments rooted in empathy and alignment with employer priorities. Against the salary grid, she advised exploring non-salary benefits like profit-sharing or PERCO plans, common in large firms. Using a “mirror empathy” tactic, candidates can frame salary needs in the employer’s language—e.g., linking pay to productivity. Challenging outdated grids by highlighting market research or internal surveys also strengthens arguments.
For the “Didier Deschamps” persona, Shirley suggested emphasizing unique skills (e.g., full-stack expertise in a backend-heavy team) to justify higher pay without disrupting team cohesion. Proposing contributions like speaking at conferences or aiding recruitment can further demonstrate value. She shared a success story where a candidate engaged the team directly, securing a better offer through collective dialogue.
When facing “François Damiens” and financial constraints, Shirley recommended focusing on risk mitigation. For startups, candidates can negotiate stock options or bonuses, arguing that their expertise accelerates product delivery, saving recruitment costs. Highlighting polyvalence—combining skills like development, data, and security—positions candidates as multi-role assets, justifying premium pay.
For technical critiques from “Elon,” Shirley urged immediate feedback post-interview to address perceived weaknesses. If gaps exist, candidates should negotiate training opportunities to ensure long-term fit. Pointing out evaluation mismatches (e.g., testing frontend skills for a backend role) can redirect discussions to relevant strengths.
Against “Patrick,” the negotiator, Shirley advised setting firm boundaries—two rounds of negotiation max—to avoid endless haggling. Highlighting project flaws tactfully and aligning expertise with business goals can shift the dynamic from adversarial to collaborative.
Addressing Discrimination: A Sobering Reality
Shirley tackled the “Hubert” persona, representing discriminatory practices, with nuance. Beyond gender pay gaps, she highlighted biases against older candidates, neurodivergent individuals, those with disabilities, and career switchers. Citing her mother’s experience as a Maghrebi woman facing a 20% pay cut, Shirley acknowledged the harsh realities for marginalized groups.
Rather than dismissing discriminatory offers outright, she advised viewing them as career stepping stones. Candidates can leverage such roles for training or experience, using “mirror empathy” to negotiate non-salary benefits like remote work or learning opportunities. While acknowledging privilege, Shirley urged resilience, encouraging candidates to “lend an ear to learning” and rebound from setbacks.
Mastering Preparation: Anticipating the Negotiation
Shirley emphasized proactive preparation as the cornerstone of successful negotiation. Understanding one’s relationship with money—shaped by upbringing, traumas, or social pressures—is critical. Some candidates undervalue themselves due to impostor syndrome, while others see salary as a status symbol or family lifeline. Recognizing these drivers informs negotiation strategies.
She outlined key preparation steps:
- Job Selection: Target roles within your expertise and in high-paying sectors (e.g., cloud, security) for better leverage. Data roles can yield 7–13% salary gains.
- Market Research: Use resources like Choose Your Boss or APEC barometers to benchmark salaries. Shirley noted Île-de-France salaries exceed regional ones by 10–15K, with a 70K ceiling for seniors in 2023.
- Company Analysis: Assess financial health via LinkedIn or job ad longevity. Long-posted roles signal negotiation flexibility.
- Recruiter Engagement: Treat initial recruiter calls as data-gathering opportunities, probing team culture, hiring urgency, and technical expectations.
- Value Proposition: Highlight impact—product roadmaps, technical migrations, or team mentoring—early in interviews to set a premium tone.
Shirley cautioned against oversharing personal financial details (e.g., current salary or expenses) during salary discussions. Instead, provide a specific range (e.g., “around 72K”) based on market data and role demands. Mentioning parallel offers tactfully can spur employers to act swiftly.
Sealing the Deal: Confidence and Coherence
In the final negotiation phase, Shirley advised a 48-hour reflection period after receiving an offer, consulting trusted peers for perspective. Counteroffers should be fact-based, reiterating interview insights and using empathetic language. Timing matters—avoid Mondays or late Fridays for discussions.
Citing APEC data, Shirley noted that 80% of executives who negotiate are satisfied, with 65% securing their target salary or higher. She urged candidates to remain consistent, avoiding last-minute demands that erode trust. Beyond salary, consider workplace culture, inclusion, and work-life balance to ensure long-term fit.
Shirley closed with a rallying call: don’t undervalue your skills or settle for less. By blending preparation, empathy, and resilience, candidates can debug their salary negotiations and secure rewarding outcomes.
Hashtags: #SalaryNegotiation #DevoxxFrance #CareerDevelopment #TechRecruitment
[PyConUS 2024] How Python Harnesses Rust through PyO3
David Hewitt, a key contributor to the PyO3 library, delivered a comprehensive session at PyConUS 2024, unraveling the mechanics of integrating Rust with Python. As a Python developer for over a decade and a lead maintainer of PyO3, David provided a detailed exploration of how Rust’s power enhances Python’s ecosystem, focusing on PyO3’s role in bridging the two languages. His talk traced the journey of a Python function call to Rust code, offering insights into performance, security, and concurrency, while remaining accessible to those unfamiliar with Rust.
Why Rust in Python?
David began by outlining the motivations for combining Rust with Python, emphasizing Rust’s reliability, performance, and security. Unlike Python, where exceptions can arise unexpectedly, Rust’s structured error handling via pattern matching ensures predictable behavior, reducing debugging challenges. Performance-wise, Rust’s compiled nature offers significant speedups, as seen in libraries like Pydantic, Polars, and Ruff. David highlighted Rust’s security advantages, noting its memory safety features prevent common vulnerabilities found in C or C++, making it a preferred choice for companies like Microsoft and Google. Additionally, Rust’s concurrency model avoids data races, aligning well with Python’s evolving threading capabilities, such as sub-interpreters and free-threading in Python 3.13.
PyO3: Bridging Python and Rust
Central to David’s talk was PyO3, a Rust library that facilitates seamless integration with Python. PyO3 allows developers to write Rust code that runs within a Python program or vice versa, using procedural macros to generate Python-compatible modules. David explained how tools like Maturin and setuptools-rust simplify project setup, enabling developers to compile Rust code into native libraries that Python imports like standard modules. He emphasized PyO3’s goal of maintaining a low barrier to entry, with comprehensive documentation and a developer guide to assist Python programmers venturing into Rust, ensuring a smooth transition across languages.
Tracing a Function Call
David took the audience on a technical journey, tracing a Python function call through PyO3 to Rust code. Using a simple word-counting function as an example, he showed how a Rust implementation, marked with PyO3’s #[pyfunction] attribute, mirrors Python’s structure while offering performance gains of 2–4x. He dissected the Python interpreter’s bytecode, revealing how the CALL instruction invokes PyObject_Vectorcall, which resolves to a Rust function pointer via PyO3’s generated code. This “trampoline” handles critical safety measures, such as preventing Rust panics from crashing the Python interpreter and managing the Global Interpreter Lock (GIL) for safe concurrency. David’s step-by-step breakdown clarified how arguments are passed and converted, ensuring seamless execution.
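For a feel of the Rust side, here is a minimal sketch of such a function; the module and function names are invented, and it assumes PyO3 0.21+ built with Maturin:

```rust
use pyo3::prelude::*;

/// Count whitespace-separated words; callable from Python as a plain function.
#[pyfunction]
fn word_count(text: &str) -> usize {
    text.split_whitespace().count()
}

/// Module definition: after building, Python can `import word_stats`.
#[pymodule]
fn word_stats(m: &Bound<'_, PyModule>) -> PyResult<()> {
    m.add_function(wrap_pyfunction!(word_count, m)?)?;
    Ok(())
}
```

Calling word_stats.word_count("a b c") from Python then dispatches through exactly the vector-call trampoline described above.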
Future of Rust in Python’s Ecosystem
Concluding, David reflected on Rust’s growing adoption in Python, citing over 350 projects monthly uploading Rust code to PyPI, with downloads exceeding 3 billion annually. He predicted that Rust could rival C/C++ in the Python ecosystem within 2–4 years, driven by its reliability and performance. Addressing concurrency, David discussed how PyO3 could adapt to Python’s sub-interpreters and free-threading, potentially enforcing immutability to simplify multithreaded interactions. His vision for PyO3 is to enhance Python’s strengths without replacing it, fostering a symbiotic relationship that empowers developers to leverage Rust’s precision where needed.
Hashtags: #Rust #PyO3 #Python #Performance #Security #PyConUS2024 #DavidHewitt #Pydantic #Polars #Ruff
[DevoxxUK2024] Productivity is Messing Around and Having Fun by Trisha Gee & Holly Cummins
In their DevoxxUK2024 talk, Trisha Gee (Gradle) and Holly Cummins (Red Hat, Quarkus) explore developer productivity through the lens of joy and play, challenging conventional metrics like lines of code. They argue that developer satisfaction drives business success, drawing on Fred Brooks’ The Mythical Man-Month to highlight why programmers enjoy crafting, solving puzzles, and learning. However, they note that developers spend only ~32% of their time coding, with the rest consumed by toil (e.g., waiting for builds, context-switching).
The speakers critique metrics like lines of code, citing examples where incentivizing code volume led to bloated, unmaintainable codebases (e.g., ASCII art comments). They warn against AI tools like Copilot generating verbose, unnecessary code (e.g., redundant getters/setters in Quarkus), which increases technical debt. Instead, they advocate for frameworks like Quarkus that reduce boilerplate through build-time bytecode inspection, enabling concise, expressive code.
Trisha and Holly introduce the SPACE framework (Satisfaction, Performance, Activity, Communication, Efficiency) as a holistic approach to measuring productivity, emphasizing developer well-being and flow over raw output. They highlight the importance of mental space for creativity, citing the brain’s default mode network, activated during low-stimulation activities like showering, running, or knitting. They encourage embracing “boredom” and play, supported by research showing happier developers are more productive. The talk critiques flawed metrics (e.g., McKinsey’s) and warns against management misconceptions, like assuming developers are replaceable by AI.
[DevoxxFR 2024] Staff Engineer: A Vital Role in Technical Leadership
My esteemed former colleague François Nollen, a technical expert at SNCF Connect & Tech, delivered an engaging talk at Devoxx France 2024 on the role of the Staff Engineer. Often overshadowed by the more familiar Engineering Manager position, the Staff Engineer role is gaining traction as a critical path for technical leadership without management responsibilities. François shared his journey and insights into how Staff Engineers operate at SNCF Connect, offering a blueprint for developers aspiring to influence organizations at scale. This post explores the role’s responsibilities, its impact, and its relevance in modern tech organizations.
Defining the Staff Engineer Role
The Staff Engineer role, rooted in Silicon Valley’s tech giants, represents a senior technical contributor who drives impact across multiple teams without managing them directly. François described Staff Engineers as versatile problem-solvers, blending deep technical expertise with strong collaboration skills. Unlike Engineering Managers, who focus on team management, Staff Engineers tackle complex technical challenges, set standards, and foster innovation. At SNCF Connect, they are called “Technical Expertise Referents,” reflecting their role in guiding technical strategy and mentoring teams.
A Day in the Life
Staff Engineers at SNCF Connect enjoy significant autonomy, with no fixed daily tasks. François outlined a typical day, which begins with monitoring communication channels like Slack to identify team challenges. They contribute code, conduct reviews, and drive strategic initiatives, such as defining best practices or evaluating technical risks. Unlike team-bound developers, Staff Engineers operate at an organizational level, collaborating with engineering, HR, and communication teams to align technical and business goals. This broad scope requires a balance of technical depth and interpersonal finesse.
Impact and Collaboration
The influence of a Staff Engineer stems from their expertise and ability to inspire trust, not formal authority. François highlighted their role in unblocking teams, accelerating projects, and shaping technical strategy alongside Principal Engineers. At SNCF Connect, Staff Engineers work as a collective, amplifying their impact on cross-cutting initiatives like DevOps and continuous delivery. This collaborative approach contrasts with traditional roles like architects, who may be disconnected from delivery, making Staff Engineers integral to dynamic, agile environments.
Is It Right for You?
François posed a reflective question: is the Staff Engineer role suited for everyone? It demands extensive technical experience, organizational awareness, and strong communication skills. Developers who thrive on solving complex problems, mentoring others, and driving systemic change without managing teams may find this path rewarding. For organizations, Staff Engineers offer a framework to retain and empower experienced developers, avoiding the pitfalls of promoting them into unsuitable management roles, as per the Peter Principle.
Hashtags: #StaffEngineer #TechnicalLeadership #DevoxxFrance #FrançoisNollen #SNCFConnect #Engineering #Agile
[DevoxxFR 2024] Going AOT: Mastering GraalVM for Java Applications
Alina Yurenko 🇺🇦, a developer advocate at Oracle Labs, captivated audiences at Devoxx France 2024 with her deep dive into GraalVM’s ahead-of-time (AOT) compilation for Java applications. With a passion for open-source and community engagement, Alina explored how GraalVM’s Native Image transforms Java applications into compact, high-performance native executables, ideal for cloud environments. Through demos and practical guidance, she addressed building, testing, and optimizing GraalVM applications, debunking myths and showcasing its potential. This post unpacks Alina’s insights, offering a roadmap for adopting GraalVM in production.
GraalVM and Native Image Fundamentals
Alina introduced GraalVM as both a high-performance JDK and a platform for AOT compilation via Native Image. Unlike traditional JVMs, GraalVM allows developers to run Java applications conventionally or compile them into standalone native executables that don’t require a JVM at runtime. This dual capability, built on over a decade of research at Oracle Labs, offers Java’s developer productivity alongside native performance benefits like faster startup and lower resource usage. Native Image, GA since 2019, analyzes an application’s bytecode at build time, identifying reachable code and dependencies to produce a compact executable, eliminating unused code and pre-populating the heap for instant startup.
The closed-world assumption underpins this process: all application behavior must be known at build time, unlike the JVM’s dynamic runtime optimizations. This enables aggressive optimizations but requires careful handling of dynamic features like reflection. Alina demonstrated this with a Spring Boot application, which started in 1.3 seconds on GraalVM’s JVM but just 47 milliseconds as a native executable, highlighting its suitability for serverless and microservices where startup speed is critical.
Benefits Beyond Startup Speed
While fast startup is a hallmark of Native Image, Alina emphasized its broader advantages, especially for long-running applications. By shifting compilation, class loading, and optimization to build time, Native Image reduces runtime CPU and memory usage, offering predictable performance without the JVM’s warm-up phase. A Spring Pet Clinic benchmark showed Native Image matching or slightly surpassing the JVM’s C2 compiler in peak throughput, a testament to two years of optimization efforts. For memory-constrained environments, Native Image excels, delivering up to 2–3x higher throughput per memory unit at heap sizes of 512MB to 1GB, as seen in throughput density charts.
Security is another benefit. By excluding unused code, Native Image reduces the attack surface, and dynamic features like reflection require explicit allow-lists, enhancing control. Alina also noted compatibility with modern Java frameworks like Spring Boot, Micronaut, and Quarkus, which integrate Native Image support, and a community-maintained list of compatible libraries on the GraalVM website, ensuring broad ecosystem support.
Building and Testing GraalVM Applications
Alina provided a practical guide for building and testing GraalVM applications. Using a Spring Boot demo, she showcased the Native Maven plugin, which streamlines compilation. The build process, while resource-intensive for large applications, typically stays within 2GB of memory for smaller apps, making it viable on CI/CD systems like GitHub Actions. She recommended developing and testing on the JVM, compiling to Native Image only when adding dependencies or in CI/CD pipelines, to balance efficiency and validation.
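Day to day, that workflow maps onto ordinary Maven commands; a sketch assuming Spring Boot’s documented native profile and a project whose binary is named demo:

```bash
# Develop and test on the JVM as usual
mvn test

# When validating native behavior (locally or in CI):
mvn -Pnative native:compile   # AOT-compiles the app with Native Image
./target/demo                 # run the resulting native executable
```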
Dynamic features like reflection pose challenges, but Alina outlined solutions: predictable reflection works out-of-the-box, while complex cases may require JSON configuration files, often provided by frameworks or libraries like H2. A centralized GitHub repository hosts configs for popular libraries, and a tracing agent can generate configs automatically by running the app on the JVM. Testing support is robust, with JUnit and framework-specific tools like Micronaut’s test resources enabling integration tests in Native mode, often leveraging Testcontainers.
Optimizing and Future Directions
To achieve peak performance, Alina recommended profile-guided optimizations (PGO), where an instrumented executable collects runtime profiles to inform a final build, combining AOT’s predictability with JVM-like insights. A built-in ML model predicts profiles for simpler scenarios, offering 6–8% performance gains. Other optimizations include using the G1 garbage collector, enabling machine-specific flags, or building static images for minimal container sizes with distroless images.
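The PGO loop she described maps to three steps; a sketch using the documented Oracle GraalVM flags, with jar, class, and output names as placeholders:

```bash
# 1. Build an instrumented binary that records runtime profiles
native-image --pgo-instrument -cp app.jar com.example.Main -o app-instr

# 2. Run a representative workload; profiles are written to default.iprof
./app-instr

# 3. Rebuild, feeding the profiles back into the AOT compiler
native-image --pgo=default.iprof -cp app.jar com.example.Main -o app
```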
Looking ahead, Alina highlighted two ambitious GraalVM projects: Layered Native Images, which pre-compile base images (e.g., JDK or Spring) to reduce build times and resource usage, and GraalOS, a platform for deploying native images without containers, eliminating container overhead. Demos of a LangChain for Java app and a GitHub crawler using Java 22 features showcased GraalVM’s versatility, running seamlessly as native executables. Alina’s session underscored GraalVM’s transformative potential, urging developers to explore its capabilities for modern Java applications.
Hashtags: #GraalVM #NativeImage #Java #AOT #AlinaYurenko #DevoxxFR2024
[DevoxxFR 2024] Super Tech’Rex World: The Assembler Strikes Back
Nicolas Grohmann, a developer at Sopra Steria, took attendees on a nostalgic journey through low-level programming with his talk, “Super Tech’Rex World: The Assembler Strikes Back.” Over five years, Nicolas modified Super Mario World, a 1990 Super Nintendo Entertainment System (SNES) game coded in assembler, transforming it into a custom adventure featuring a dinosaur named T-Rex. Through live coding and engaging storytelling, he demystified assembler, revealing its principles and practical applications. His session illuminated the inner workings of 1990s consoles while showcasing assembler’s relevance to modern computing.
A Retro Quest Begins
Nicolas opened with a personal anecdote, recounting how his project began in 2018, before Sopra Steria’s Tech Me Up community formed in 2021. He described this period as the “Stone Age” of his journey, marked by trial and error. His goal was to hack Super Mario World, a beloved SNES title, replacing Mario with T-Rex, coins with pixels (a Sopra Steria internal currency), and mushrooms with certifications that boost strength. Enemies became “pirates,” symbolizing digital adversaries.
To set the stage, Nicolas showcased the SNES, a 1990s console with a CPU, ROM, and RAM—components familiar to modern developers. He launched an emulator to demonstrate Super Mario World, highlighting its mechanics: jumping, collecting items, and battling enemies. A modified ROM revealed his custom version, where T-Rex navigated a reimagined world. This demo captivated the audience, blending nostalgia with technical ambition.
For the first two years, Nicolas relied on community tools to tweak graphics and levels, such as replacing Mario’s sprite with T-Rex. However, as a developer, he yearned to contribute original code, prompting him to learn assembler. This shift marked the “Age of Discoveries,” where he tackled the language’s core concepts: machine code, registers, and memory addressing.
Decoding Assembler’s Foundations
Nicolas introduced assembler’s essentials, starting with machine code, the binary language of 0s and 1s that CPUs understand. Grouped into 8-bit bytes (octets), a SNES ROM comprises 1–4 megabytes of such code. He clarified binary and hexadecimal systems, noting that hexadecimal (0–9, A–F) compacts binary for readability. For example, 15 in decimal is 1111 in binary and 0F in hexadecimal, while 255 (all 1s in a byte) is FF.
Next, he explored registers, small memory locations within the CPU, akin to global variables. The accumulator, a key register, stores a single octet for operations, while the program counter tracks the next instruction’s address. These registers enable precise control over a program’s execution.
Memory addressing, Nicolas’s favorite concept, likens SNES memory to a city. Each octet resides in a “house” (address 00–FF), within a “street” (page 00–FF), in a “neighborhood” (bank 00–FF). This structure yields 16 megabytes of addressable memory. Addressing modes—long (full address), absolute (bank preset), and direct page (bank and page preset)—optimize code efficiency. Direct page, limited to 256 addresses, is ideal for game variables, streamlining operations.
Assembler, Nicolas clarified, isn’t a single language but a family of instruction sets tailored to CPU types. Opcodes, mnemonic instructions like LDA (load accumulator) and STA (store accumulator), translate to machine code (e.g., LDA becomes A5 for direct page). These opcodes, combined with addressing modes, form the backbone of assembler programming.
Live Coding: Empowering T-Rex
Nicolas transitioned to live coding, demonstrating assembler’s practical application. His goal: make T-Rex invincible and alter gameplay to challenge pirates. Using Super Mario World’s memory map, a community-curated resource, he targeted address 7E0019, which tracks the player’s state (0 for small, 1 for large). By writing LDA #$01 (load 1) and STA $19 (store to 7E0019), he ensured T-Rex remained large, immune to damage. The # denotes an immediate value, distinguishing it from an address.
To nerf T-Rex’s jump, Nicolas manipulated controller inputs at addresses 7E0015 and 7E0016, which store button states as bitmasks (e.g., the leftmost bit for button B, used for jumping). Using LDA $15 and AND #$7F (bitwise AND with 01111111), he cleared the B button’s bit, disabling jumps while preserving other controls. He applied this to both addresses, ensuring consistency.
To restore button B for firing projectiles, Nicolas used 7E0016, which flags buttons pressed in a single frame. With LDA $16, AND #$80 (isolating B’s bit), and BEQ (branch if zero to skip firing), he ensured projectiles spawned only on B’s press. A JSL (jump to subroutine long) invoked a community routine to spawn a custom sprite—a projectile that moves right and destroys enemies.
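Reconstructed from that description, the check might look like this; a hedged sketch in 65c816 assembler, where SpawnProjectile is a placeholder label for the community routine:

```asm
CheckFire:
    LDA $16              ; buttons newly pressed this frame (7E0016)
    AND #$80             ; isolate bit 7, button B
    BEQ .done            ; zero: B not pressed, skip firing
    JSL SpawnProjectile  ; community routine that spawns the custom sprite
.done:
    RTL                  ; return to the hooked code
```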
These demos showcased assembler’s precision, leveraging memory maps and opcodes to reshape gameplay. Nicolas’s iterative approach—testing, tweaking, and re-running—mirrored real-world debugging.
Mastering the Craft: Hooks and the Stack
Reflecting on 2021, the “Modern Age,” Nicolas shared how he mastered code insertion. Since modifying Super Mario World’s original ROM risks corruption, he used hooks—redirects to free memory spaces. A tool inserts custom code at an address like $A00, replacing a segment (e.g., four octets) with a JSL (jump subroutine long) to a hook. The hook preserves original code, jumps to the custom code via JML (jump long), and returns with RTL (return long), seamlessly integrating modifications.
The stack, a RAM region for temporary data, proved crucial. Managed by a stack pointer register, it supports opcodes like PHA (push accumulator) and PLA (pull accumulator). JSL pushes the return address before jumping, and RTL pops it, ensuring correct returns. This mechanism enabled complex routines without disrupting the game’s flow.
Nicolas introduced index registers X and Y, which support opcodes like LDX and STX. Indexed addressing (e.g., LDA $00,X) adds X’s value to an address, enabling dynamic memory access. For example, setting X to 2 and using LDA $00,X accesses address $02.
Conquering the Game and Beyond
In a final demo, Nicolas teleported T-Rex to the game’s credits by checking sprite states. Address 7E14C8 and the next 11 addresses track 12 sprite slots (0 for empty). Using X as a counter, he looped through LDA $14C8,X, branching with BNE (branch if not zero) if a sprite exists, or decrementing X with DEX and looping with BPL (branch if positive). If all slots are empty, a JSR (jump subroutine) triggers the credits, ending the game.
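That loop translates almost line for line into assembler; a sketch, with TriggerCredits as a placeholder for the routine that starts the ending sequence:

```asm
    LDX #$0B             ; 12 sprite slots, indices 11 down to 0
.loop:
    LDA $14C8,X          ; sprite status for slot X (0 = empty)
    BNE .done            ; a live sprite exists: do nothing this frame
    DEX                  ; next slot
    BPL .loop            ; keep going while X >= 0
    JSR TriggerCredits   ; every slot empty: roll the credits
.done:
```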
Nicolas concluded with reflections on his five-year journey, likening assembler to a steep but rewarding climb. His game, nearing release on the Super Mario World hacking community’s site, features space battles and a 3D boss, pushing SNES limits. He urged developers to embrace challenging learning paths, emphasizing that persistence yields profound satisfaction.
Hashtags: #Assembler #DevoxxFrance #SuperNintendo #RetroGaming #SopraSteria #LowLevelProgramming
[DevoxxGR2024] Devoxx Greece 2024 Sustainability Chronicles: Innovate Through Green Technology With Kepler and KEDA
At Devoxx Greece 2024, Katie Gamanji, a senior field engineer at Apple and a technical oversight committee member for the Cloud Native Computing Foundation (CNCF), delivered a compelling presentation on advancing environmental sustainability within the cloud-native ecosystem. With Kubernetes celebrating its tenth anniversary, Katie emphasized the urgent need for technologists to integrate green practices into their infrastructure strategies. Her talk explored how tools like Kepler and KEDA’s carbon-aware operator enable practitioners to measure and mitigate carbon emissions, while fostering a vibrant, inclusive community to drive these efforts forward. Drawing from her extensive experience and leadership in the CNCF, Katie provided a roadmap for aligning technological innovation with climate responsibility.
The Imperative of Cloud Sustainability
Katie began by underscoring the critical role of sustainability in the tech sector, particularly given the industry’s contribution to global greenhouse gas emissions. She highlighted that the tech sector accounts for 1.4% of global emissions, a figure that could soar to 10% within a decade without intervention. However, by leveraging renewable energy, emissions could be reduced by up to 80%. International agreements like COP21 and the United Nations’ Sustainable Development Goals (SDGs) have spurred national regulations, compelling organizations to assess and report their carbon footprints. Major cloud providers, such as Google Cloud Platform (GCP), have set ambitious net-zero targets, with GCP already operating on renewable energy since 2022. Yet, Katie stressed that sustainability cannot be outsourced solely to cloud providers; organizations must embed these principles internally.
The emergence of “GreenOps,” inspired by FinOps, encapsulates the processes, tools, and cultural shifts needed to achieve digital sustainability. By optimizing infrastructure—through strategies like using spot instances or serverless architectures—organizations can reduce both costs and emissions. Katie introduced a four-phase strategy proposed by the FinOps Foundation’s Environmental Sustainability Working Group: awareness, discovery, roadmap, and execution. This framework encourages organizations to educate stakeholders, benchmark emissions, implement automated tools, and iteratively pursue ambitious sustainability goals.
Measuring Emissions with Kepler
To address emissions within Kubernetes clusters, Katie introduced Kepler, a CNCF sandbox project developed by Red Hat and IBM. Kepler, a Kubernetes Efficient Power Level Exporter, utilizes eBPF to probe system statistics and export power consumption metrics to Prometheus for visualization in tools like Grafana. Deployed as a daemon set, Kepler collects node- and container-level metrics, focusing on power usage and resource utilization. By tracing CPU performance counters and Linux kernel trace points, it calculates energy consumption in joules, converting this to kilowatt-hours and multiplying by region-specific emission factors for gases like coal, petroleum, and natural gas.
Katie demonstrated Kepler’s practical application using a Grafana dashboard, which displayed emissions per gas and allowed granular analysis by container, day, or namespace. This visibility enables organizations to identify high-emission components, such as during traffic spikes, and optimize accordingly. As a sandbox project, Kepler is gaining momentum, and Katie encouraged attendees to explore it, provide feedback, or contribute to its development, reinforcing its potential to establish a baseline for carbon accounting in cloud-native environments.
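To give an idea of the queries behind such a dashboard, something along these lines aggregates Kepler’s per-container energy counters into per-namespace power draw; the metric name follows Kepler’s documentation, though label names can vary between versions:

```promql
# Joules per second = watts: approximate power draw per namespace
sum by (container_namespace) (
  rate(kepler_container_joules_total[5m])
)
```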
Scaling Sustainably with KEDA’s Carbon-Aware Operator
Complementing Kepler’s observational capabilities, Katie introduced KEDA (Kubernetes Event-Driven Autoscaler), a graduated CNCF project, and its carbon-aware operator. KEDA, created by Microsoft and Red Hat, scales applications based on external events, offering a rich catalog of triggers. The carbon-aware operator optimizes emissions by scaling applications according to carbon intensity—grams of CO2 equivalent emitted per kilowatt-hour consumed. In scenarios where infrastructure is powered by renewable sources like solar or wind, carbon intensity approaches zero, allowing for maximum application replicas. Conversely, high carbon intensity, such as from coal-based energy, prompts scaling down to minimize emissions.
Katie illustrated this with a custom resource definition (CRD) that configures scaling behavior based on carbon intensity forecasts from providers like WattTime or Electricity Maps. In her demo, a Grafana dashboard showed an application scaling from 15 replicas at a carbon intensity of 530 to a single replica at 580, dynamically responding to grid data. This proactive approach ensures sustainability is embedded in scheduling decisions, aligning resource usage with environmental impact.
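Her demo’s resource looked broadly like the following sketch, modeled on the open-source carbon-aware-keda-operator; the field names and thresholds here are illustrative rather than copied from the talk:

```yaml
apiVersion: carbonaware.kubernetes.azure.com/v1alpha1
kind: CarbonAwareKedaScaler
metadata:
  name: carbon-aware-scaler
spec:
  kedaTargetRef:                        # the KEDA ScaledObject being capped
    name: word-processor-scaler
    namespace: default
  carbonIntensityForecastDataSource:    # e.g., WattTime or Electricity Maps feed
    localConfigMap:
      name: carbon-intensity
      namespace: kube-system
      key: data
  maxReplicasByCarbonIntensity:         # dirtier grid => fewer replicas allowed
    - carbonIntensityThreshold: 437
      maxReplicas: 110
    - carbonIntensityThreshold: 504
      maxReplicas: 60
    - carbonIntensityThreshold: 571
      maxReplicas: 10
```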
Nurturing a Sustainable Community
Beyond technology, Katie emphasized the pivotal role of the Kubernetes community in driving sustainability. Operating on principles of inclusivity, open governance, and transparency, the community fosters innovation through technical advisory groups (TAGs) focused on domains like observability, security, and environmental sustainability. The TAG Environmental Sustainability, established just over a year ago, aims to benchmark emissions across graduated CNCF projects, raising awareness and encouraging greener practices.
To sustain this momentum, Katie highlighted the need for education and upskilling. Resources like the Kubernetes and Cloud Native Associate (KCNA) certification and her own Cloud Native Fundamentals course on Udacity lower entry barriers for newcomers. By diversifying technical and governing boards, the community can continue to evolve, ensuring it scales alongside technological advancements. Katie’s vision is a cloud-native ecosystem where innovation and sustainability coexist, supported by a nurturing, inclusive community.
Conclusion
Katie Gamanji’s presentation at Devoxx Greece 2024 was a clarion call for technologists to prioritize environmental sustainability. By leveraging tools like Kepler and KEDA’s carbon-aware operator, practitioners can measure and mitigate emissions within Kubernetes clusters, aligning infrastructure with climate goals. Equally important is the community’s role in fostering education, inclusivity, and collaboration to sustain these efforts. Katie’s insights, grounded in her leadership at Apple and the CNCF, offer a blueprint for innovating through green technology while building a resilient, forward-thinking ecosystem.