[PyConUS 2024] Demystifying Python Decorators: A Comprehensive Tutorial

At PyCon US 2024, Reuven M. Lerner, an independent trainer and consultant at Lerner Consulting, presented an in-depth tutorial titled “All About Decorators.” This session aimed to strip away the perceived complexity surrounding Python’s decorators, revealing their inherent power and versatility. Reuven’s approach was to guide attendees through the fundamental principles, practical applications, and advanced techniques of decorators, empowering developers to leverage this elegant feature for cleaner, more maintainable code. The tutorial offered a deep dive into what decorators are, their internal mechanics, how to construct them, and when to employ them effectively in various programming scenarios.

Functions as First-Class Citizens: The Foundation of Decorators

At the heart of Python’s decorator mechanism lies the concept of functions as first-class objects. Reuven Lerner began by elucidating this foundational principle, demonstrating how functions in Python are not merely blocks of code but entities that can be assigned to variables, passed as arguments to other functions, and returned as values from functions. This flexibility is pivotal, as it allows for the dynamic manipulation and extension of code behavior without altering the original function definition.

He illustrated this with simple examples, such as wrapping print statements with additional lines of text. Initially, this might involve manually calling a “wrapper” function that takes another function as an argument. This manual wrapping, while functional, quickly becomes cumbersome when applied repeatedly across numerous functions. Reuven showed how this initial approach, though verbose, laid the groundwork for understanding the more sophisticated decorator syntax. The ability to treat functions like any other data type in Python empowers developers to create highly modular and adaptable code structures, a cornerstone for building robust and scalable applications.
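
A minimal sketch of that manual wrapping, with illustrative names rather than Reuven’s exact code:

def with_banner(func):
    # Receives another function as an argument: functions are objects
    print("--- before ---")
    func()
    print("--- after ---")

def greet():
    print("Hello from PyCon!")

with_banner(greet)  # prints the banner lines around greet()'s output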

The Power of Closures: Functions Returning Functions

Building upon the concept of first-class functions, Reuven delved into the powerful notion of closures. A closure is a function that remembers the environment in which it was created, even after the outer function has finished executing. This is achieved when an inner function is defined within an outer function, and the outer function returns this inner function. The inner function retains access to the outer function’s local variables, forming a “closure” over that environment.
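
A minimal closure sketch (illustrative, not from the tutorial) shows the inner function retaining access to the outer function’s variable:

def make_counter():
    count = 0
    def increment():
        nonlocal count  # captured from make_counter's scope
        count += 1
        return count
    return increment

counter = make_counter()
counter()  # 1
counter()  # 2 -- `count` survives even though make_counter has returned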

Lerner’s explanations made it clear that closures are a critical stepping stone to understanding how decorators work. The decorator pattern fundamentally relies on an outer function (the decorator) that takes a function as input, defines an inner “wrapper” function, and then returns this wrapper. This wrapper function “closes over” the original function and any variables from the decorator’s scope, allowing it to execute the original function while adding pre- or post-processing logic. This concept is essential for functions that need to maintain state or access context from their creation environment, paving the way for more sophisticated decorator implementations.

Implementing the Decorator Pattern Manually

Before introducing Python’s syntactic sugar for decorators, Reuven walked attendees through the manual implementation of the decorator pattern. This hands-on exercise was crucial for demystifying the @ syntax and showing precisely what happens under the hood. The manual approach involves explicitly defining a “decorator function” that accepts another function (the “decorated function”) as an argument. Inside the decorator function, a new “wrapper function” is defined. This wrapper function contains the additional logic to be executed before or after the decorated function, and it also calls the decorated function. Finally, the decorator function returns this wrapper.

The key step, as Reuven demonstrated, is then reassigning the original function’s name to the returned wrapper function. For instance, my_function = decorator(my_function). This reassignment effectively replaces the original my_function with the new, enhanced wrapper function, without changing how my_function is called elsewhere in the code. This explicit, step-by-step construction revealed the modularity and power of decorators, highlighting how they can seamlessly inject new behavior into existing functions while preserving their interfaces. Understanding this manual process is fundamental to debugging and truly mastering decorator usage.
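
Put together, the manual pattern looks roughly like this (an illustrative sketch):

def decorator(func):
    def wrapper():
        print("before")   # pre-processing logic
        result = func()   # call the original function
        print("after")    # post-processing logic
        return result
    return wrapper

def my_function():
    print("doing work")

my_function = decorator(my_function)  # the explicit reassignment step
my_function()  # prints "before", "doing work", "after"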

Python’s Syntactic Sugar: The @ Operator

Once the manual mechanics of decorators were firmly established, Reuven introduced Python’s elegant and widely adopted @ syntax. This syntactic sugar simplifies the application of decorators significantly, making code more readable and concise. Instead of the explicit reassignment, my_function = decorator(my_function), the @ symbol allows developers to place the decorator name directly above the function definition:

@decorator
def my_function():
    ...

Lerner emphasized that this @ notation is merely a convenience for the manual wrapping process discussed earlier. It performs the exact same operation of passing my_function to decorator and reassigning the result back to my_function. This clarity was vital, as many developers initially find the @ syntax magical. Reuven illustrated how this streamlined syntax enhances code readability, especially when multiple decorators are applied to a single function, or when creating custom decorators for specific tasks. The @ operator makes decorators a powerful and expressive tool in the Python developer’s toolkit, promoting a clean separation of concerns and encouraging reusable code patterns.

Practical Applications of Decorators

The tutorial progressed into a series of practical examples, showcasing the diverse utility of decorators in real-world scenarios. Reuven presented various use cases, from simple enhancements to more complex functionalities:

  • “Shouter” Decorator: A classic example where a decorator modifies the output of a function, perhaps by converting it to uppercase or adding exclamation marks. This demonstrates how decorators can alter the result returned by a function.
  • Timing Function Execution: A highly practical application involves using a decorator to measure the execution time of a function. This is invaluable for performance profiling and identifying bottlenecks in code. The decorator would record the start time, execute the function, record the end time, and then print the duration, all without cluttering the original function’s logic (a sketch follows this list).
  • Input and Output Validation: Decorators can be used to enforce constraints on function arguments or to validate the return value. For instance, a decorator could ensure that a function only receives positive integers or that its output adheres to a specific format. This promotes data integrity and reduces errors.
  • Logging and Authentication: More advanced applications include decorators for logging function calls, handling authentication checks before a function executes, or implementing caching mechanisms to store and retrieve previously computed results.
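
As one concrete illustration, a timing decorator along the lines described above might be sketched as follows (illustrative code, not the tutorial’s exact example):

import time

def timed(func):
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)  # run the original function
        elapsed = time.perf_counter() - start
        print(f"{func.__name__} took {elapsed:.6f}s")
        return result
    return wrapper

@timed
def slow_sum(n):
    return sum(range(n))

slow_sum(1_000_000)  # prints e.g. "slow_sum took 0.024816s"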

Through these varied examples, Reuven underscored that decorators are not just an academic curiosity but a powerful tool for injecting cross-cutting concerns (like logging, timing, validation) into functions in a clean, non-intrusive manner. This approach adheres to the “separation of concerns” principle, making code more modular, readable, and easier to maintain.

Decorators with Arguments and Stacking Decorators

Reuven further expanded the attendees’ understanding by demonstrating how to create decorators that accept arguments. This adds another layer of flexibility, allowing decorators to be configured at the time of their application. To achieve this, an outer function is required that takes the decorator’s arguments and then returns the actual decorator function. This creates a triple-nested function structure, where the outermost function handles arguments, the middle function is the actual decorator that takes the decorated function, and the innermost function is the wrapper.
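
A sketch of that triple-nested structure, with hypothetical names:

def repeat(times):                      # outermost: receives the decorator's arguments
    def decorator(func):                # middle: the actual decorator
        def wrapper(*args, **kwargs):   # innermost: the wrapper
            result = None
            for _ in range(times):
                result = func(*args, **kwargs)
            return result
        return wrapper
    return decorator

@repeat(times=3)
def greet():
    print("Hello!")

greet()  # prints "Hello!" three times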

He also covered the concept of “stacking decorators,” where multiple decorators are applied to a single function. When decorators are stacked, they are executed from the bottom up (closest to the function definition) to the top (furthest from the function definition). Each decorator wraps the function that results from the application of the decorator below it. This allows for the sequential application of various functionalities to a single function, building up complex behaviors from smaller, modular units. Reuven carefully explained the order of execution and how the output of one decorator serves as the input for the next, providing a clear mental model for understanding chained decorator behavior.
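
A small illustrative sketch of stacking, showing the bottom-up application order:

def exclaim(func):
    def wrapper():
        return func() + "!"
    return wrapper

def shout(func):
    def wrapper():
        return func().upper()
    return wrapper

@exclaim   # applied second, wrapping the result of @shout
@shout     # applied first, closest to the function definition
def greet():
    return "hello"

print(greet())  # "HELLO!"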

Preserving Metadata with functools.wraps

A common side effect of using decorators is the loss of the decorated function’s original metadata, such as its name (__name__), docstring (__doc__), and module (__module__). When a decorator replaces the original function with its wrapper, the metadata of the wrapper function is what becomes visible. This can complicate debugging, introspection, and documentation.

Reuven introduced functools.wraps as the standard solution to this problem. functools.wraps is itself a decorator that can be applied to the wrapper function within your custom decorator. When used, it copies the relevant metadata from the original function to the wrapper function, effectively “wrapping” the metadata along with the code.

from functools import wraps

def my_decorator(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        # ... decorator logic ...
        return func(*args, **kwargs)
    return wrapper

This simple yet crucial addition ensures that decorated functions retain their original identity and documentation, making them behave more like their undecorated counterparts. Reuven stressed the importance of using functools.wraps in all custom decorators to avoid unexpected behavior and maintain code clarity, a best practice for any Python developer working with decorators.

Extending Decorator Concepts: Classes as Decorators and Decorating Classes

Towards the end of the tutorial, Reuven touched upon more advanced decorator patterns, including the use of classes as decorators and the application of decorators to classes themselves.

  • Classes as Decorators: While functions are the most common way to define decorators, classes can also serve as decorators. This is achieved by implementing the __call__ method in the class, making instances of the class callable. The __init__ method typically takes the function to be decorated, and the __call__ method acts as the wrapper, executing the decorated function along with any additional logic. This approach can be useful for decorators that need to maintain complex state or have more intricate setup/teardown procedures (a minimal sketch follows this list).
  • Decorating Classes: Decorators can also be applied to classes, similar to how they are applied to functions. When a class is decorated, the decorator receives the class object itself as an argument. The decorator can then modify the class, for example, by adding new methods, altering existing ones, or registering the class in some way. This is often used in frameworks for tasks like dependency injection, ORM mapping, or automatically adding mixins.
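
A minimal sketch of the class-as-decorator pattern described above (illustrative names, not the tutorial’s exact code):

class CountCalls:
    def __init__(self, func):
        self.func = func   # __init__ receives the decorated function
        self.calls = 0     # state lives on the instance

    def __call__(self, *args, **kwargs):
        self.calls += 1    # __call__ acts as the wrapper
        print(f"call #{self.calls} to {self.func.__name__}")
        return self.func(*args, **kwargs)

@CountCalls
def greet():
    print("Hello!")

greet()  # call #1 to greet
greet()  # call #2 to greet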

Reuven’s discussion of these more advanced scenarios demonstrated the full breadth of decorator applicability, showcasing how this powerful feature can be adapted to various architectural patterns and design needs within Python programming. This segment provided a glimpse into how decorators extend beyond simple function wrapping to influence the structure and behavior of entire classes, offering a flexible mechanism for meta-programming.

Hashtags: #Python #Decorators #PyConUS2024 #Programming #SoftwareDevelopment #Functions #Closures #PythonTricks #CodeQuality #ReuvenMLerner #LernerConsulting #LernerPython

Onyxia: A User-Centric Interface for Data Scientists in the Cloud Age


Introduction

The team from INSEE presents Onyxia, an open-source, Kubernetes-based platform designed to offer flexible, collaborative, and powerful cloud environments for data scientists.

Rethinking Data Science Infrastructure

Traditional local development faces issues like configuration divergence, data duplication, and limited compute resources. Onyxia solves these by offering isolated namespaces, integrated object storage, and a seamless user interface that abstracts Kubernetes and S3 complexities.

Versatile Deployment

With a few clicks, users can launch preconfigured environments — including Jupyter notebooks, VS Code, Postgres, and MLflow — empowering fast innovation without heavy IT overhead. Organizations can extend Onyxia by adding custom services, ensuring future-proof, evolvable data labs.

Success Stories

Adopted across French universities and research labs, Onyxia enables students and professionals alike to work in secure, scalable, and fully-featured environments without managing infrastructure manually.

Conclusion

Onyxia democratizes access to powerful cloud tools for data scientists, streamlining collaboration and fostering innovation.

Renovate/Dependabot: How to Take Control of Dependency Updates

At Devoxx France 2024, held in April at the Palais des Congrès in Paris, Jean-Philippe Baconnais and Lise Quesnel, consultants at Zenika, presented a 30-minute talk titled Renovate/Dependabot, ou comment reprendre le contrôle sur la mise à jour de ses dépendances (“Renovate/Dependabot, or how to take back control of your dependency updates”). The session explored how tools like Dependabot and Renovate automate dependency updates, reducing the tedious and error-prone manual process. Through a demo and lessons from open-source and client projects, they shared practical tips for implementing Renovate, highlighting its benefits and pitfalls. 🚀

The Pain of Dependency Updates

The talk opened with a relatable skit: Lise, working on a side project (a simple Angular 6 app showcasing women in tech), admitted to neglecting updates due to the effort involved. Jean-Philippe emphasized that this is a common issue across projects, especially in microservice architectures with numerous components. Updating dependencies is critical for:

  • Security: Applying patches to reduce exploitable vulnerabilities.
  • Features: Accessing new functionalities.
  • Bug Fixes: Benefiting from the latest corrections.
  • Performance: Leveraging optimizations.
  • Attractiveness: Using modern tech stacks (e.g., Node 20 vs. Node 8) to appeal to developers.

However, the process is tedious, repetitive, and complex due to transitive dependencies (e.g., a median of 683 for NPM projects) and cascading updates, where one update triggers others.

Automating with Dependabot and Renovate

Dependabot (acquired by GitHub) and Renovate (from Mend) address this by scanning project files (e.g., package.json, Maven POM, Dockerfiles) and opening pull requests (PRs) or merge requests (MRs) for available updates. These tools:

  • Check registries (NPM, Maven Central, Docker Hub) for new versions.
  • Provide visibility into dependency status.
  • Save time by automating version checks, especially in microservice setups.
  • Enhance reactivity, critical for applying security patches quickly.

Setting Up the Tools

Dependabot: Configured via a dependabot.yml file, specifying ecosystems (e.g., NPM), directories, and update schedules (e.g., weekly). On GitHub, it integrates natively via project settings. GitLab users can use a similar approach.

# dependabot.yml
version: 2
updates:
  - package-ecosystem: "npm"
    directory: "/"
    schedule:
      interval: "weekly"

Renovate: Configured via a renovate.json file, extending default presets. It supports GitHub and GitLab via apps or CI/CD pipelines (e.g., GitLab CI with a Docker image). For self-hosted setups, Renovate can run as a Docker container or Kubernetes CronJob.

// renovate.json
{
  "extends": [
    "config:recommended"
  ]
}

In their demo, Jean-Philippe and Lise showcased Renovate on a GitLab project, using a .gitlab-ci.yml pipeline to run Renovate on a schedule, creating MRs for updates like rxjs (from 6.3.2 to 6.6.7).

Customizing Renovate

Renovate’s strength lies in its flexibility through presets and custom configurations:

  • Presets: Predefined rules (e.g., npm:unpublishSafe waits 3 days before proposing updates). Presets can extend others, forming a hierarchy (e.g., config:recommended extends base presets).
  • Custom Presets: Organizations can define reusable configs in a dedicated repository (e.g., renovate-config) and apply them across projects:

// renovate-config/default.json
{
  "extends": [
    "config:recommended",
    ":npm"
  ]
}

  • Grouping Updates: Combine related updates (e.g., all ESLint packages) using packageRules or presets like group:recommendedLinters to reduce PR noise:

{
  "packageRules": [
    {
      "matchPackagePatterns": ["^eslint"],
      "groupName": "eslint packages"
    }
  ]
}

  • Dependency Dashboard: An issue tracking open, rate-limited, or ignored MRs, activated via the dependencyDashboard field or preset.

Going Further: Automerge and Beyond

To streamline updates, Renovate supports automerge, automatically merging MRs when the pipeline passes, which relies on robust tests. Options include the following (a configuration sketch follows the list):

  • automerge: true for all updates.
  • automergeType: "pr" or strategy for specific behaviors.
  • Presets like automerge:patch for patch updates only.
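
Combining these options, a cautious configuration that automerges only patch updates might look like the sketch below (option names follow Renovate’s documented configuration; treat the exact combination as an assumption, not the speakers’ example):

{
  "extends": ["config:recommended"],
  "packageRules": [
    {
      "matchUpdateTypes": ["patch"],
      "automerge": true
    }
  ]
}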

The demo showed an automerged rxjs update, triggering a new release (v1.2.1) via semantic-release, tagged, and deployed to Google Cloud Run. A failed Angular update (due to a major version gap) demonstrated how failing tests block automerge, ensuring safety.

Renovate can also update itself and its configuration (e.g., deprecated fields) via the config:migration preset, creating MRs for self-updates.

Lessons Learned and Recommendations

From their experiences, Jean-Philippe and Lise shared key tips:

  • Manage PR Overload: Limit concurrent PRs (e.g., prConcurrentLimit: 5) and group related updates to reduce noise.
  • Use Schedules: Run Renovate at off-peak times (e.g., nightly) to avoid overloading CI runners and impacting production deployments.
  • Ensure Robust Tests: Automerge relies on trustworthy tests; weak test coverage can lead to broken builds.
  • Balance Frequency: Frequent runs catch updates quickly but risk conflicts; infrequent runs may miss critical patches.
  • Monitor Resource Usage: Excessive pipelines can strain runners and increase costs in autoscaling environments (e.g., cloud platforms).
  • Handle Transitive Dependencies: Renovate manages them like direct dependencies, but cascading updates require careful review.
  • Support Diverse Ecosystems: Renovate works well with Java (e.g., Spring Boot, Quarkus), Scala, and NPM, with grouping to manage high-dependency ecosystems like NPM.
  • Internal Repositories: Configure Renovate to scan private registries by specifying URLs.
  • Major Updates: Use presets to stage major updates incrementally, avoiding risky automerge for breaking changes.

Takeaways

Jean-Philippe and Lise’s talk highlighted how Dependabot and Renovate transform dependency management from a chore to a streamlined process. Their demo and practical advice showed how Renovate’s flexibility—via presets, automerge, and dashboards—empowers teams to stay secure and up-to-date, especially in complex microservice environments. However, success requires careful configuration, robust testing, and resource management to avoid overwhelming teams or infrastructure. 🌟

[DevoxxUK2024] Project Leyden: Capturing Lightning in a Bottle by Per Minborg

Per Minborg, a seasoned member of Oracle’s Core Library team, delivered an insightful session at DevoxxUK2024, unveiling the ambitions of Project Leyden, a transformative initiative to enhance Java application performance. Focused on slashing startup time, accelerating warmup, and reducing memory footprint, Per’s talk explores how Java can evolve to meet modern demands while preserving its dynamic nature. By strategically shifting computations to optimize execution, Project Leyden introduces innovative techniques like condensers and enhanced Class Data Sharing (CDS). This session provides a roadmap for developers seeking to harness Java’s potential in high-performance environments, balancing flexibility with efficiency.

The Vision of Project Leyden

Per begins by outlining the core objectives of Project Leyden: improving startup time, warmup time, and memory footprint. Startup time, the duration from launching an application to its first meaningful output (e.g., a “Hello World” or serving a web request), is critical for user experience. Warmup time, the period until an application reaches peak performance through JIT compilation, can hinder responsiveness in dynamic systems. Footprint, encompassing memory and storage use, impacts scalability, especially in cloud environments. Per emphasizes that the best approach is to eliminate unnecessary computations, but when that’s not feasible, shifting them temporally—either earlier to compile time or later to runtime—can yield significant gains. This philosophy underpins Leyden’s strategy to refine Java’s execution model.

Shifting Computations for Efficiency

A cornerstone of Project Leyden is the concept of temporal computation shifting. Per explains that Java’s dynamic nature—encompassing dynamic class loading, JIT compilation, and runtime optimizations—enables expressive programming but can inflate startup and warmup times. By moving computations to build time, such as through constant folding or ahead-of-time (AOT) compilation, Leyden reduces runtime overhead. Alternatively, lazy evaluation postpones non-critical tasks, streamlining startup. Per introduces condensers, a novel mechanism that transforms program representations by shifting computations earlier, adding metadata, or imposing constraints on dynamism. Condensers are composable, meaning-preserving, and selectable, allowing developers to tailor optimizations based on application needs. For instance, a condenser might precompile lambda expressions into bytecode at build time, slashing runtime costs.

Enhancing Class Data Sharing (CDS)

Per delves into Class Data Sharing (CDS), a long-standing Java feature that Project Leyden enhances to achieve dramatic performance boosts. CDS allows pre-initialized JDK classes to be stored in a file, bypassing costly class loading during startup. With CDS++, Leyden extends this to include application classes, compiled code, and resolved constant pool references. Per shares compelling benchmarks: a test compiling 100 small Java files achieved a 2x startup improvement, while an XML parsing workload saw an 8x boost. For the Spring Pet Clinic benchmark, Leyden’s optimizations, including early class loading and cached compiled code, yielded up to 4x faster startup. These gains stem from a training run approach, where a representative execution gathers profiling data to inform optimizations, ensuring compatibility across platforms.

Balancing Dynamism and Performance

Java’s dynamism—encompassing dynamic typing, class loading, and reflection—empowers developers but complicates optimization. Per proposes selective constraints to balance this trade-off. For example, developers can restrict dynamic class loading for specific modules, enabling aggressive optimizations without sacrificing Java’s flexibility. The stable value feature, initially part of Leyden but now a standalone JEP, allows delayed initialization of final fields while maintaining performance akin to compile-time constants. Per illustrates this with a Fibonacci computation example, where memoization using stable values drastically reduces recursive overhead. By offering a “mixer board” of concessions, Leyden empowers developers to fine-tune performance, ensuring compatibility and preserving program semantics across diverse use cases.
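
The sketch below illustrates the memoization idea only, using a plain HashMap rather than the StableValue API itself (whose exact shape is still evolving under its own JEP):

import java.util.HashMap;
import java.util.Map;

public class Fib {
    private static final Map<Integer, Long> CACHE = new HashMap<>();

    static long fib(int n) {
        if (n <= 1) return n;
        Long cached = CACHE.get(n);   // reuse earlier results
        if (cached != null) return cached;
        long value = fib(n - 1) + fib(n - 2);
        CACHE.put(n, value);
        return value;
    }

    public static void main(String[] args) {
        System.out.println(fib(50));  // fast thanks to memoization
    }
}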


[DevoxxGR2024] Small Steps Are the Fastest Way Forward: Navigating Chaos in Software Development

Sander Hoogendoorn, CTO at iBOOD, delivered an engaging and dynamic talk at Devoxx Greece 2024, addressing the challenges of software development in a rapidly changing world. Drawing from his extensive experience as a programmer, architect, and leader, Sander explored how organizations can overcome technical debt and the innovator’s dilemma by embracing continuous experimentation, small teams, and short delivery cycles. His narrative, peppered with real-world anecdotes, offered practical strategies for navigating complexity and fostering innovation in a post-agile landscape.

Understanding Technical Debt and Quality

Sander opened by tackling the elusive concept of software quality, contrasting it with tangible products like coffee or cars, where higher quality correlates with higher cost. In software, quality—encompassing maintainability, testability, and reliability—is harder to quantify and often lacks a direct price relationship. He introduced Ward Cunningham’s concept of technical debt, where initial shortcuts accelerate development but, if unaddressed, can cripple organizations. Sander shared an example from an insurance company with 18 million lines of COBOL and 12 million lines of Java, where outdated code and retiring developers created a maintenance nightmare. Similarly, at iBOOD, a patchwork of systems led to “technical death,” where maintenance consumed all resources, stifling innovation.

To mitigate technical debt, Sander advocated for continuous refactoring as part of daily work, rather than a separate task requiring approval. He emphasized finding a balance between quality and cost, tailored to the organization’s goals—whether building a quick mobile app or a long-lasting banking system.

The Innovator’s Dilemma and Continuous Renovation

Sander introduced the innovator’s dilemma, where successful products reach a saturation point, and new entrants with innovative technologies disrupt the market. He recounted his experience at a company that pioneered smart thermostats but failed to reinvent itself, leading to its acquisition and dissolution. To avoid this fate, organizations must operate in “continuous renovation mode,” maintaining existing systems while incrementally building new features. This approach, inspired by John Gall’s law—that complex systems evolve from simple, working ones—requires small, iterative steps rather than large-scale rebuilds.

At iBOOD, Sander implemented this by allocating 70% of resources to innovation and 30% to maintenance, ensuring the “shop stays open” while progressing toward strategic goals. He emphasized the importance of defining a clear “dot on the horizon,” such as iBOOD’s ambition to become Europe’s leading deal site, to guide these efforts.

Navigating Complexity with the Cynefin Framework

To navigate the chaotic and complex nature of modern software development, Sander introduced the Cynefin framework, which categorizes problems into clear, complicated, complex, and chaotic zones. Most software projects reside in the complex zone, where no best practices exist, and experimentation is essential. He cautioned against treating complex problems as complicated, citing failed attempts at iBOOD’s insurance client to rebuild systems from scratch. Instead, organizations should run small experiments, accepting the risk of failure as a path to learning.

Sander illustrated this with iBOOD’s decision-making process, where a cross-functional team evaluates ideas based on their alignment with strategic goals, feasibility, and size. Ideas too large are broken into smaller pieces, ensuring manageable experiments that deliver quick feedback.

Delivering Features in Short Cycles

Sander argued that traditional project-based approaches and even Scrum’s sprint model are outdated in a world demanding rapid iteration. He advocated for continuous delivery, where features are deployed multiple times daily, minimizing dependencies and enabling immediate feedback. At iBOOD, features are released in basic versions, refined based on business input, and prioritized over less critical tasks. This approach, supported by automated CI/CD pipelines and extensive testing, ensures quality is built into the process, reducing reliance on manual inspections.

He shared iBOOD’s pipeline, which includes unit tests, static code analysis, and production testing, allowing developers to code with confidence. By breaking features into small, independent services, iBOOD achieves flexibility and resilience, avoiding the pitfalls of monolithic systems.

Empowering Autonomous Micro-Teams

Finally, Sander addressed the human element of software development, arguing that the team, not the individual, is the smallest unit of delivery. He advocated for autonomous “micro-teams” that self-organize around tasks, drawing an analogy to jazz ensembles where musicians form sub-groups based on skills. At iBOOD, developers choose their tasks and collaborators, fostering learning and flexibility. This autonomy, while initially uncomfortable for some, encourages ownership and innovation.

Sander emphasized minimizing rules to promote critical thinking, citing an Amsterdam experiment where removing traffic signs improved road safety through communication. By eliminating Scrum rituals like sprints and retrospectives, iBOOD’s teams focus on solving one problem daily, enhancing efficiency and morale.

Conclusion

Sander Hoogendoorn’s talk at Devoxx Greece 2024 offered a refreshing perspective on thriving in software development’s chaotic landscape. By addressing technical debt, embracing the innovator’s dilemma, and leveraging the Cynefin framework, organizations can navigate complexity through small, experimental steps. Continuous delivery and autonomous micro-teams further empower teams to innovate rapidly and sustainably. Sander’s practical insights, grounded in his leadership at iBOOD, provide a compelling blueprint for organizations seeking to evolve in a post-agile world.


[DevoxxFR 2024] Debugging Your Salary: Winning Strategies for Successful Negotiation

At Devoxx France 2024, Shirley Almosni Chiche, an independent IT recruiter and career agent, delivered a dynamic session titled “Debuggez votre salaire ! Mes stratégies gagnantes pour réussir sa négociation salariale” (“Debug your salary! My winning strategies for a successful salary negotiation”). With over a decade of recruitment experience, Shirley unpacked the complexities of salary negotiation, offering actionable strategies to overcome common obstacles. Through humor, personas, and real-world insights, she empowered developers to approach salary discussions with confidence and preparation, transforming a daunting process into a strategic opportunity.

Shirley opened with a candid acknowledgment: salary discussions are fraught with tension, myths, and frustrations. Drawing from her role at Build RH, her recruitment firm, she likened salary negotiation to a high-stakes race, where candidates endure lengthy recruitment processes only to face disappointing offers. Common employer excuses—“we must follow the salary grid,” “we can’t pay more than existing staff,” or “the budget is tight”—often derail negotiations, leaving candidates feeling undervalued.

To frame her approach, Shirley introduced six “bugs” that justify low salaries, each paired with a persona representing typical employer archetypes. These included the rigid “Big Corp” manager enforcing salary grids, the team-focused “Didier Deschamps” avoiding pay disparities, and the budget-conscious “François Damiens” citing financial constraints. Other personas, like the overly technical “Elon” scrutinizing code, the relentless negotiator “Patrick,” and the discriminatory “Hubert,” highlighted diverse challenges candidates face.

Shirley shared market insights, noting a 2023–2024 tech slowdown with 200,000 global layoffs, reduced venture funding, and a shift toward cost-conscious industries like banking and retail. This context, she argued, demands strategic preparation to secure fair compensation.

Countering the Bugs: Tactical Responses

For each bug, Shirley offered counter-arguments rooted in empathy and alignment with employer priorities. Against the salary grid, she advised exploring non-salary benefits like profit-sharing or PERCO plans, common in large firms. Using a “mirror empathy” tactic, candidates can frame salary needs in the employer’s language—e.g., linking pay to productivity. Challenging outdated grids by highlighting market research or internal surveys also strengthens arguments.

For the “Didier Deschamps” persona, Shirley suggested emphasizing unique skills (e.g., full-stack expertise in a backend-heavy team) to justify higher pay without disrupting team cohesion. Proposing contributions like speaking at conferences or aiding recruitment can further demonstrate value. She shared a success story where a candidate engaged the team directly, securing a better offer through collective dialogue.

When facing “François Damiens” and financial constraints, Shirley recommended focusing on risk mitigation. For startups, candidates can negotiate stock options or bonuses, arguing that their expertise accelerates product delivery, saving recruitment costs. Highlighting polyvalence—combining skills like development, data, and security—positions candidates as multi-role assets, justifying premium pay.

For technical critiques from “Elon,” Shirley urged immediate feedback post-interview to address perceived weaknesses. If gaps exist, candidates should negotiate training opportunities to ensure long-term fit. Pointing out evaluation mismatches (e.g., testing frontend skills for a backend role) can redirect discussions to relevant strengths.

Against “Patrick,” the negotiator, Shirley advised setting firm boundaries—two rounds of negotiation max—to avoid endless haggling. Highlighting project flaws tactfully and aligning expertise with business goals can shift the dynamic from adversarial to collaborative.

Addressing Discrimination: A Sobering Reality

Shirley tackled the “Hubert” persona, representing discriminatory practices, with nuance. Beyond gender pay gaps, she highlighted biases against older candidates, neurodivergent individuals, those with disabilities, and career switchers. Citing her mother’s experience as a Maghrebi woman facing a 20% pay cut, Shirley acknowledged the harsh realities for marginalized groups.

Rather than dismissing discriminatory offers outright, she advised viewing them as career stepping stones. Candidates can leverage such roles for training or experience, using “mirror empathy” to negotiate non-salary benefits like remote work or learning opportunities. While acknowledging privilege, Shirley urged resilience, encouraging candidates to “lend an ear to learning” and rebound from setbacks.

Mastering Preparation: Anticipating the Negotiation

Shirley emphasized proactive preparation as the cornerstone of successful negotiation. Understanding one’s relationship with money—shaped by upbringing, traumas, or social pressures—is critical. Some candidates undervalue themselves due to impostor syndrome, while others see salary as a status symbol or family lifeline. Recognizing these drivers informs negotiation strategies.

She outlined key preparation steps:

  • Job Selection: Target roles within your expertise and in high-paying sectors (e.g., cloud, security) for better leverage. Data roles can yield 7–13% salary gains.
  • Market Research: Use resources like Choose Your Boss or APEC barometers to benchmark salaries. Shirley noted Île-de-France salaries exceed regional ones by 10–15K, with a 70K ceiling for seniors in 2023.
  • Company Analysis: Assess financial health via LinkedIn or job ad longevity. Long-posted roles signal negotiation flexibility.
  • Recruiter Engagement: Treat initial recruiter calls as data-gathering opportunities, probing team culture, hiring urgency, and technical expectations.
  • Value Proposition: Highlight impact—product roadmaps, technical migrations, or team mentoring—early in interviews to set a premium tone.

Shirley cautioned against oversharing personal financial details (e.g., current salary or expenses) during salary discussions. Instead, provide a specific range (e.g., “around 72K”) based on market data and role demands. Mentioning parallel offers tactfully can spur employers to act swiftly.

Sealing the Deal: Confidence and Coherence

In the final negotiation phase, Shirley advised a 48-hour reflection period after receiving an offer, consulting trusted peers for perspective. Counteroffers should be fact-based, reiterating interview insights and using empathetic language. Timing matters—avoid Mondays or late Fridays for discussions.

Citing APEC data, Shirley noted that 80% of executives who negotiate are satisfied, with 65% securing their target salary or higher. She urged candidates to remain consistent, avoiding last-minute demands that erode trust. Beyond salary, consider workplace culture, inclusion, and work-life balance to ensure long-term fit.

Shirley closed with a rallying call: don’t undervalue your skills or settle for less. By blending preparation, empathy, and resilience, candidates can debug their salary negotiations and secure rewarding outcomes.

Hashtags: #SalaryNegotiation #DevoxxFrance #CareerDevelopment #TechRecruitment

[PyConUS 2024] How Python Harnesses Rust through PyO3

David Hewitt, a key contributor to the PyO3 library, delivered a comprehensive session at PyConUS 2024, unraveling the mechanics of integrating Rust with Python. As a Python developer for over a decade and a lead maintainer of PyO3, David provided a detailed exploration of how Rust’s power enhances Python’s ecosystem, focusing on PyO3’s role in bridging the two languages. His talk traced the journey of a Python function call to Rust code, offering insights into performance, security, and concurrency, while remaining accessible to those unfamiliar with Rust.

Why Rust in Python?

David began by outlining the motivations for combining Rust with Python, emphasizing Rust’s reliability, performance, and security. Unlike Python, where exceptions can arise unexpectedly, Rust’s structured error handling via pattern matching ensures predictable behavior, reducing debugging challenges. Performance-wise, Rust’s compiled nature offers significant speedups, as seen in libraries like Pydantic, Polars, and Ruff. David highlighted Rust’s security advantages, noting its memory safety features prevent common vulnerabilities found in C or C++, making it a preferred choice for companies like Microsoft and Google. Additionally, Rust’s concurrency model avoids data races, aligning well with Python’s evolving threading capabilities, such as sub-interpreters and free-threading in Python 3.13.

PyO3: Bridging Python and Rust

Central to David’s talk was PyO3, a Rust library that facilitates seamless integration with Python. PyO3 allows developers to write Rust code that runs within a Python program or vice versa, using procedural macros to generate Python-compatible modules. David explained how tools like Maturin and setup-tools-rust simplify project setup, enabling developers to compile Rust code into native libraries that Python imports like standard modules. He emphasized PyO3’s goal of maintaining a low barrier to entry, with comprehensive documentation and a developer guide to assist Python programmers venturing into Rust, ensuring a smooth transition across languages.

Tracing a Function Call

David took the audience on a technical journey, tracing a Python function call through PyO3 to Rust code. Using a simple word-counting function as an example, he showed how a Rust implementation, marked with PyO3’s #[pyfunction] attribute, mirrors Python’s structure while offering performance gains of 2–4x. He dissected the Python interpreter’s bytecode, revealing how the CALL instruction invokes PyObject_Vectorcall, which resolves to a Rust function pointer via PyO3’s generated code. This “trampoline” handles critical safety measures, such as preventing Rust panics from crashing the Python interpreter and managing the Global Interpreter Lock (GIL) for safe concurrency. David’s step-by-step breakdown clarified how arguments are passed and converted, ensuring seamless execution.
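
A minimal PyO3 module in that spirit might be sketched as follows (module and function names are hypothetical, and exact signatures vary across PyO3 versions):

use pyo3::prelude::*;

// Count whitespace-separated words; callable from Python.
#[pyfunction]
fn count_words(text: &str) -> usize {
    text.split_whitespace().count()
}

// The module Python imports, e.g. `import fastwords`.
#[pymodule]
fn fastwords(m: &Bound<'_, PyModule>) -> PyResult<()> {
    m.add_function(wrap_pyfunction!(count_words, m)?)?;
    Ok(())
}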

Future of Rust in Python’s Ecosystem

Concluding, David reflected on Rust’s growing adoption in Python, citing over 350 projects monthly uploading Rust code to PyPI, with downloads exceeding 3 billion annually. He predicted that Rust could rival C/C++ in the Python ecosystem within 2–4 years, driven by its reliability and performance. Addressing concurrency, David discussed how PyO3 could adapt to Python’s sub-interpreters and free-threading, potentially enforcing immutability to simplify multithreaded interactions. His vision for PyO3 is to enhance Python’s strengths without replacing it, fostering a symbiotic relationship that empowers developers to leverage Rust’s precision where needed.

Hashtags: #Rust #PyO3 #Python #Performance #Security #PyConUS2024 #DavidHewitt #Pydantic #Polars #Ruff

[DevoxxUK2024] Productivity is Messing Around and Having Fun by Trisha Gee & Holly Cummins

In their DevoxxUK2024 talk, Trisha Gee (Gradle) and Holly Cummins (Red Hat, Quarkus) explore developer productivity through the lens of joy and play, challenging conventional metrics like lines of code. They argue that developer satisfaction drives business success, drawing on Fred Brooks’ The Mythical Man-Month to highlight why programmers enjoy crafting, solving puzzles, and learning. However, they note that developers spend only ~32% of their time coding, with the rest consumed by toil (e.g., waiting for builds, context-switching).

The speakers critique metrics like lines of code, citing examples where incentivizing code volume led to bloated, unmaintainable codebases (e.g., ASCII art comments). They warn against AI tools like Copilot generating verbose, unnecessary code (e.g., redundant getters/setters in Quarkus), which increases technical debt. Instead, they advocate for frameworks like Quarkus that reduce boilerplate through build-time bytecode inspection, enabling concise, expressive code.

Trisha and Holly introduce the SPACE framework (Satisfaction, Performance, Activity, Communication, Efficiency) as a holistic approach to measuring productivity, emphasizing developer well-being and flow over raw output. They highlight the importance of mental space for creativity, citing the brain’s default mode network, activated during low-stimulation activities like showering, running, or knitting. They encourage embracing “boredom” and play, supported by research showing happier developers are more productive. The talk critiques flawed metrics (e.g., McKinsey’s) and warns against management misconceptions, like assuming developers are replaceable by AI.

Links: YouTube, LinkedIn

[DevoxxFR 2024] Staff Engineer: A Vital Role in Technical Leadership

My esteemed former colleague François Nollen, a technical expert at SNCF Connect & Tech, delivered an engaging talk at Devoxx France 2024 on the role of the Staff Engineer. Often overshadowed by the more familiar Engineering Manager position, the Staff Engineer role is gaining traction as a critical path for technical leadership without management responsibilities. François shared his journey and insights into how Staff Engineers operate at SNCF Connect, offering a blueprint for developers aspiring to influence organizations at scale. This post explores the role’s responsibilities, its impact, and its relevance in modern tech organizations.

Defining the Staff Engineer Role

The Staff Engineer role, rooted in Silicon Valley’s tech giants, represents a senior technical contributor who drives impact across multiple teams without managing them directly. François described Staff Engineers as versatile problem-solvers, blending deep technical expertise with strong collaboration skills. Unlike Engineering Managers, who focus on team management, Staff Engineers tackle complex technical challenges, set standards, and foster innovation. At SNCF Connect, they are called “Technical Expertise Referents,” reflecting their role in guiding technical strategy and mentoring teams.

A Day in the Life

Staff Engineers at SNCF Connect enjoy significant autonomy, with no fixed daily tasks. François outlined a typical day, which begins with monitoring communication channels like Slack to identify team challenges. They contribute code, conduct reviews, and drive strategic initiatives, such as defining best practices or evaluating technical risks. Unlike team-bound developers, Staff Engineers operate at an organizational level, collaborating with engineering, HR, and communication teams to align technical and business goals. This broad scope requires a balance of technical depth and interpersonal finesse.

Impact and Collaboration

The influence of a Staff Engineer stems from their expertise and ability to inspire trust, not formal authority. François highlighted their role in unblocking teams, accelerating projects, and shaping technical strategy alongside Principal Engineers. At SNCF Connect, Staff Engineers work as a collective, amplifying their impact on cross-cutting initiatives like DevOps and continuous delivery. This collaborative approach contrasts with traditional roles like architects, who may be disconnected from delivery, making Staff Engineers integral to dynamic, agile environments.

Is It Right for You?

François posed a reflective question: is the Staff Engineer role suited for everyone? It demands extensive technical experience, organizational awareness, and strong communication skills. Developers who thrive on solving complex problems, mentoring others, and driving systemic change without managing teams may find this path rewarding. For organizations, Staff Engineers offer a framework to retain and empower experienced developers, avoiding the pitfalls of promoting them into unsuitable management roles, as per the Peter Principle.

Hashtags: #StaffEngineer #TechnicalLeadership #DevoxxFrance #FrançoisNollen #SNCFConnect #Engineering #Agile

[DevoxxFR 2024] Going AOT: Mastering GraalVM for Java Applications

Alina Yurenko, a developer advocate at Oracle Labs, captivated audiences at Devoxx France 2024 with her deep dive into GraalVM’s ahead-of-time (AOT) compilation for Java applications. With a passion for open-source and community engagement, Alina explored how GraalVM’s Native Image transforms Java applications into compact, high-performance native executables, ideal for cloud environments. Through demos and practical guidance, she addressed building, testing, and optimizing GraalVM applications, debunking myths and showcasing its potential. This post unpacks Alina’s insights, offering a roadmap for adopting GraalVM in production.

GraalVM and Native Image Fundamentals

Alina introduced GraalVM as both a high-performance JDK and a platform for AOT compilation via Native Image. Unlike traditional JVMs, GraalVM allows developers to run Java applications conventionally or compile them into standalone native executables that don’t require a JVM at runtime. This dual capability, built on over a decade of research at Oracle Labs, offers Java’s developer productivity alongside native performance benefits like faster startup and lower resource usage. Native Image, GA since 2019, analyzes an application’s bytecode at build time, identifying reachable code and dependencies to produce a compact executable, eliminating unused code and pre-populating the heap for instant startup.

The closed-world assumption underpins this process: all application behavior must be known at build time, unlike the JVM’s dynamic runtime optimizations. This enables aggressive optimizations but requires careful handling of dynamic features like reflection. Alina demonstrated this with a Spring Boot application, which started in 1.3 seconds on GraalVM’s JVM but just 47 milliseconds as a native executable, highlighting its suitability for serverless and microservices where startup speed is critical.

Benefits Beyond Startup Speed

While fast startup is a hallmark of Native Image, Alina emphasized its broader advantages, especially for long-running applications. By shifting compilation, class loading, and optimization to build time, Native Image reduces runtime CPU and memory usage, offering predictable performance without the JVM’s warm-up phase. A Spring Pet Clinic benchmark showed Native Image matching or slightly surpassing the JVM’s C2 compiler in peak throughput, a testament to two years of optimization efforts. For memory-constrained environments, Native Image excels, delivering up to 2–3x higher throughput per memory unit at heap sizes of 512MB to 1GB, as seen in throughput density charts.

Security is another benefit. By excluding unused code, Native Image reduces the attack surface, and dynamic features like reflection require explicit allow-lists, enhancing control. Alina also noted compatibility with modern Java frameworks like Spring Boot, Micronaut, and Quarkus, which integrate Native Image support, and a community-maintained list of compatible libraries on the GraalVM website, ensuring broad ecosystem support.

Building and Testing GraalVM Applications

Alina provided a practical guide for building and testing GraalVM applications. Using a Spring Boot demo, she showcased the Native Maven plugin, which streamlines compilation. The build process, while resource-intensive for large applications, typically stays within 2GB of memory for smaller apps, making it viable on CI/CD systems like GitHub Actions. She recommended developing and testing on the JVM, compiling to Native Image only when adding dependencies or in CI/CD pipelines, to balance efficiency and validation.

Dynamic features like reflection pose challenges, but Alina outlined solutions: predictable reflection works out-of-the-box, while complex cases may require JSON configuration files, often provided by frameworks or libraries like H2. A centralized GitHub repository hosts configs for popular libraries, and a tracing agent can generate configs automatically by running the app on the JVM. Testing support is robust, with JUnit and framework-specific tools like Micronaut’s test resources enabling integration tests in Native mode, often leveraging Testcontainers.
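
As a rough illustration, the tracing agent is enabled with a standard JVM flag (the paths and artifact names here are hypothetical):

java -agentlib:native-image-agent=config-output-dir=src/main/resources/META-INF/native-image \
     -jar target/app.jar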

Optimizing and Future Directions

To achieve peak performance, Alina recommended profile-guided optimizations (PGO), where an instrumented executable collects runtime profiles to inform a final build, combining AOT’s predictability with JVM-like insights. A built-in ML model predicts profiles for simpler scenarios, offering 6–8% performance gains. Other optimizations include using the G1 garbage collector, enabling machine-specific flags, or building static images for minimal container sizes with distroless images.
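
The PGO workflow is a two-pass build; sketched below with a hypothetical application name (flag spellings follow GraalVM’s documented options, but treat the exact invocation as an assumption):

native-image --pgo-instrument -jar target/app.jar -o app-instrumented
./app-instrumented          # run a representative workload; profiles land in default.iprof
native-image --pgo=default.iprof -jar target/app.jar -o app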

Looking ahead, Alina highlighted two ambitious GraalVM projects: Layered Native Images, which pre-compile base images (e.g., JDK or Spring) to reduce build times and resource usage, and GraalOS, a platform for deploying native images without containers, eliminating container overhead. Demos of a LangChain for Java app and a GitHub crawler using Java 22 features showcased GraalVM’s versatility, running seamlessly as native executables. Alina’s session underscored GraalVM’s transformative potential, urging developers to explore its capabilities for modern Java applications.


Hashtags: #GraalVM #NativeImage #Java #AOT #AlinaYurenko #DevoxxFR2024