Posts Tagged ‘devops’
[DevoxxFR2025] Alert, Everything’s Burning! Mastering Technical Incidents
In the fast-paced world of technology, technical incidents are an unavoidable reality. When systems fail, the ability to quickly detect, diagnose, and resolve issues is paramount to minimizing impact on users and the business. Alexis Chotard, Laurent Leca, and Luc Chmielowski from PayFit shared their invaluable experience and strategies for mastering technical incidents, even as a rapidly scaling “unicorn” company. Their presentation went beyond just technical troubleshooting, delving into the crucial aspects of defining and evaluating incidents, effective communication, product-focused response, building organizational resilience, managing on-call duties, and transforming crises into learning opportunities through structured post-mortems.
Defining and Responding to Incidents
The first step in mastering incidents is having a clear understanding of what constitutes an incident and its severity. Alexis, Laurent, and Luc discussed how PayFit defines and categorizes technical incidents based on their impact on users and business operations. This often involves established severity levels and clear criteria for escalation. Their approach emphasized a rapid and coordinated response involving not only technical teams but also product and communication stakeholders to ensure a holistic approach. They highlighted the importance of clear internal and external communication during an incident, keeping relevant parties informed about the status, impact, and expected resolution time. This transparency helps manage expectations and build trust during challenging situations.
Technical Resolution and Product Focus
While quick technical mitigation to restore service is the immediate priority during an incident, the PayFit team stressed the importance of a product-focused approach. This involves understanding the user impact of the incident and prioritizing resolution steps that minimize disruption for customers. They discussed strategies for effective troubleshooting, leveraging monitoring and logging tools to quickly identify the root cause. Beyond immediate fixes, they highlighted the need to address the underlying issues to prevent recurrence. This often involves implementing technical debt reduction measures or improving system resilience as a direct outcome of incident analysis. Their experience showed that a strong collaboration between engineering and product teams is essential for navigating incidents effectively and ensuring that the user experience remains a central focus.
Organizational Resilience and Learning
Mastering incidents at scale requires building both technical and organizational resilience. The presenters discussed how PayFit has evolved its on-call rotation models to ensure adequate coverage while maintaining a healthy work-life balance for engineers. They touched upon the importance of automation in detecting and mitigating incidents faster. A core tenet of their approach was the implementation of structured post-mortems (or retrospectives) after every significant incident. These post-mortems are blameless, focusing on identifying the technical and process-related factors that contributed to the incident and defining actionable steps for improvement. By transforming crises into learning opportunities, PayFit continuously strengthens its systems and processes, reducing the frequency and impact of future incidents. Their journey over 18 months demonstrated that investing in these practices is crucial for any growing organization aiming to build robust and reliable systems.
Links:
- Alexis Chotard: https://www.linkedin.com/in/alexis-chotard/
- Laurent Leca: https://www.linkedin.com/in/laurent-leca/
- Luc Chmielowski: https://www.linkedin.com/in/luc-chmielowski/
- PayFit: https://payfit.com/
- Devoxx France LinkedIn: https://www.linkedin.com/company/devoxx-france/
- Devoxx France Bluesky: https://bsky.app/profile/devoxx.fr
- Devoxx France Website: https://www.devoxx.fr/
[DotJs2024] Becoming the Multi-armed Bandit
In software development, where intuition meets empiricism, the multi-armed bandit is a probabilistic tool for making good choices under uncertainty. Ben Halpern, co-founder of Forem and creator of dev.to, unpacked it at dotJS 2024. A full-stack developer who blends code with community building, Ben traced the idea through his own career, from parody O'Reilly covers that went viral as memes to mutton-busting anecdotes, framing bandits as a bridge between artistic whimsy and scientific rigor that aligns developers with stakeholders in the pursuit of optimal paths.
Ben opened with dev.to's origins: Twitter-era jokes that grew into a creative community, where bandit-style logic A/B-tested post formats to maximize engagement. The name comes from casino slot machines ("one-armed bandits"): given several levers with unknown payouts, which should you pull to maximize total reward? The same dilemma appears in UI variants, feature rollouts, and content cadences. Exploration probes unknown options; exploitation harvests proven ones. Ben advocated the epsilon-greedy strategy: with probability 1-ε, pull the best-known arm (exploitation); with probability ε, sample an alternative (exploration). More sophisticated methods such as Thompson sampling adapt this balance to context.
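A minimal epsilon-greedy sketch follows (in TypeScript; the class shape and the click-reward plumbing are illustrative, not code from Ben's talk):

```typescript
// Epsilon-greedy bandit: track the empirical mean reward of each arm,
// exploit the best one most of the time, explore at random otherwise.
class EpsilonGreedy {
  private counts: number[];
  private values: number[]; // running mean reward per arm

  constructor(private nArms: number, private epsilon = 0.1) {
    this.counts = new Array(nArms).fill(0);
    this.values = new Array(nArms).fill(0);
  }

  selectArm(): number {
    if (Math.random() < this.epsilon) {
      return Math.floor(Math.random() * this.nArms); // explore
    }
    return this.values.indexOf(Math.max(...this.values)); // exploit
  }

  update(arm: number, reward: number): void {
    this.counts[arm] += 1;
    // incremental mean: new = old + (reward - old) / n
    this.values[arm] += (reward - this.values[arm]) / this.counts[arm];
  }
}

// Usage: two headline variants; reward is 1 for a click, 0 otherwise.
const bandit = new EpsilonGreedy(2);
const arm = bandit.selectArm();
bandit.update(arm, 1);
```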
Practical applications abound. In load balancing, a bandit selects among origins, favoring the most responsive backends. With feature flags, variants compete and metrics crown the victors. In smoke tests, endpoint probes demote failing candidates. In ML pipelines, hyperparameter searches promote models via validation scores. At dev.to, bandit-orchestrated A/B tests on titles surfaced the most resonant headlines without editorial bias. The framing also maps onto organizational phases: nascent projects thrive on exploration, with ideation producing prototypes, while mature ones demand exploitation, scaling winners and pruning pretenders. This shared vocabulary fosters accord: explorers and scalers, once at odds, can synchronize around a project's phase and avoid friction when priorities pivot.
Ben tempered the enthusiasm with caution: bandits need a large volume of measurable outcomes, not trivial toggles, and overzealous testing can paralyze a team. As AI makes generating variants cheap, feedback infrastructure matters more, with bandits acting as arbiters that keep quality high amid abundance. His closing advice: wield the technique judiciously, blending craft's flair with data's discipline.
Algorithmic Essence and Variants
Ben compared the main algorithmic variants. Epsilon-greedy with ε = 0.1 strikes a simple equilibrium: 90% of pulls go to the best-known arm, 10% to novelty. Thompson sampling takes a Bayesian route, drawing from posterior estimates of each arm's payoff, which suits contextual decisions. UCB (Upper Confidence Bound) adds an optimism bonus to under-sampled arms, bounding regret and performing well on sparse signals, such as dev.to's post tweaks, where engagement echoes guide refinements.
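The UCB1 selection rule can be sketched as follows (illustrative, not from the talk): each arm scores its empirical mean plus an optimism bonus that shrinks the more the arm has been sampled.

```typescript
// UCB1: score = mean + sqrt(2 * ln(totalPulls) / pulls(arm)).
// Arms that have never been pulled are tried first.
function ucb1Select(counts: number[], values: number[]): number {
  const total = counts.reduce((a, b) => a + b, 0);
  let best = 0;
  let bestScore = -Infinity;
  for (let arm = 0; arm < counts.length; arm++) {
    if (counts[arm] === 0) return arm; // force initial exploration
    const score = values[arm] + Math.sqrt((2 * Math.log(total)) / counts[arm]);
    if (score > bestScore) {
      bestScore = score;
      best = arm;
    }
  }
  return best;
}
```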
Embeddings in Dev Workflows
In day-to-day workflows, bandits can route requests across load-balanced clusters, release feature flags to cohorts with telemetry picking winners, steer ML hyperparameter searches, and prioritize smoke-test sweeps. Ben's underlying ethos: binary pass/fail checks only go so far; evaluating arrays of options pays off, and the infrastructure that produces those insights is paramount.
Strategic Alignment and Prudence
Projects follow an arc: exploration's early ideation feeds exploitation's scaling phase. Ben urged bridging divides by holding stakeholder discussions in bandit vocabulary, averting misalignment about which phase a project is in. He also warned against overreach: grand stakes summon science, while mundane choices call for quick artistic judgment, a distinction that matters more as AI floods teams with cheap variants demanding deft discernment.
[DevoxxFR2025] Simplify Your Ideas’ Containerization!
For many developers and DevOps engineers, creating and managing Dockerfiles can feel like a tedious chore. Ensuring best practices, optimizing image layers, and keeping up with security standards often add friction to the containerization process. Thomas DA ROCHA from Lenra, in his presentation, introduced Dofigen as an open-source command-line tool designed to simplify this. He demonstrated how Dofigen allows users to generate optimized and secure Dockerfiles from a simple YAML or JSON description, making containerization quicker, easier, and less error-prone, even without deep Dockerfile expertise.
The Pain Points of Dockerfiles
Thomas began by highlighting the common frustrations associated with writing and maintaining Dockerfiles. These include:
– Complexity: Writing effective Dockerfiles requires understanding various instructions, their order, and how they impact caching and layer size.
– Time Consumption: Manually writing and optimizing Dockerfiles for different projects can be time-consuming.
– Security Concerns: Ensuring that images are built securely, minimizing attack surface, and adhering to security standards can be challenging without expert knowledge.
– Lack of Reproducibility: Small changes or inconsistencies in the build environment can sometimes lead to non-reproducible images.
These challenges can slow down development cycles and increase the risk of deploying insecure or inefficient containers.
Introducing Dofigen: Dockerfile Generation Simplified
Dofigen aims to abstract away the complexities of Dockerfile creation. Thomas explained that instead of writing a Dockerfile directly, users provide a simplified description of their application and its requirements in a YAML or JSON file. This description includes information such as the base image, application files, dependencies, ports, and desired security configurations. Dofigen then takes this description and automatically generates an optimized and standards-compliant Dockerfile. This approach allows developers to focus on defining their application’s needs rather than the intricacies of Dockerfile syntax and best practices. Thomas showed a live coding demo, transforming a simple application description into a functional Dockerfile using Dofigen.
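To give a feel for the approach, a descriptor might look like the sketch below; the field names here are hypothetical, so consult the Dofigen repository (linked below) for the actual schema.

```yaml
# Hypothetical Dofigen-style descriptor. Field names are illustrative,
# not taken from the real Dofigen schema.
fromImage: node:20-alpine
workdir: /app
copy:
  - package.json
  - src/
run:
  - npm ci --omit=dev
expose:
  - 3000
entrypoint: ["node", "src/index.js"]
```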
Built-in Best Practices and Security Standards
A key advantage of Dofigen is its ability to embed best practices and security standards into the generated Dockerfiles automatically. Thomas highlighted that Dofigen incorporates knowledge about efficient layering, reducing image size, and minimizing the attack surface by following recommended guidelines. This means users don't need to be experts in Dockerfile optimization or security to create robust images; the tool handles these aspects automatically based on the provided high-level description. Dofigen also supports patterns such as multi-stage builds and user and permission best practices, which are crucial for building secure, production-ready images. By simplifying the process and baking in expertise, Dofigen empowers developers to containerize their applications quickly and confidently, ensuring that the resulting images are not only functional but also optimized and secure. The open-source nature of Dofigen also allows the community to contribute improvements and keep pace with evolving best practices and security recommendations.
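For context, the kind of multi-stage Dockerfile such tooling aims to produce, with a build stage separated from a minimal non-root runtime stage, generally looks like this (a generic sketch, not actual Dofigen output):

```dockerfile
# Build stage: full toolchain, used only to compile the application.
FROM node:20-alpine AS build
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
COPY src/ ./src/
RUN npm run build

# Runtime stage: minimal image, non-root user, only runtime artifacts.
FROM node:20-alpine
WORKDIR /app
COPY --from=build /app/dist ./dist
COPY --from=build /app/node_modules ./node_modules
USER node
EXPOSE 3000
CMD ["node", "dist/index.js"]
```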
Links:
- Thomas DA ROCHA: https://www.linkedin.com/in/thomasdarocha/
- Lenra: https://www.lenra.io/
- Dofigen on GitHub: https://github.com/lenra-io/dofigen
- Devoxx France LinkedIn: https://www.linkedin.com/company/devoxx-france/
- Devoxx France Bluesky: https://bsky.app/profile/devoxx.fr
- Devoxx France Website: https://www.devoxx.fr/
[RivieraDev2025] Dhruv Kumar – Platform Engineering + AI: The Next-Gen DevOps
At Riviera DEV 2025, Dhruv Kumar delivered an engaging presentation on platform engineering, a discipline reshaping software delivery by addressing modern development challenges. Stepping in for Silva Devi, Dhruv, a senior product manager at CloudBees, explored how platform engineering, augmented by artificial intelligence, streamlines workflows, enhances developer productivity, and mitigates the complexities of cloud-native environments. His talk illuminated the transformative potential of internal developer platforms (IDPs) and AI-driven automation, offering a vision for a more efficient and secure software development lifecycle (SDLC).
The Challenges of Modern Software Development
Dhruv began by highlighting the evolving responsibilities of developers, who now spend only about 11% of their time coding, according to a survey by software.com. The remaining time is consumed by non-coding tasks such as testing, deployment, and managing security vulnerabilities. The shift-left movement, while intended to empower developers by integrating testing and deployment earlier in the process, often burdens them with tasks outside their core expertise. This is compounded by the transition to cloud environments, which introduces complex microservices architectures and distributed systems, creating navigation challenges and integration headaches.
Additionally, the rise of AI has accelerated software development, increasing code volume and tool proliferation, while supply chain attacks exploit these complexities, demanding constant vigilance from developers. Dhruv emphasized that these challenges—fragmented workflows, heightened security risks, and tool overload—necessitate a new approach to streamline processes and empower teams.
Platform Engineering: A Unified Approach
Platform engineering emerges as a solution to these issues, providing a cohesive framework for software delivery. Dhruv defined it as the discipline of designing toolchains and workflows that enable self-service capabilities for engineering teams in the cloud-native era. Central to this is the concept of an internal developer platform (IDP), which integrates tools and processes across the SDLC, from coding to deployment. By establishing a common SDLC model and vocabulary, platform engineering ensures that stakeholders—developers, QA, and security teams—share a unified understanding, reducing miscommunication and enhancing actionability.
Dhruv highlighted three pillars of effective platform engineering: a standardized SDLC model, secure best practices embedded in workflows, and the freedom for developers to use familiar tools. This last point, supported by a Forbes study from September 2023, underscores that happier developers, using tools they prefer, complete tasks 10% faster. By fostering collaboration and reducing context-switching, platform engineering creates an environment where developers can focus on innovation rather than operational overhead.
AI as a Catalyst for Optimization
Artificial intelligence plays a pivotal role in amplifying platform engineering’s impact. Dhruv explained that AI’s value lies not in generating code but in filtering noise and optimizing practices. By leveraging a robust SDLC data model, AI can provide actionable insights, provided it is fed high-quality data. For instance, AI-driven testing can prioritize time-intensive issues, streamline QA processes, and run only relevant tests based on code changes, reducing costs and feedback cycles. Dhruv cited examples like AI agents identifying vulnerabilities in code components or assessing risks in production ecosystems, automating fixes where appropriate.
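The change-based test selection idea can be sketched in a few lines (a toy illustration; real systems derive the impact map from dependency graphs or historical failure data rather than hand-writing it):

```typescript
// Map source modules to the test suites they can affect.
const impactMap: Record<string, string[]> = {
  "src/payments.ts": ["tests/payments.test.ts", "tests/invoices.test.ts"],
  "src/auth.ts": ["tests/auth.test.ts"],
};

// Given the files touched by a change, return only the affected suites.
function selectTests(changedFiles: string[]): string[] {
  const selected = new Set<string>();
  for (const file of changedFiles) {
    for (const suite of impactMap[file] ?? []) {
      selected.add(suite);
    }
  }
  return [...selected];
}

console.log(selectTests(["src/auth.ts"])); // ["tests/auth.test.ts"]
```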
He also introduced the Model Context Protocol (MCP), an open standard that enables applications to provide context to large language models, enhancing AI’s ability to deliver precise recommendations. From troubleshooting CI/CD pipelines to onboarding new developers, AI, when integrated with platform engineering, empowers teams to address bottlenecks and scale efficiently in a cloud-native world.
Empowering Developers and Securing the Future
Dhruv concluded by emphasizing that platform engineering, bolstered by AI, re-engages all actors in the software delivery process, from developers to leadership. By normalizing data across tools and providing metrics like DORA (DevOps Research and Assessment), IDPs offer visibility into bottlenecks and investment opportunities. This holistic approach not only secures the tech stack against supply chain attacks but also fosters a culture of productivity and developer satisfaction.
He encouraged attendees to explore CloudBees’ platform, which exemplifies these principles by breaking free from traditional platform limitations. Dhruv’s call to action urged developers to adopt platform engineering practices, leverage AI for optimization, and provide feedback to refine these evolving methodologies, ensuring a future where software delivery is both efficient and resilient.
[DevoxxFR2025] Dagger Modules: A Swiss Army Knife for Modern CI/CD Pipelines
Continuous Integration and Continuous Delivery (CI/CD) pipelines are the backbone of modern software development, automating the process of building, testing, and deploying applications. However, as these pipelines grow in complexity, they often become difficult to maintain, debug, and port across different execution platforms, frequently relying on verbose and platform-specific YAML configurations. Jean-Christophe Sirot, in his presentation, introduced Dagger as a revolutionary approach to CI/CD, allowing pipelines to be written as code, executable locally, testable, and portable. He explored Dagger Functions and Dagger Modules as key concepts for creating and sharing reusable, language-agnostic components for CI/CD workflows, positioning Dagger as a versatile “Swiss Army knife” for modernizing these critical pipelines.
The Pain Points of Traditional CI/CD
Jean-Christophe began by outlining the common frustrations associated with traditional CI/CD pipelines. Relying heavily on YAML or other declarative formats for defining pipelines can lead to complex, repetitive, and hard-to-read configurations, especially for intricate workflows. Debugging failures within these pipelines is often challenging, requiring pushing changes to a remote CI server and waiting for the pipeline to run. Furthermore, pipelines written for one CI platform (like GitHub Actions or GitLab CI) are often not easily transferable to another, creating vendor lock-in and hindering flexibility. This dependency on specific platforms and the difficulty in managing complex workflows manually are significant pain points for development and DevOps teams.
Dagger: CI/CD as Code
Dagger offers a fundamentally different approach by treating CI/CD pipelines as code. It allows developers to write their pipeline logic using familiar programming languages (like Go, Python, Java, or TypeScript) instead of platform-specific configuration languages. This brings the benefits of software development practices – such as code reusability, modularity, testing, and versioning – to CI/CD. Jean-Christophe explained that Dagger executes these pipelines using containers, ensuring consistency and portability across different environments. The Dagger engine runs the pipeline logic, orchestrates the necessary container operations, and manages dependencies. This allows developers to run and debug their CI/CD pipelines locally using the same code that will execute on the remote CI platform, significantly accelerating the debugging cycle.
Dagger Functions and Modules
Key to Dagger’s power are Dagger Functions and Dagger Modules. Jean-Christophe described Dagger Functions as the basic building blocks of a pipeline – functions written in a programming language that perform specific CI/CD tasks (e.g., building a Docker image, running tests, deploying an application). These functions interact with the Dagger engine to perform container operations. Dagger Modules are collections of related Dagger Functions that can be packaged and shared. Modules allow teams to create reusable components for common CI/CD patterns or specific technologies, effectively creating a library of CI/CD capabilities. For example, a team could create a “Java Build Module” containing functions for compiling Java code, running Maven or Gradle tasks, and building JAR or WAR files. These modules can be easily imported and used in different projects, promoting standardization and reducing duplication across an organization’s CI/CD workflows. Jean-Christophe demonstrated how to create and use Dagger Modules, illustrating their potential for building composable and maintainable pipelines. He highlighted that Dagger’s language independence means that modules can be written in one language (e.g., Python) and used in a pipeline defined in another (e.g., Java), fostering collaboration between teams with different language preferences.
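As a hedged sketch, a small module of this kind written with Dagger's TypeScript SDK might look as follows (the base image and Maven commands are illustrative, not taken from Jean-Christophe's demo):

```typescript
import { dag, Container, Directory, object, func } from "@dagger.io/dagger";

// A reusable "Java build" module: each decorated method is a Dagger
// Function that pipelines in any supported language can import and call.
@object()
class JavaBuild {
  // Compile and package a Maven project inside a container.
  @func()
  build(src: Directory): Container {
    return dag
      .container()
      .from("maven:3.9-eclipse-temurin-21")
      .withDirectory("/app", src)
      .withWorkdir("/app")
      .withExec(["mvn", "--batch-mode", "package"]);
  }
}
```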
The Benefits: Composable, Maintainable, Portable
By adopting Dagger, teams can create CI/CD pipelines that are:
– Composable: Pipelines can be built by combining smaller, reusable Dagger Modules and Functions.
– Maintainable: Pipelines written as code are easier to read, understand, and refactor using standard development tools and practices.
– Portable: Pipelines can run on any platform that supports Dagger and containers, eliminating vendor lock-in.
– Testable: Individual Dagger Functions and modules can be unit tested, and the entire pipeline can be run and debugged locally.
Jean-Christophe’s presentation positioned Dagger as a versatile tool that modernizes CI/CD by bringing the best practices of software development to pipeline automation. The ability to write pipelines in code, leverage reusable modules, and execute locally makes Dagger a powerful “Swiss Army knife” for developers and DevOps engineers seeking more efficient, reliable, and maintainable CI/CD workflows.
Links:
- Jean-Christophe Sirot: https://www.linkedin.com/in/jcsirot/
- Decathlon: https://www.decathlon.com/
- Dagger: https://dagger.io/
- Devoxx France LinkedIn: https://www.linkedin.com/company/devoxx-france/
- Devoxx France Bluesky: https://bsky.app/profile/devoxx.fr
- Devoxx France Website: https://www.devoxx.fr/
[DevoxxUK2025] Platform Engineering: Shaping the Future of Software Delivery
Paula Kennedy, co-founder and COO of Cintaso, delivered a compelling lightning talk at DevoxxUK2025, tracing the evolution of platform engineering and its impact on software delivery. Drawing from over a decade of experience, Paula explored how platforms have shifted from siloed operations to force multipliers for developer productivity. Referencing the journey from DevOps to PaaS to Kubernetes, she highlighted current trends like inner sourcing and offered practical strategies for assessing platform maturity. Her narrative, infused with lessons from the past and present, underscored the importance of a user-centered approach to avoid the pitfalls of hype and ensure platforms drive innovation.
The Evolution of Platforms
Paula began by framing platforms as foundations that elevate development, drawing on Gregor Hohpe’s analogy of a Volkswagen chassis enabling diverse car models. She recounted her career, starting in 2002 at Acturus, a SaaS provider with rigid silos between developers and operations. The DevOps movement, sparked in 2009, sought to bridge these divides, but its “you build it, you run it” mantra often overwhelmed teams. The rise of Platform-as-a-Service (PaaS), exemplified by Cloud Foundry, simplified infrastructure management, allowing developers to focus on code. However, Paula noted, the complexity of Kubernetes led organizations to build custom internal platforms, sometimes losing sight of the original value proposition.
Current Trends and Challenges
Today, platform engineering is at a crossroads, with Gartner predicting that by 2026, 80% of large organizations will have dedicated teams. Paula highlighted principles like self-service APIs, internal developer portals (e.g., Backstage), and golden paths that guide developers to best practices. She emphasized treating platforms as products, applying product management practices to align with user needs. However, the 2024 DORA report reveals challenges: while platforms boost organizational performance, they often fail to improve software reliability or delivery throughput. Paula attributed this to automation complacency and “platform complacency,” where trust in internal platforms leads to reduced scrutiny, urging teams to prioritize observability and guardrails.
[PHPForumParis2023] You Build It, You Run It: Observability for Developers – Smaïne Milianni
Smaïne Milianni, a former taxi driver turned PHP developer, delivered an engaging talk at Forum PHP 2023, exploring the “You Build It, You Run It” philosophy and the critical role of observability in modern development. Now an Engineering Manager at Yousign, Smaïne shared insights from his decade-long journey in PHP, emphasizing how observability tools like logs, metrics, traces, and alerts empower developers to maintain robust applications. His practical approach and humorous delivery offered actionable strategies for PHP developers to enhance system reliability and foster a culture of continuous improvement.
The Essence of Observability
Smaïne introduced observability as the cornerstone of the “You Build It, You Run It” model, where developers are responsible for both building and maintaining their applications. He explained how observability encompasses logs, metrics, traces, and custom alerts to monitor system health. Using real-world examples, Smaïne illustrated how these tools help identify issues, such as application errors or system outages, before they escalate. His emphasis on proactive monitoring resonated with developers seeking to ensure their PHP applications remain stable and performant.
Implementing Observability in PHP
Diving into practical applications, Smaïne outlined how to integrate observability into PHP projects. He highlighted tools like Datadog for collecting metrics and traces, and demonstrated how to set up alerts for critical incidents, such as P1 outages that trigger SMS and email notifications. Smaïne stressed the importance of prioritizing alerts based on severity to avoid notification fatigue. His examples, drawn from his experience at Yousign, provided a clear roadmap for developers to implement observability, ensuring rapid issue detection and resolution.
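The severity-routing idea is language-agnostic; a minimal sketch (shown in TypeScript for brevity, with channels and thresholds invented for the example) could be:

```typescript
// Route alerts by severity so only genuine emergencies page a human.
type Severity = "P1" | "P2" | "P3";

const channels: Record<Severity, string[]> = {
  P1: ["sms", "email", "pager"], // outage: wake someone up
  P2: ["email", "slack"],        // degraded: handle during working hours
  P3: ["slack"],                 // minor: fold into a daily digest
};

function routeAlert(severity: Severity, message: string): void {
  for (const channel of channels[severity]) {
    console.log(`[${channel}] ${severity}: ${message}`);
  }
}

routeAlert("P1", "Signature API is down");
```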
The Power of Post-Mortems
Smaïne concluded by emphasizing the role of post-mortems in fostering a virtuous cycle of improvement. Responding to an audience question, he explained how his team conducts weekly manager reviews to track post-mortem actions, ensuring they are prioritized and addressed. By treating errors as learning opportunities rather than failures, Smaïne’s approach encourages developers to refine their code and systems iteratively. His talk inspired attendees to adopt observability practices that enhance both technical reliability and team collaboration.
[DevoxxBE 2023] Introducing Flow: The Worst Software Development Approach in History
In a satirical yet insightful closing keynote at Devoxx Belgium 2023, Sander Hoogendoorn and Kim van Wilgen, seasoned software development experts, introduced “Flow,” a fictional methodology designed to expose the absurdities of overly complex software development practices. With humor and sharp critique, Sander and Kim drew from decades of experience to lampoon methodologies like Waterfall, Scrum, SAFe, and Spotify, blending real-world anecdotes with exaggerated principles to highlight what not to do. Their talk, laced with wit, ultimately transitioned to earnest advice, advocating for simplicity, autonomy, and human-centric development. This presentation offers a mirror to the industry, urging developers to critically evaluate methodologies and prioritize effective, enjoyable work.
The Misadventure of Methodologies
Sander kicked off with a historical detour, debunking the myth of Waterfall’s rigidity. Citing Winston Royce’s 1970 paper, he revealed that Waterfall was meant to be iterative, allowing developers to revisit phases—a concept ignored for decades, costing billions. This set the stage for Flow, a methodology born from a tongue-in-cheek desire to maximize project duration for consultancy profits. Kim explained how they cherry-picked the worst elements from existing frameworks: endless sprints from Scrum, gamification to curb autonomy, and an alphabet soup of roles from SAFe.
Their critique was grounded in real-world failures. Sander shared a Belgian project where misestimated sprints and 300 outsourced developers led to chaos, exacerbated by documentation in Dutch and French. Kim highlighted how methodologies like SAFe balloon roles, sidelining customers and adding complexity. By naming Flow with trendy buzzwords—Kaizen, continuous disappointment, and pointless—they mocked the industry’s obsession with jargon over substance.
The Flow Framework: A Recipe for Dysfunction
Flow’s principles, as Sander and Kim outlined, are deliberately counterproductive. Sprints, renamed “mini-Waterfalls,” ensure repeated failures, with burn charts (not burn-down charts) showing growing work without progress. Meetings, dubbed “Flow meetings,” are scheduled to disrupt developers’ focus, with random topics and high-placed interruptions—like a 2.5-meter-tall CEO bursting in. Kim emphasized gamification, stripping teams of real autonomy while offering trivial perks like workspace decoration, exemplified by a ball pit job interview at a Dutch e-commerce firm.
The Flow Manifesto, a parody of the Agile Manifesto, prioritizes “extensive certification over hands-on experience” and “meetings over focus.” Sander recounted a project in France with a 20-column board so confusing that even AI couldn’t decipher its French Post-its. Jira, mandatory in Flow, becomes a tool for obfuscation, with requirements buried in lengthy tickets. Open floor plans and Slack further stifle communication, with “pair slacking” replacing collaboration, ensuring developers remain distracted and disconnected.
Enterprise Flow: Scaling the Absurdity
In large organizations, Flow escalates into the Big Flow Framework (BFF), starting at version 3.0 to sound innovative. Kim critiqued the blind adoption of Spotify’s model, designed for 8x annual growth, which saddles banks with excessive managers—sometimes a 1:1 ratio with developers. Sander recounted a client renaming managers as “tech leads,” adding 118 unnecessary roles to a release train. Certifications, costing €10,000 per recertification, parody the industry’s profit-driven training schemes.
Flow’s tooling, like boards with incomprehensible columns and Jira’s dual Scrum-Kanban confusion, ensures clients remain baffled. Kim highlighted how Enterprise Flow thrives on copying trendy startups like Basecamp, debating irrelevant issues like banning TypeScript or leaving public clouds. Research, they noted, shows no methodology—including SAFe or LeSS—outperforms having none, underscoring Flow’s satirical point: complexity breeds failure.
A Serious Turn: Principles for Better Development
After the laughter, Sander and Kim pivoted to their true beliefs, advocating for a human-centric approach. Software, they stressed, is built by people, not tools or methodologies. Teams should evolve their own practices, using Scrum or Kanban as starting points but adapting to context. Face-to-face communication, trust, and psychological safety are paramount, as red sprints and silencing voices drive talent away.
Focus is sacred, requiring quiet spaces and flexible hours, as ideas often spark outside 9–5. Continuous learning, guarded by dedicating at least one day weekly, prevents stagnation. Autonomy, though initially uncomfortable, empowers teams to make decisions, as Sander’s experience with reluctant developers showed. Flat organizations with minimal hierarchy foster trust, while experienced developers, like those born in the ’60s and ’70s, mentor through code reviews rather than churning out code.
Conclusion: Simplicity and Joy in Development
Sander and Kim’s Flow is a cautionary tale, urging developers to reject bloated methodologies and embrace simplicity. By reducing complexity, as Albert Einstein suggested, teams can deliver value effectively. Above all, they reminded the audience to have fun, celebrating software development as the best industry to be in. Their talk, blending satire with wisdom, inspires developers to craft methodologies that empower people, foster collaboration, and make work enjoyable.
Hashtags: #SoftwareDevelopment #Agile #Flow #Methodologies #DevOps #SanderHoogendoorn #KimVanWilgen #SchubergPhilis #iBOOD #DevoxxBE2023
[NodeCongress2021] Machine Learning in Node.js using Tensorflow.js – Shivay Lamba
The fusion of machine learning capabilities with server-side JavaScript environments opens intriguing avenues for developers seeking to embed intelligent features directly into backend workflows. Shivay Lamba, a versatile software engineer proficient in DevOps, machine learning, and full-stack paradigms, illuminates this intersection through his examination of TensorFlow.js within Node.js ecosystems. As an open-source library originally developed by the Google Brain team, TensorFlow.js democratizes access to sophisticated neural networks, allowing practitioners to train, fine-tune, and infer models without forsaking the familiarity of JavaScript syntax.
Shivay’s narrative commences with the foundational allure of TensorFlow.js: its seamless portability across browser and Node.js contexts, underpinned by WebGL acceleration for tensor operations. This universality sidesteps the silos often encountered in traditional ML stacks, where Python dominance necessitates cumbersome bridges. In Node.js, the library harnesses native bindings to leverage CPU/GPU resources efficiently, enabling tasks like image classification or natural language processing to unfold server-side. Shivay emphasizes practical onboarding—install via npm, import tf, and instantiate models—transforming abstract algorithms into executable logic.
Consider a sentiment analysis endpoint: load a pre-trained BERT variant, preprocess textual inputs via tokenizers, and yield probabilistic outputs—all orchestrated in asynchronous handlers to maintain Node.js’s non-blocking ethos. Shivay draws from real-world deployments, where such integrations power recommendation engines or anomaly detectors in e-commerce pipelines, underscoring the library’s scalability for production loads.
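A minimal sketch of such an endpoint with @tensorflow/tfjs-node and Express (the model path and the feature preprocessing are placeholders, assuming a Keras-format model saved locally):

```typescript
import * as tf from "@tensorflow/tfjs-node";
import express from "express";

const app = express();
app.use(express.json());

// Load the model once at startup (placeholder path to a Keras-format model).
const modelPromise = tf.loadLayersModel("file://./model/model.json");

app.post("/predict", async (req, res) => {
  const model = await modelPromise;
  // Placeholder preprocessing: a real service would tokenize req.body.text.
  const input = tf.tensor2d([req.body.features]);
  const output = model.predict(input) as tf.Tensor;
  const scores = await output.data();
  tf.dispose([input, output]); // free tensor memory after each request
  res.json({ scores: Array.from(scores) });
});

app.listen(3000, () => console.log("inference server listening on :3000"));
```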
Streamlining Model Deployment and Inference
Deployment nuances emerge as Shivay delves into optimization strategies. Quantization shrinks model footprints, slashing latency for edge inferences, while transfer learning adapts pre-trained architectures to domain-specific corpora with minimal retraining epochs. He illustrates with a convolutional neural network for object detection: convert ONNX formats to TensorFlow.js via converters, bundle with webpack for serverless functions, and expose via Express routes. Monitoring integrates via Prometheus metrics, tracking inference durations and accuracy drifts.
Challenges abound—memory constraints in containerized setups demand careful tensor management, mitigated by tf.dispose() invocations. Shivay advocates hybrid approaches: offload heavy training to cloud TPUs, reserving Node.js for lightweight inference. Community extensions, like @tensorflow/tfjs-node-gpu, amplify throughput on NVIDIA hardware, aligning with Node.js’s event-driven architecture.
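Alongside explicit tf.dispose() calls, tf.tidy offers a simpler pattern: every intermediate tensor created inside its callback is disposed automatically, and only the returned tensor survives.

```typescript
import * as tf from "@tensorflow/tfjs-node";

// tf.tidy frees all intermediates created in the callback, returning
// only the final tensor; this prevents leaks in hot inference paths.
const result = tf.tidy(() => {
  const x = tf.tensor1d([1, 2, 3]);
  return x.square().sum(); // 1 + 4 + 9 = 14
});
result.print(); // Tensor 14
result.dispose();
```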
Shivay’s exposition extends to ethical considerations: bias audits in datasets ensure equitable outcomes, while federated learning preserves privacy in distributed training. Through these lenses, TensorFlow.js transcends novelty, evolving into a cornerstone for ML-infused Node.js applications, empowering creators to infuse intelligence without infrastructural overhauls.
[DevoxxPL2022] Data Driven Secure DevOps – Deliver Better Software, Faster! • Raveesh Dwivedi
Raveesh Dwivedi, a digital transformation expert from HCL Technologies, captivated the Devoxx Poland 2022 audience with a compelling exploration of data-driven secure DevOps. With over a decade of experience at HCL, Raveesh shared insights on how value stream management (VSM) can transform software delivery, aligning IT efforts with business objectives. His presentation emphasized eliminating inefficiencies, enhancing governance, and leveraging data to deliver high-quality software swiftly. Through a blend of strategic insights and a practical demonstration, Raveesh showcased how HCL Accelerate, a VSM platform, empowers organizations to optimize their development pipelines.
The Imperative of Value Stream Management
Raveesh opened by highlighting a common frustration: business stakeholders often perceive IT as a bottleneck, blaming developers for delays. He introduced value stream management as a solution to bridge this gap, emphasizing its role in mapping the entire software delivery process from ideation to production. By analyzing a hypothetical 46-week delivery cycle, Raveesh revealed that roughly 80% of the time, approximately 38 weeks, was spent waiting in queues due to resource constraints or poor prioritization. This inefficiency can cost businesses millions: for a feature worth $200,000 per week, 38 weeks of queue time represents roughly $7.6 million in delayed value. VSM addresses this by identifying bottlenecks and quantifying the cost of delays, enabling better decision-making and prioritization.
Raveesh explained that VSM goes beyond traditional DevOps automation, which focuses on continuous integration, testing, and delivery. It incorporates the creative aspects of agile development, such as ideation and planning, ensuring a holistic view of the delivery pipeline. By aligning IT processes with business value, VSM fosters a cultural shift toward business agility, where decisions prioritize urgency and impact. Raveesh’s narrative underscored the need for organizations to move beyond siloed automation and embrace a system-wide approach to software delivery.
Leveraging HCL Accelerate for Optimization
Central to Raveesh’s presentation was HCL Accelerate, a VSM platform designed to visualize, govern, and optimize DevOps pipelines. He described how Accelerate integrates with existing tools, pulling data into a centralized data lake via RESTful APIs and pre-built plugins. This integration enables real-time tracking of work items as they move from planning to deployment, providing visibility into bottlenecks, such as prolonged testing phases. Raveesh demonstrated how Accelerate’s dashboards display metrics like cycle time, throughput, and DORA (DevOps Research and Assessment) indicators, tailored to roles like developers, DevOps teams, and transformation leaders.
The platform’s strength lies in its ability to automate governance and release management. For instance, it can update change requests automatically upon deployment, ensuring compliance and traceability. Raveesh showcased a demo featuring a loan processing value stream, where work items appeared as dots moving through phases like development, testing, and deployment. Red dots highlighted anomalies, such as delays, detected through AI/ML capabilities. This real-time visibility allows teams to address issues proactively, ensuring quality and reducing time-to-market.
Enhancing Security and Quality
Security and quality were pivotal themes in Raveesh’s talk. He emphasized that HCL Accelerate integrates security scanning and risk assessments into the pipeline, surfacing results to all stakeholders. Quality gates, configurable within the platform, ensure that only robust code reaches production. Raveesh illustrated this with examples of deployment frequency and build stability metrics, which help teams maintain high standards. By providing actionable insights, Accelerate empowers developers to focus on delivering value while mitigating risks, aligning with the broader goal of secure DevOps.
Cultural Transformation through Data
Raveesh concluded by advocating for a cultural shift toward data-driven decision-making. He argued that while automation is foundational, the creative and collaborative aspects of DevOps—such as cross-functional planning and stakeholder alignment—are equally critical. HCL Accelerate facilitates this by offering role-based access to contextualized data, enabling teams to prioritize features based on business value. Raveesh’s vision of DevOps as a bridge between IT and business resonated, urging organizations to adopt VSM to achieve faster, more reliable software delivery. His invitation to visit HCL’s booth for further discussion reflected his commitment to fostering meaningful dialogue.