[VoxxedDaysTicino2026] The Past, Present, and Future of Programming Languages
Lecturer
Kevlin Henney is an independent consultant, trainer, and author specializing in software architecture, programming paradigms, and agile practices. He has contributed to numerous books, including “97 Things Every Programmer Should Know,” and is a frequent speaker at international conferences. Kevlin’s work spans decades, influencing developers through his insights on language evolution and design patterns. Relevant links include his X account (https://x.com/kevlinhenney) and Mastodon (https://mastodon.social/@kevlinhenney).
Abstract
This article analyzes Kevlin Henney’s exploration of programming languages’ historical trajectory, current state, and prospective developments. It dissects paradigms, influences, and biases shaping language adoption, emphasizing slow evolution despite rapid technological hype. Through data-driven analysis and historical anecdotes, it underscores the dominance of 20th-century languages, the assimilation of functional features into mainstream ones, and AI’s reinforcing role, offering implications for future trends.
Historical Foundations and Paradigm Shifts
Programming languages bridge hardware and human cognition, embodying philosophies for structuring thoughts and systems. Kevlin traces their origins to the 1950s, with Fortran as an experimental compiler challenging beliefs that high-level languages couldn’t match assembly efficiency. John Backus’s team at IBM proved otherwise, unleashing a “virus” that normalized compilation.
By 1977, Backus questioned liberation from the “von Neumann style”—imperative models mimicking memory storage, jumps, and assignments. He advocated functional styles with program algebras, introducing “style” before Robert Floyd’s 1978 formalization of paradigms. Paradigms, borrowed from other disciplines, frame programming approaches: imperative, functional, logic.
Historical influences abound; Algol 68, despite limited adoption, pioneered constructs like if-then-else as expressions, impacting modern syntax. Kevlin highlights languages’ slow pace: mainstream ones still integrate decades-old ideas, with developers embracing “new” features older than themselves.
This context reveals languages as ecosystems defining skills, communities, and loyalties, evolving gradually amid technological progress.
Current Landscape: Dominance and Biases
Contemporary rankings like TIOBE and RedMonk illustrate stasis. TIOBE’s January 2026 top 10 shows Python in the lead, followed by C, Java, C++, and others, all 20th-century creations except Go. The distribution is heavily skewed: the top five languages account for nearly 60% of measured activity, underscoring Python’s dominance.
RedMonk, biased toward Stack Overflow and GitHub, elevates TypeScript but confirms 20th-century prevalence. Even gRPC-supported languages skew vintage. Kevlin notes human statistical misconceptions: top-10 lists appear linear, but power laws dominate, amplifying incumbents.
Biases perpetuate this: legacy code bases influence employment and evolution, with languages borrowing features (e.g., lambdas from the 1930s lambda calculus) to retain users. Java’s 2014 lambdas postdate C++’s; JavaScript popularized them, but Lisp implemented them as early as 1960.
Paradigms blend: few pure functional languages appear in the top 20; most hybridize, raiding functional concepts (lambdas, map-reduce) without full adoption. SQL, a declarative language rooted in logic programming, exemplifies declarativeness without being functional; its queries map naturally onto comprehensions in Python or Haskell.
Excel, per Simon Peyton Jones, is the most popular functional language; the lambdas it gained in 2020 (since adopted by Google Sheets) bring the lambda calculus to spreadsheets. This assimilation dilutes paradigms: functional programming peaked a decade ago, its ideas now mainstreamed.
AI’s Influence on Language Evolution
Artificial intelligence reinforces biases. Early Lisp dominance in symbolic AI gave way to neural networks and machine learning in the 1980s-1990s. Modern LLMs, statistical at core, excel in languages with abundant data: JavaScript, Python, TypeScript.
Anders Hejlsberg observes AI’s proficiency proportional to exposure, disadvantaging new languages. LLMs default to mainstream, using Python for tasks like counting ‘R’s in “strawberry”—orchestrating code where reasoning falters.
Implications: AI makes languages “irrelevant” yet crucial, as defaults bias toward past dominants. Orchestration (e.g., Gemini writing Python) adds yet more mainstream code to the statistical pool, perpetuating incumbents.
Future Trajectories and Constraints
Future predictions defy certainty, but trends suggest continuity. Change lags expectations; quantum computing remains niche, irrelevant to mainstream for decades.
Functional programming won’t dominate; von Neumann imperatives persist. AI amplifies long tails—easier language creation—but cores stabilize. Notations could innovate, per Richard Feynman, but comfort favors sharing existing ones.
William Faulkner’s line captures it: “The past is never dead. It’s not even past.” Legacies endure, shaped by data, communities, and AI.
In conclusion, languages evolve slowly, assimilating ideas while incumbents dominate, with AI entrenching this amid potential for niche proliferation.
[MiamiJUG] Taming Vulnerabilities and Technical Debt Through Deterministic Refactoring
Lecturer
Kevin Brockhoff is a Director and Consulting Expert at CGI, one of the world’s largest IT and business consulting firms. With decades of experience in the technology industry, Kevin specializes in navigating the complex intersections of cybersecurity, digital transformation, and large-scale enterprise systems. His work at CGI involves helping multinational organizations—spanning sectors such as banking, government, and manufacturing—modernize their legacy infrastructure while maintaining robust security postures. Kevin is a prominent voice in the Miami technology community, frequently sharing insights at the Miami Java User Group (MiamiJUG) regarding automated refactoring and the integration of generative AI in software engineering.
Abstract
As enterprises face an accelerating stream of feature requests and increasingly sophisticated cyber threats, the accumulation of technical debt and security vulnerabilities has become a critical bottleneck. This article examines a deterministic approach to large-scale code remediation using OpenRewrite, an open-source automated refactoring ecosystem. Unlike indeterminate generative AI agents, which can produce inconsistent results and hallucinations, OpenRewrite utilizes Lossless Semantic Trees (LSTs) to ensure predictable, traceable, and scalable code transformations. By combining the creative potential of AI with the reliability of rule-based transformers, organizations can achieve a fourfold increase in productivity for vulnerability remediation. The following analysis explores the methodology of LST-based refactoring, its application across thousands of repositories, and its strategic role in modernizing global IT infrastructure.
The Crisis of Speed and Indeterminacy in Enterprise Software
In the modern software landscape, engineering teams are caught in a perpetual race between delivering new features and mitigating emerging security risks. Kevin emphasizes that speed is the decisive factor in this environment; delays in remediation allow vulnerabilities to proliferate across growing application portfolios. While generative AI agents have been proposed as a solution to this problem, they introduce significant challenges when applied in isolation at an enterprise scale.
The primary issue with relying solely on Large Language Models (LLMs) for code refactoring is their indeterminate nature. Applying an AI agent to the same codebase multiple times may yield different results, and the risk of “hallucinations” necessitates a manual human review of every line of code. Furthermore, current AI tools often struggle with scalability; while they may function effectively on a single repository, managing transformations across 5,000 repositories requires a more structured, traceable mechanism.
OpenRewrite: Deterministic Refactoring via Lossless Semantic Trees
To address the limitations of AI, Kevin advocates for the use of OpenRewrite, a tool sponsored by Moderne that provides a deterministic framework for source code modification. At the heart of OpenRewrite is the Lossless Semantic Tree (LST). While a traditional Abstract Syntax Tree (AST) represents the hierarchical structure of code, the LST incorporates two additional layers of critical information:
- Type Information: Every node in the tree is enriched with comprehensive type data, similar to the output of a compiler.
- Formatting Preservation: Uniquely, the LST captures all original formatting, including whitespace and comments.
This architecture allows OpenRewrite to parse code, apply transformations, and write it back to the source file with character-for-character fidelity to the original style, provided no changes were intended. Most importantly, these modifications are deterministic; a “recipe”—the rule-based transformer used by the engine—will produce identical results every time it is applied, enabling mass application across thousands of repositories without the need for exhaustive manual re-verification.
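To make this concrete, the following is a minimal sketch of an imperative recipe against OpenRewrite’s public Java API (recent 8.x releases); the class name and the no-op visitor body are illustrative rather than a recipe shown in the session:

import org.openrewrite.ExecutionContext;
import org.openrewrite.Recipe;
import org.openrewrite.TreeVisitor;
import org.openrewrite.java.JavaIsoVisitor;
import org.openrewrite.java.tree.J;

public class StandardizeLogging extends Recipe {

    @Override
    public String getDisplayName() {
        return "Standardize logging";
    }

    @Override
    public String getDescription() {
        return "Illustrative recipe skeleton; visits method invocations deterministically.";
    }

    @Override
    public TreeVisitor<?, ExecutionContext> getVisitor() {
        return new JavaIsoVisitor<ExecutionContext>() {
            @Override
            public J.MethodInvocation visitMethodInvocation(J.MethodInvocation method,
                                                            ExecutionContext ctx) {
                // Each LST node carries full type and formatting data, so any change
                // returned here is written back in the original style.
                return super.visitMethodInvocation(method, ctx);
            }
        };
    }
}

Once packaged, such a recipe (or any catalog recipe) can be applied from the command line, for example via the Maven plugin:

mvn org.openrewrite.maven:rewrite-maven-plugin:run -Drewrite.activeRecipes=com.example.StandardizeLogging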
Methodology: Combining AI with Rule-Based Transformers
The most effective strategy for large-scale remediation involves a hybrid approach that leverages both AI and deterministic tools. In this model, AI agents are used to assist human developers in generating the refactoring recipes themselves. Once a recipe is refined and tested, it acts as a reliable, version-controlled asset that can be executed at scale.
OpenRewrite’s ecosystem is divided into open-source and commercial components. The core engine and a vast catalog of common recipes—covering framework migrations (such as Spring Boot upgrades), security fixes, and stylistic consistency—are available under the Apache license. For large-scale enterprise management, the Moderne platform provides advanced capabilities, including:
- SaaS and On-Premise (DX) Options: These allow for mass refactoring across an entire organization’s source code system.
- Semantic Search: By calculating embeddings on LSTs, the platform enables highly sophisticated code intelligence and search.
- Batch Remediation Tracking: A centralized dashboard for managing the progress of large-scale security and tech debt campaigns.
Implementation and Impact
The practical application of these tools has demonstrated a 4X increase in productivity for security vulnerability remediation at major corporations. Beyond security, use cases include technical modernization, library upgrades, and maintaining architectural standards. By automating the “grunt work” of refactoring, senior engineers can focus on higher-level architectural decisions while the deterministic engine ensures that thousands of microservices remain up-to-date with the latest security patches and framework versions.
[DevoxxBE2025] Backlog.md: Reaching 95% Task Success Rate with AI Agents
Lecturer
Alex Gavrilescu is the developer of Backlog.md, a command-line tool for AI-assisted project management, with a background in software engineering and mobile development. He focuses on workflows that raise AI task success rates, drawing on experience from his own side projects.
Abstract
This analysis follows the progression from early AI coding setbacks to a refined workflow achieving near-flawless task completion with Backlog.md. It clarifies concepts such as specification-driven development and agent orchestration, set against the shortcomings of early, context-free prompting. Emphasizing tactics for supplying context and choosing models, it examines the effects on productivity, particularly in mobile and offline settings. The discussion adds depth on moving to AI-first project management, stressing actionable task lists and integrations.
Early Difficulties with AI Assistance
Early AI attempts, such as pointing Claude at a repository, frequently failed because of “bare” prompts lacking context, yielding more rework than progress. Success rates hovered around 50%, hampered by repository clutter and incomplete understanding.
In context: the AI hype promised automation, but reality revealed the need for structured input. Procedurally, adding context documents lifted success rates to 75%, as agents gained the essential details.
Implications: poor setups waste time; methodical tactics turn AI into a dependable assistant.
Refining Workflows for Higher Success Rates
Backlog.md organizes tasks as Markdown files in the repository, enabling parallelization and agent handling. CLI examples turn plain phrases into tasks:
backlog init
backlog add "Construct user verification"
backlog run
Agents plan, implement, and review. Model comparisons: Claude for reasoning, Codex for coding, Jules for its particular strengths.
Analysis: task lists define agent roles; Claude plans, Codex implements. Implication: a 95% success rate through orchestration.
Mobile-Only and Integration Tactics
Mobile-only workflows test portability: the CLI allows task management without a workstation. Live merges from a phone demonstrate the flexibility.
Procedurally, syncing with GitHub issues broadens the tool’s utility, albeit with added complexity.
Implications: AI enables “ubiquitous” development, a boon for side projects.
Production Readiness and Future Improvements
Backlog.md achieves its high success rates through specifications; it does not replace tools like Jira but complements them for agents.
Looking ahead: GR integrations for enterprise use.
In summary, structured AI workflows transform development and maximize task completion.
Links:
- Lecture video: https://www.youtube.com/watch?v=LSoDQU_9MMA
- Alex Gavrilescu on Twitter/X: https://twitter.com/H3xx3n
[GoogleIO2025] What’s new in Go
Keynote Speakers
Cameron Balahan serves as the Group Product Manager and lead for the Go programming language at Google, overseeing its strategic development and integration within cloud ecosystems. With a background from The George Washington University, he focuses on enhancing developer productivity and scaling tools for mission-critical applications.
Marc Dougherty functions as the lead for Developer Relations in Go at Google, bridging the community with advancements in the language. His expertise lies in site reliability engineering turned developer advocacy, emphasizing practical implementations for reliable software systems.
Abstract
This scholarly examination probes the recent evolutions in the Go programming language, particularly version 1.24, spotlighting enhancements in cryptography, type systems, and runtime efficiency. It dissects foundational principles guiding Go’s design, methodologies for AI infrastructure integration, and forward-looking initiatives like SIMD optimizations. Through code demonstrations and contextual analyses, the narrative evaluates implications for scalable, secure software engineering, underscoring Go’s role in contemporary cloud and generative AI landscapes.
Foundational Principles and Historical Context
Cameron Balahan and Marc Dougherty commence by delineating Go’s origins, conceived over 15 years ago at Google to reconcile productivity in dynamic languages with the robustness of compiled ones. Balahan articulates Go’s ethos: a language engineered for scalability from inception, addressing modern software architectures, operational environments, and collaborative teams. This premise manifests in three pillars: productivity through simplicity and readability; a holistic developer ecosystem spanning IDE to deployment; and production readiness emphasizing reliability, efficiency, and security.
Contextually, Go emerged amid Google’s challenges in maintaining vast systems, evolving into a cornerstone of cloud infrastructure. Dougherty highlights its adoption in pivotal technologies like Kubernetes and Docker, attributing this to inherent cloud-native features rather than retrofits. User satisfaction metrics, exceptionally high, reflect this alignment, with Go’s growth surpassing developer population trends.
The discourse transitions to version 1.24’s innovations, building on 1.23’s iterator additions and runtime telemetry. Balahan explains post-quantum cryptography integration, fortifying against quantum threats via hybrid key exchanges in TLS. This methodology combines classical and quantum-resistant algorithms, ensuring forward compatibility without immediate overhauls.
Type alias generics, now fully supported, enhance code modularity by permitting aliases with type parameters, facilitating incremental migrations in large codebases. Runtime optimizations, including profile-guided enhancements, reduce CPU overhead by 2-3%, optimizing garbage collection and scheduling for high-throughput scenarios.
Implications extend to enterprise adoption, where Go’s backward compatibility—unchanged since version 1.0—assures long-term stability, contrasting with languages prone to breaking changes.
AI Infrastructure and Generative Applications
Dougherty pivots to Go’s burgeoning role in AI, leveraging its concurrency model and efficiency for infrastructure like vector databases and serving frameworks. He posits Go’s simplicity as ideal for AI’s rapid evolution, where readable code withstands complexity.
Methodologies for AI workloads involve embedding models and vector stores, demonstrated via integrations with Gemini and Weaviate. Code samples illustrate query handling:
func handleQuery(query string) string {
    // Embed the query using Gemini
    embedding := gemini.Embed(query)
    // Retrieve matching documents from Weaviate via GraphQL
    docs := weaviate.Query(embedding)
    // Generate a grounded response from the retrieved documents
    return gemini.Generate(docs)
}
Frameworks like LangChain Go and Firebase Genkit abstract LLM and database interactions, promoting modularity. Genkit’s observability tools enhance debugging in production.
Contextually, Go’s provenance in cloud-native tools positions it for AI’s distributed nature, implying reduced latency in inference pipelines. Implications include seamless migrations amid technological shifts, bolstered by interfaces and embedding.
Future Directions and Community Ecosystem
Balahan outlines forthcoming enhancements in Go 1.25, emphasizing SIMD for vectorized operations crucial to AI optimizations. Multi-core advancements target non-uniform memory access, refining garbage collection for modern hardware.
Language polish focuses on generic flexibility, with community discussions on GitHub informing iterations. Compatibility remains sacrosanct, ensuring legacy code viability.
The ecosystem’s vitality—robust libraries for AI, vibrant meetups—underscores collaborative growth. Dougherty credits community contributions for Go’s relevance, implying sustained innovation through open-source synergy.
Analytically, these trajectories affirm Go’s adaptability, with implications for AI-driven economies where efficient, secure languages predominate.
[AWSReInvent2025] Revolutionizing DevSecOps: How Cathay Pacific Achieved 75% Faster Security with Agentic AI
Lecturer
Mike Markell is a Practice Manager for AWS Professional Services in Hong Kong, where he leads digital transformation and security initiatives for major enterprises across Asia. Naresh Sharma is a senior technology leader at Cathay Pacific Airways, overseeing the airline’s global application security and DevSecOps strategy. Tony Leong is a Senior Security Architect at Cathay, specialized in building AI-powered security tooling and integrating AppSec-as-Code into high-velocity deployment pipelines.
Abstract
In the highly regulated and high-stakes environment of global aviation, managing security across more than 4,000 annual deployments presents a massive operational challenge. This article details how Cathay Pacific Airways revolutionized its “security-first” culture by moving beyond traditional security scanning to a comprehensive DevSecOps model. The core methodology centers on the implementation of Agentic AI and a RAG-based (Retrieval-Augmented Generation) assistant to solve the industry’s “false positive crisis.” By deploying “AI-powered security champions” and customized scanning rules, Cathay achieved a 75% reduction in vulnerability remediation time and a 50% reduction in security operations costs. The analysis explores the technical and cultural shifts required to empower over 1,000 developers to become proactive security practitioners while maintaining the airline’s rapid pace of innovation.
Context: The Bottleneck of Manual Security Reviews
For a global leader like Cathay Pacific, the pace of digital innovation is essential for maintaining a competitive edge in the aviation industry. However, this speed was being severely hindered by the limitations of traditional security scanning tools. The primary conflict centered on a high noise-to-signal ratio, where approximately 78% of the vulnerabilities identified by standard tools were determined to be false positives. This created a crisis where security teams were overwhelmed by alerts, leading to significant delays in the deployment of features for the airline’s fleet.
Furthermore, the manual review process required to validate these alerts created significant friction between the security and development teams. Developers often viewed security requirements as a hurdle that slowed down their ability to deliver value, while security professionals struggled to keep up with the volume of code being produced. To overcome these challenges, Cathay needed a solution that could scale with their deployment frequency—which covers everything from customer-facing apps to critical flight operation systems—without compromising on the rigorous safety standards that define the brand.
Methodology: Implementing Shift-Left Security with AI
The solution implemented by Cathay Pacific and AWS Professional Services involved a comprehensive “shift-left” strategy, which integrates security at the very beginning of the software development lifecycle. The cornerstone of this methodology is the use of Agentic AI. Unlike traditional static scanners, these AI agents act as “security champions” that provide real-time, context-aware guidance to developers as they write code. This allows for the identification of security anti-patterns and the suggestion of defensive coding practices before the code is even committed to a repository.
Another critical component of the methodology is the AppSec-as-Code library. This centralized knowledge base translates complex security policies into programmatic requirements that can be automatically enforced within CI/CD pipelines. To make this information accessible to developers, the team developed a RAG-based (Retrieval-Augmented Generation) assistant. This tool allows developers to query internal security standards using natural language, receiving accurate and context-specific advice instantly. Finally, the team moved away from “out of the box” tool configurations in favor of highly customized scanning rules. This technical fine-tuning was essential for drastically reducing the false-positive rate and ensuring that the security team only focused on legitimate threats.
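In spirit, such an assistant follows the standard RAG loop: embed the question, retrieve the most relevant policy passages, and let the model answer only from those. The Java sketch below illustrates the shape of that loop; all interfaces are hypothetical stand-ins, not Cathay’s or AWS’s actual APIs:

import java.util.List;

interface Embedder { float[] embed(String text); }
interface VectorStore { List<String> topK(float[] queryVector, int k); }
interface Llm { String complete(String prompt); }

final class SecurityStandardsAssistant {

    private final Embedder embedder;
    private final VectorStore policyIndex;
    private final Llm llm;

    SecurityStandardsAssistant(Embedder embedder, VectorStore policyIndex, Llm llm) {
        this.embedder = embedder;
        this.policyIndex = policyIndex;
        this.llm = llm;
    }

    // Retrieval-augmented answer: ground the model in retrieved policy text.
    String ask(String question) {
        float[] vector = embedder.embed(question);
        List<String> passages = policyIndex.topK(vector, 5);
        String prompt = "Answer using only these security standards:\n"
                + String.join("\n---\n", passages)
                + "\n\nQuestion: " + question;
        return llm.complete(prompt);
    }
}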
Technical Analysis of Operational Gains
The implementation of AI-driven DevSecOps has yielded remarkable quantitative results for Cathay Pacific. The most significant outcome is a 75% reduction in the time required to remediate vulnerabilities. Because the AI agents filter out the vast majority of false positives and provide developers with clear, actionable fix suggestions, the entire security lifecycle has been compressed. Qualitatively, this has led to a 70% improvement in developer security capability, as the tools effectively serve as an automated, on-the-job training system that reinforces secure coding habits.
From a financial perspective, the automation of manual reviews and the reduction in wasted engineering time have led to a 50% cost reduction in security operations. The airline is now able to manage over 4,000 deployments annually with a higher level of confidence and lower overhead than was previously possible. A critical technical lesson learned during the journey was that “by default, no tool is perfect.” Success required a commitment to continuous customization and a willingness to collaborate with product vendors to tune their tools to the specific needs of the aviation industry. This iterative feedback loop was the key to moving from “human-in-the-loop” automation to a more efficient “AI-informed” model.
Consequences: A Cultural and Technical Transformation
The transformation at Cathay Pacific extended far beyond the technical architecture; it required a fundamental shift in the organization’s culture. The success of the project was predicated on a “can-do” spirit and the setting of ambitious targets that challenged the status quo. By providing developers with the tools to take ownership of security, the organization has fostered a culture where security is seen as a shared responsibility rather than an external constraint.
The implications for the global aviation and enterprise sectors are significant. Cathay has proven that it is possible to maintain a high-velocity deployment schedule in a safety-critical environment by leveraging the power of generative AI. Looking forward, the organization plans to develop even more insightful dashboards to provide security leaders with real-time visibility into the health of the application portfolio. The journey serves as a powerful testament to how Agentic AI can bridge the gap between agility and security, turning a potential bottleneck into a powerful competitive advantage.
[SpringIO2025] Panta rhei: runtime configuration updates with Spring Boot by Joris Kuipers
Lecturer
Joris Kuipers is the CTO and hands-on architect at Trifork Amsterdam, with 25 years of experience in software engineering, enterprise Java consulting, and architecture. Specializing in Spring Boot, he focuses on observability, JSON processing, and dynamic configuration. Kuipers is an active speaker at conferences like Spring I/O, contributing insights on production-ready applications and performance optimizations.
Abstract
This article explores Spring’s mechanisms for dynamic configuration reloads in Boot applications, enabling runtime updates without restarts. It delineates reloadable elements like logging configurations, @ConfigurationProperties beans, and @RefreshScope-annotated components. The analysis covers trigger mechanisms, supported property sources, and considerations for production deployment, including Kubernetes integrations and potential pitfalls.
Foundations of Dynamic Configuration in Spring
Configuration in Spring Boot applications is environment-specific, allowing a single build to adapt via external property sources like files, classpath resources, or remote servers. Traditionally read at startup, changes necessitate restarts, leading to downtime, loss of in-memory state (e.g., caches), and JVM warm-up delays, which can extend from seconds to hours for complex integrations.
Spring Cloud Context introduces reload capabilities, exposing writable actuator endpoints for ephemeral updates and refresh triggers for persistent sources. Posting to the /actuator/env endpoint rebinds @ConfigurationProperties beans and updates logging levels, though changes revert on restart. The /actuator/refresh endpoint, when triggered, reloads external configurations, rebinding properties without full context restarts.
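For example, assuming the application listens on localhost:8080 and the writable env endpoint is enabled, a property can be overridden and a refresh triggered from the shell (demo.message is an illustrative property name):

curl -X POST http://localhost:8080/actuator/env \
  -H 'Content-Type: application/json' \
  -d '{"name": "demo.message", "value": "updated"}'

curl -X POST http://localhost:8080/actuator/refresh

The refresh endpoint responds with the keys whose values changed.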
Demo applications illustrate this: a simple MVC controller injects mutable and immutable @ConfigurationProperties classes, demonstrating value updates via getters to ensure visibility.
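A minimal sketch of such a mutable properties class (the demo prefix and message field are illustrative):

@ConfigurationProperties(prefix = "demo")
public class DemoProperties {

    private String message = "Hello";

    // Read through the getter so callers observe rebound values
    public String getMessage() {
        return message;
    }

    public void setMessage(String message) {
        this.message = message;
    }
}

A controller that injects this bean and calls getMessage() on each request sees updated values after a rebind, whereas code that copied the field once at startup would not.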
Trigger Mechanisms and Reloadable Components
Reloads can be manual (POST to /actuator/refresh) or automated via change detection in property sources. @ConfigurationProperties beans rebind automatically, but direct field access in mutable classes may cache stale values—always use getters.
@RefreshScope proxies beans, destroying and recreating them on refresh, useful for stateful components like data sources. However, it incurs overhead and requires careful management to avoid disrupting dependencies.
Logging configurations reload dynamically, altering levels without restarts. @Value annotations, while injectable, do not rebind automatically unless scoped.
Code sketch for enabling refresh on a specific bean (Greeter is an illustrative component):

@SpringBootApplication
public class DemoApplication {

    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }

    @Bean
    @RefreshScope // destroyed and recreated on refresh, picking up new values
    Greeter greeter(DemoProperties properties) {
        return new Greeter(properties.getMessage());
    }
}
Supported Property Sources and Kubernetes Integrations
Property sources vary in reload support: file-based sources (e.g., application.properties) require manual triggers, while remote sources like Consul enable automatic detection via polling (e.g., every 30 seconds).
In Kubernetes, ConfigMaps and Secrets mount as files or environment variables. Spring Cloud Kubernetes Config Reload detects changes, triggering refreshes. Configuration involves enabling reload mode (e.g., polling) and setting intervals.
Example properties:
spring.cloud.kubernetes.config.enabled=true
spring.cloud.kubernetes.reload.enabled=true
spring.cloud.kubernetes.reload.mode=polling
spring.cloud.kubernetes.reload.period=30s
Delays in propagation (e.g., 30+ seconds) necessitate tuning to avoid partial updates.
Practical Considerations and Best Practices
Dynamic reloads suit credential rotations or feature flags but require securing actuators to prevent denial-of-service. Avoid Hikari for refresh-scoped data sources due to connection issues; alternatives like Tomcat JDBC work better.
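A sketch of that workaround, assuming standard Spring Boot DataSourceProperties binding (the exact wiring is illustrative, not code from the talk):

@Bean
@RefreshScope
public DataSource dataSource(DataSourceProperties properties) {
    // Build a Tomcat JDBC pool rather than Hikari, which misbehaves in refresh scope
    return properties.initializeDataSourceBuilder()
            .type(org.apache.tomcat.jdbc.pool.DataSource.class)
            .build();
}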
CRaC (Checkpoint/Restore) combines with reloads for fast startups with dynamic configs, but GraalVM is unsupported. Validate via /actuator/env and /actuator/configprops; test for binding errors.
In conclusion, runtime updates enhance availability and efficiency, demanding rigorous testing to mitigate risks like incomplete propagations.
[AWSReInventPartnerSessions2024] Simulate Amazon Q query for code generation
generated_code = f"def process_data(data):\n return sorted(data)\n"
return generated_code
def test_code(code, test_cases):
exec(code)
for case in test_cases:
input_data, expected = case
result = process_data(input_data)
if result != expected:
return False
return True
[NDCMelbourne2025] Front End Testing with GitHub Actions – Amy Kapernick
In a dynamic session at NDC Melbourne 2025, Amy Kapernick, a seasoned front-end developer and advocate for automation, unveils a streamlined approach to front-end testing using GitHub Actions. With a focus on practicality, Amy guides developers through constructing a robust continuous integration and continuous deployment (CI/CD) pipeline, ensuring that front-end tests run seamlessly against live websites. Her presentation underscores the necessity of automation to maintain quality in web development, offering actionable insights for teams seeking to integrate testing into their workflows without manual intervention.
The Imperative of Front-End Testing
Amy begins by highlighting the unique challenges of front-end testing, emphasizing that unlike unit tests, which can operate with dummy data, front-end tests require a live, functioning website to evaluate real-world performance. For instance, assessing accessibility for visually impaired users or determining page load speeds demands an environment that mirrors production. Amy illustrates this with a CSS code snippet, questioning whether it can reveal unintended style bleed or performance bottlenecks without a live interface. By advocating for environments as close to production as possible, she ensures that tests yield accurate, actionable results, setting the stage for automation to eliminate manual testing inconsistencies.
Automating with GitHub Actions
The core of Amy’s approach lies in leveraging GitHub Actions to automate front-end testing within a CI/CD pipeline. She explains that GitHub Actions’ workflows, defined in YAML files, enable developers to trigger tests on specific events, such as pull requests to a production branch. Amy walks through creating a workflow with jobs like “build” and “test,” detailing steps such as checking out repository code, setting up Node.js, and installing dependencies. By using existing GitHub Actions packages, like those for checking out code and configuring Node, she simplifies the process, ensuring tests run consistently without manual effort. This automation, Amy notes, prevents code merges that fail tests, safeguarding application quality.
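A minimal workflow along these lines might look as follows; the branch name, Node version, and npm scripts are illustrative and depend on the project:

name: front-end-tests
on:
  pull_request:
    branches: [production]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npm run build
  test:
    needs: build
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npm test

Saved under .github/workflows/, this file runs both jobs automatically on every pull request targeting the production branch.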
Deploying and Testing Live Websites
A pivotal aspect of Amy’s workflow involves deploying a live website for testing, using Netlify for its ease and deploy preview capabilities. She demonstrates a custom bash script to deploy to Netlify, addressing challenges like handling sensitive data, such as site IDs, which GitHub Actions may flag as secrets. Amy ingeniously encodes the deployment URL to bypass security restrictions, decoding it for testing with tools like Lighthouse and Playwright. These tools provide comprehensive reports on performance and UI functionality, respectively, which Amy configures to upload as artifacts, ensuring developers can review results and address issues before merging code.
Enhancing Workflows with Additional Automation
Beyond testing, Amy showcases GitHub Actions’ versatility by integrating a package that converts code comments into GitHub issues, ensuring tasks like “fix later” are tracked. This automation assigns issues to the code’s author and auto-closes them when resolved, streamlining project management. Amy also touches on other uses, such as linting, checking broken links, and generating assets like static tweet images for blog posts. These examples highlight how GitHub Actions can extend beyond testing to enhance overall development efficiency, making it a powerful tool for modern workflows.
[VoxxedDaysTicino2026] Why Hexagonal and Onion Architectures Are Answers to the Wrong Question
Lecturer
Oliver Drotbohm is a senior principal software engineer at Broadcom, formerly part of the Spring engineering team at VMware for over 15 years. He has contributed significantly to Spring Data, focusing on repository abstractions, and more recently on architectural topics like Spring Modulith. Oliver is an advocate for modular software design and has spoken extensively on domain-driven design and system architecture. Relevant links include his GitHub profile (https://github.com/odrotbohm) and X account (https://x.com/odrotbohm).
Abstract
This article delves into Oliver Drotbohm’s critique of popular separation-of-concerns architectures like hexagonal and onion models, arguing they often prioritize decoupling over cohesion, leading to suboptimal code structures. Through theoretical analysis and practical examples, it explores software design principles rooted in coupling, cohesion, and anticipated changes. The discussion highlights trade-offs, the role of tools like Spring Modulith, and advocates for functional decomposition to achieve maintainable systems.
Fundamental Principles of Software Design and Cost
Software development’s primary cost lies in modifications rather than initial creation, as most efforts involve evolving existing systems. Oliver draws from Kent Beck’s interpretation of Edward Yourdon and Larry Constantine’s work, positing that change costs are tied to coupling between code elements—classes, packages, or modules. Coupling manifests when alterations in one area necessitate changes elsewhere, with its nature depending on the change type (e.g., database migration versus UI enhancement).
To minimize costs, decoupling is essential, but it introduces its own overhead, such as additional interfaces and implementations. Oliver emphasizes cohesion as the counterbalance: intentional coupling in appropriate places to collocate elements for common changes. Effective design creates cohesive units loosely coupled to others, essentially betting on future change patterns.
Systems emerge not just from parts but their interactions, per Russell Ackoff’s philosophy. Thus, design involves defining cohesive elements, promoting entry points, and establishing connections. This foundational view critiques architectures that overlook these dynamics.
Critique of Separation-of-Concerns Architectures
Hexagonal architecture, coined by Alistair Cockburn in 2005, centers domain logic, shielding it from infrastructure via ports and adapters. Ports are neutral entry points, with adapters depending on them, inverting dependencies to isolate the core. Onion architecture, by Jeffrey Palermo in 2008, similarly layers domain, application, and infrastructure, omitting explicit ports but maintaining inward dependencies.
Oliver compares these to layered architectures from domain-driven design (DDD), where presentation, business, and persistence layers interact, with business potentially depending on infrastructure. The key difference is dependency inversion, but terminology shifts obscure similarities. Spring applications historically embodied this: controllers (presentation/adapters), services (business/ports), repositories (persistence).
Yet, these models often yield complexity, with excessive folders for adapters, contradicting their goal of clarifying business logic. Oliver argues they answer the wrong question—focusing on technical separation rather than business cohesion.
Alternative Approaches: Prioritizing Cohesion and Encapsulation
Instead of technical layers as primary packages or modules, Oliver advocates functional decomposition, grouping by business slices (e.g., order, customer). This avoids scattering related elements across layers, much like organizing furniture by the room in which it is used rather than piling all chairs together by type.
In code, vertical slices encapsulate internals: controllers, services, repositories within one package, hidden by default visibility. Public interfaces expose only necessary surfaces, enhancing encapsulation over mere organization. For simple read-only scenarios, direct JDBC in controllers suffices, hidden from outsiders. Complex slices warrant internal abstractions.
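A minimal Java sketch of such a slice follows; the types are illustrative and compressed into one listing, whereas in a real project each would sit in its own file within the same package:

package com.example.shop.order;

// The slice's only public surface: the sole type other packages may reference.
public interface OrderManagement {
    long placeOrder(String productCode);
}

// Package-private implementation, invisible outside the order slice.
class DefaultOrderManagement implements OrderManagement {

    private final OrderRepository repository = new OrderRepository();

    @Override
    public long placeOrder(String productCode) {
        return repository.save(productCode);
    }
}

// Package-private persistence detail, likewise hidden from other slices.
class OrderRepository {
    long save(String productCode) {
        return productCode.hashCode(); // stand-in for real persistence
    }
}

Because only OrderManagement is public, the compiler itself enforces the slice boundary, with no architectural test needed for the simple cases.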
This yields cohesive bases automatically decoupled, as inter-slice connections are minimal. Intrinsic complexity dictates accidental complexity—pragmatic layering per slice. Tools like Spring Modulith enforce this, using annotations for stereotypes (e.g., @AggregateRoot), integrating with IDEs to display architectural concepts beyond packages.
Trade-Offs, Tools, and Implications
Trade-offs include initial simplicity versus future adaptability: technical packages force public interfaces, risking violations, while functional ones hide more, supporting evolution. Oliver notes that developers’ inclination toward technical structures stems from IDEs’ technology-centric views (e.g., src/main/java), which lack higher-level abstractions.
Spring Modulith and ArchUnit address this, verifying rules and visualizing modules. IDE integrations (VS Code, Eclipse) convey code via design stereotypes, reducing package reliance.
Implications favor cohesion-first design: functional decomposition aligns with change patterns, reducing scattering. It supports DDD aggregates and repositories naturally, without mandating architectures. As systems grow, this prevents tangled messes from entangled business logic, not just direct database calls.
In conclusion, reframing the question from decoupling infrastructure to fostering cohesive, change-resilient structures yields maintainable software, leveraging tools for integrity.
[AWSReInforce2025] Redefining cybersecurity for modern threats with Armis Centrix (NIS122)
Lecturer
Steve Clark serves as Director of Cloud Alliances at Armis, orchestrating partnerships that extend cyber exposure management across cloud and edge environments. His expertise centers on asset intelligence platforms that provide real-time visibility into managed, unmanaged, and IoT devices.
Abstract
The presentation positions Armis Centrix as a cloud-native platform for comprehensive asset protection, demonstrating integration with AWS services to identify, prioritize, and remediate risks across the attack surface. Through customer examples in transportation, healthcare, and aviation, it establishes proactive exposure management as essential for modern threat defense.
Asset Discovery Beyond Traditional Boundaries
Modern environments contain thousands of unmanaged devices—IoT sensors, medical equipment, building controllers—that evade conventional inventory tools. Armis Centrix discovers assets through passive traffic analysis and active querying:
Network Traffic → Behavioral Fingerprint → Device Classification → Risk Scoring Engine
The platform identifies device type, manufacturer, firmware version, and operational context without requiring agents.
Risk Prioritization and Business Context
Raw asset data becomes actionable intelligence through contextual scoring:
{
  "device": "GE MRI Scanner",
  "vulnerabilities": ["CVE-2023-4567"],
  "connectivity": "Internet-facing",
  "business_unit": "Radiology",
  "priority_score": 9.8
}
Integration with ServiceNow CMDB enriches discovery with ownership and criticality metadata, enabling precise remediation workflows.
Integration Patterns with AWS Services
Armis ingests VPC Flow Logs and GuardDuty findings to extend visibility:
connectors:
- aws_vpc_flow_logs
- aws_guardduty
- servicenow_cmdb
- palo_alto_firewall
EventBridge rules trigger automated responses—quarantining compromised IoT devices, creating Jira tickets, or notifying device owners.
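The pattern syntax below is standard EventBridge content filtering; the source and detail-type values are illustrative, not Armis’s published event schema:

{
  "source": ["armis.centrix"],
  "detail-type": ["Device Risk Threshold Exceeded"],
  "detail": {
    "priority_score": [{ "numeric": [">", 9] }]
  }
}

A rule with this pattern could route high-risk device events to a quarantine Lambda function or a ticketing integration.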
Real-World Deployment Outcomes
Case studies demonstrate operational impact:
- Transportation Provider: Discovered 40% more assets than ServiceNow inventory; achieved regulatory compliance ahead of DoT mandates
- Healthcare System: Reduced mean time to patch critical medical devices from 90 to 14 days
- Airport Authority: Identified rogue Wi-Fi access points and unauthorized Bluetooth beacons
These organizations leverage Armis within AWS environments, processing petabytes of traffic data with sub-second query response.
Proactive Exposure Management Framework
The platform implements continuous assessment:
- Discovery: Passive and active techniques
- Classification: ML-based device fingerprinting
- Risk Scoring: CVSS + business context
- Remediation: Automated playbooks and orchestration
- Verification: Continuous validation of control efficacy
This cycle operates 24/7, adapting to asset churn and emerging threats.
Conclusion: Comprehensive Asset Protection
Armis Centrix transforms asset visibility from periodic audits into real-time intelligence. By combining passive discovery, behavioral analysis, and AWS integration, organizations gain comprehensive protection across IT, OT, and IoT environments. The platform enables security teams to move from reactive incident response to proactive risk elimination.