Posts Tagged ‘VoxxedDaysTicino2026’

[VoxxedDaysTicino2026] The Past, Present, and Future of Programming Languages

Lecturer

Kevlin Henney is an independent consultant, trainer, and author specializing in software architecture, programming paradigms, and agile practices. He has contributed to numerous books, including “97 Things Every Programmer Should Know,” and is a frequent speaker at international conferences. Kevlin’s work spans decades, influencing developers through his insights on language evolution and design patterns. Relevant links include his X account (https://x.com/kevlinhenney) and Mastodon (https://mastodon.social/@kevlinhenney).

Abstract

This article analyzes Kevlin Henney’s exploration of programming languages’ historical trajectory, current state, and prospective developments. It dissects paradigms, influences, and biases shaping language adoption, emphasizing slow evolution despite rapid technological hype. Through data-driven analysis and historical anecdotes, it underscores the dominance of 20th-century languages, the assimilation of functional features into mainstream ones, and AI’s reinforcing role, offering implications for future trends.

Historical Foundations and Paradigm Shifts

Programming languages bridge hardware and human cognition, embodying philosophies for structuring thoughts and systems. Kevlin traces their origins to the 1950s, with Fortran as an experimental compiler challenging beliefs that high-level languages couldn’t match assembly efficiency. John Backus’s team at IBM proved otherwise, unleashing a “virus” that normalized compilation.

In his 1977 Turing Award lecture, Backus asked whether programming could be liberated from the “von Neumann style”: imperative models built around memory cells, jumps, and assignment. He advocated functional styles with algebras of programs, using the word “style” a year before Robert Floyd’s 1978 lecture formalized the notion of paradigms. Paradigms, a term borrowed from other disciplines, frame how we approach programming: imperative, functional, logic.
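
The contrast is easy to show in a mainstream language. A minimal Java sketch, added here for illustration: the loop updates a storage cell by repeated assignment, in the von Neumann mold, while the reduction expresses the same computation as a single expression.

```java
import java.util.List;

public class Sum {
    public static void main(String[] args) {
        List<Integer> values = List.of(1, 2, 3, 4);

        // Von Neumann style: a mutable cell updated by repeated assignment.
        int total = 0;
        for (int v : values) {
            total += v;
        }

        // Functional style: one expression built from a reduction,
        // with no visible storage or jumps.
        int sum = values.stream().reduce(0, Integer::sum);

        System.out.println(total + " " + sum); // 10 10
    }
}
```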

Historical influences abound; Algol 68, despite limited adoption, pioneered constructs like if-then-else as expressions, impacting modern syntax. Kevlin highlights languages’ slow pace: mainstream ones still integrate decades-old ideas, with developers embracing “new” features older than themselves.
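
As a concrete illustration (not from the talk), the Algol 68 idea surfaces in modern Java twice: the ternary operator and, much later, the switch expression both treat a conditional as something that yields a value.

```java
public class Expressions {
    enum Category { SMALL, MEDIUM, LARGE }

    public static void main(String[] args) {
        int x = -7;
        // Conditional as an expression: it yields a value directly,
        // the construct Algol 68 pioneered.
        int abs = (x < 0) ? -x : x;

        Category category = Category.MEDIUM;
        // Java only gained an expression form of switch with Java 14 (2020),
        // roughly five decades after the idea appeared.
        String size = switch (category) {
            case SMALL -> "fits in cache";
            case MEDIUM -> "fits in memory";
            case LARGE -> "needs streaming";
        };
        System.out.println(abs + ", " + size);
    }
}
```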

This context reveals languages as ecosystems defining skills, communities, and loyalties, evolving gradually amid technological progress.

Current Landscape: Dominance and Biases

Contemporary rankings such as TIOBE and RedMonk illustrate the stasis. TIOBE’s January 2026 top ten has Python in the lead, followed by C, Java, C++, and others, all 20th-century languages except Go. The distribution is heavily skewed: the top five alone account for nearly 60% of the measured activity.

RedMonk, whose methodology draws on Stack Overflow and GitHub, elevates TypeScript but confirms the 20th-century prevalence. Even the languages with official gRPC support skew vintage. Kevlin notes a common statistical misconception: a top-ten list reads as if adoption were evenly spaced, but the underlying distribution follows a power law, which amplifies incumbents.

Biases perpetuate this: legacy code bases shape both employment and language evolution, and languages borrow features (e.g., lambdas, an idea from the 1930s lambda calculus) to retain users. Java gained lambdas in 2014, after C++; JavaScript popularized them; Lisp had them around 1960.
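
To see the borrowing concretely, a minimal Java sketch (illustrative, not from the talk): the anonymous inner class was the pre-2014 idiom for passing behavior, and the lambda is its direct replacement.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class Lambdas {
    public static void main(String[] args) {
        List<String> names = new ArrayList<>(List.of("Ada", "Grace", "Alan"));

        // Pre-2014 Java: an anonymous inner class to pass behavior.
        names.sort(new Comparator<String>() {
            @Override
            public int compare(String a, String b) {
                return Integer.compare(a.length(), b.length());
            }
        });

        // Java 8 (2014): the same behavior as a lambda.
        names.sort((a, b) -> Integer.compare(a.length(), b.length()));

        System.out.println(names);
    }
}
```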

Paradigms blend: few pure functional languages appear in the top twenty; most hybridize, raiding functional concepts (lambdas, map-reduce) without adopting the paradigm wholesale. SQL, a declarative language in the logic tradition rather than the functional one, shows that declarative does not mean functional; its queries can nevertheless be rewritten as comprehensions in Python or Haskell.
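
The talk names Python and Haskell; as an illustration, the same query shape also translates to a Java stream pipeline. The Order record below is hypothetical.

```java
import java.util.List;

public class Query {
    record Order(String customer, double total) {}

    public static void main(String[] args) {
        List<Order> orders = List.of(
                new Order("alice", 120.0),
                new Order("bob", 40.0),
                new Order("alice", 80.0));

        // SELECT customer FROM orders WHERE total > 50
        List<String> bigSpenders = orders.stream()
                .filter(o -> o.total() > 50)   // WHERE
                .map(Order::customer)          // SELECT
                .toList();

        System.out.println(bigSpenders); // [alice, alice]
    }
}
```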

Excel is, per Simon Peyton Jones, the most popular functional language, and the LAMBDA function added in 2020 (now also in Google Sheets) brought the lambda calculus into its formula language. This assimilation dilutes the paradigm: functional programming peaked as a movement a decade ago, its ideas absorbed into the mainstream.

AI’s Influence on Language Evolution

Artificial intelligence reinforces biases. Early Lisp dominance in symbolic AI gave way to neural networks and machine learning in the 1980s-1990s. Modern LLMs, statistical at core, excel in languages with abundant data: JavaScript, Python, TypeScript.

Anders Hejlsberg observes that an AI’s proficiency in a language is proportional to that language’s exposure in the training data, which disadvantages new languages. LLMs default to the mainstream, for instance writing Python to count the letter ‘r’ in “strawberry”, orchestrating code where their own reasoning falters.

The implication is double-edged: AI makes the choice of language feel “irrelevant” while making it more consequential, because the defaults are biased toward past dominants. Orchestrated output (e.g., Gemini writing Python) joins the statistical record that future models learn from, perpetuating the incumbents.

Future Trajectories and Constraints

Predictions defy certainty, but the trends suggest continuity. Change arrives more slowly than expectation suggests; quantum computing remains niche, likely irrelevant to mainstream programming for decades.

Functional programming will not come to dominate; von Neumann-style imperative languages persist. AI amplifies the long tail by making language creation easier, but the core stabilizes. New notations could enable new thinking, as Richard Feynman suggested, yet the comfort of sharing existing notations favors the status quo.

William Faulkner’s line, “The past is never dead. It’s not even past,” encapsulates the talk: legacies endure, shaped by data, communities, and AI.

In conclusion, languages evolve slowly, assimilating ideas while incumbents dominate, and AI entrenches this pattern even as it enables a proliferation of niche languages.


[VoxxedDaysTicino2026] Why Hexagonal and Onion Architectures Are Answers to the Wrong Question

Lecturer

Oliver Drotbohm is a senior principal software engineer at Broadcom, formerly part of the Spring engineering team at VMware for over 15 years. He has contributed significantly to Spring Data, focusing on repository abstractions, and more recently on architectural topics like Spring Modulith. Oliver is an advocate for modular software design and has spoken extensively on domain-driven design and system architecture. Relevant links include his GitHub profile (https://github.com/odrotbohm) and X account (https://x.com/odrotbohm).

Abstract

This article delves into Oliver Drotbohm’s critique of popular separation-of-concerns architectures like hexagonal and onion models, arguing they often prioritize decoupling over cohesion, leading to suboptimal code structures. Through theoretical analysis and practical examples, it explores software design principles rooted in coupling, cohesion, and anticipated changes. The discussion highlights trade-offs, the role of tools like Spring Modulith, and advocates for functional decomposition to achieve maintainable systems.

Fundamental Principles of Software Design and Cost

Software development’s primary cost lies in modifications rather than initial creation, as most efforts involve evolving existing systems. Oliver draws from Kent Beck’s interpretation of Edward Yourdon and Larry Constantine’s work, positing that change costs are tied to coupling between code elements—classes, packages, or modules. Coupling manifests when alterations in one area necessitate changes elsewhere, with its nature depending on the change type (e.g., database migration versus UI enhancement).

To minimize costs, decoupling is essential, but it introduces its own overhead, such as additional interfaces and implementations. Oliver emphasizes cohesion as the counterbalance: intentional coupling in appropriate places to collocate elements for common changes. Effective design creates cohesive units loosely coupled to others, essentially betting on future change patterns.
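
A small Java sketch with hypothetical names illustrates the trade-off: the interface protects OrderService from changes in the delivery mechanism, but the protection costs an extra type and an extra indirection.

```java
// Directly coupled: any change to EmailSender's API ripples into OrderService.
class EmailSender {
    void sendEmail(String to, String body) { /* SMTP details */ }
}

// Decoupled behind an interface: OrderService no longer changes when the
// delivery mechanism does, but the price is an extra type and indirection.
interface Notifier {
    void notify(String recipient, String message);
}

class EmailNotifier implements Notifier {
    private final EmailSender sender = new EmailSender();

    @Override
    public void notify(String recipient, String message) {
        sender.sendEmail(recipient, message);
    }
}

class OrderService {
    private final Notifier notifier;

    OrderService(Notifier notifier) {
        this.notifier = notifier;
    }

    void placeOrder(String customer) {
        // ...domain logic...
        notifier.notify(customer, "Order confirmed");
    }
}
```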

Systems emerge not just from their parts but from the interactions between those parts, per Russell Ackoff’s systems thinking. Design therefore involves defining cohesive elements, promoting their entry points, and establishing the connections between them. This foundational view sets up the critique of architectures that overlook these dynamics.

Critique of Separation-of-Concerns Architectures

Hexagonal architecture, coined by Alistair Cockburn in 2005, centers domain logic, shielding it from infrastructure via ports and adapters. Ports are neutral entry points, with adapters depending on them, inverting dependencies to isolate the core. Onion architecture, by Jeffrey Palermo in 2008, similarly layers domain, application, and infrastructure, omitting explicit ports but maintaining inward dependencies.
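
Reduced to code, the pattern is about dependency direction. A minimal sketch with hypothetical names: the domain owns the port, and the adapter on the outside implements it.

```java
// Domain core: owns the port and knows nothing about infrastructure.
record Order(String id) {}

interface OrderRepository {                        // outbound port
    Order findById(String id);
}

class OrderService {                               // domain logic
    private final OrderRepository repository;

    OrderService(OrderRepository repository) {
        this.repository = repository;
    }

    Order loadOrder(String id) {
        return repository.findById(id);
    }
}

// Infrastructure: the adapter depends inward on the port, never the reverse.
class JdbcOrderRepository implements OrderRepository {
    @Override
    public Order findById(String id) {
        // JDBC access would live here.
        return new Order(id);
    }
}
```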

Oliver compares these to layered architectures from domain-driven design (DDD), where presentation, business, and persistence layers interact, with business potentially depending on infrastructure. The key difference is dependency inversion, but terminology shifts obscure similarities. Spring applications historically embodied this: controllers (presentation/adapters), services (business/ports), repositories (persistence).

Yet, these models often yield complexity, with excessive folders for adapters, contradicting their goal of clarifying business logic. Oliver argues they answer the wrong question—focusing on technical separation rather than business cohesion.

Alternative Approaches: Prioritizing Cohesion and Encapsulation

Instead of making technical layers the primary packages or modules, Oliver advocates functional decomposition, grouping code by business slice (e.g., order, customer). This avoids scattering related elements across layers, which is the equivalent of organizing furniture by type, all the chairs in one place, rather than by the room it serves.

In code, vertical slices encapsulate their internals: controllers, services, and repositories live in one package, hidden via default (package-private) visibility. Public types expose only the necessary surface, turning packages into units of encapsulation rather than mere organization. For simple read-only scenarios, direct JDBC access in a controller suffices, since it stays hidden from the outside. More complex slices warrant internal abstractions.
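
Sketched in Java with hypothetical names, a slice keeps its internals package-private and exposes a single public surface:

```java
// file: example/order/OrderManagement.java
package example.order;

// The slice's only public type: the surface other slices may use.
public class OrderManagement {

    private final OrderRepository repository = new OrderRepository();

    public String describe(String orderId) {
        return "Order " + repository.load(orderId);
    }
}

// file: example/order/OrderRepository.java
package example.order;

// Default (package-private) visibility: invisible outside example.order,
// so no other slice can couple to it, by construction.
class OrderRepository {

    String load(String orderId) {
        // JDBC or JPA access would live here, as an internal detail.
        return orderId;
    }
}
```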

This yields cohesive code bases that are decoupled almost as a side effect, since inter-slice connections stay minimal. Each slice’s intrinsic complexity dictates how much accidental complexity it takes on: layering is applied pragmatically, per slice. Tools like Spring Modulith enforce the approach, using stereotype annotations (e.g., @AggregateRoot) and integrating with IDEs to surface architectural concepts beyond raw packages.
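
Spring Modulith can back this with an executable test. A sketch assuming the conventional setup, in which each direct subpackage of the application’s main package is treated as a module:

```java
import org.junit.jupiter.api.Test;
import org.springframework.modulith.core.ApplicationModules;

class ModularityTests {

    @Test
    void verifiesModuleStructure() {
        // Application stands in for the (hypothetical) Spring Boot
        // application class. verify() fails on cyclic dependencies between
        // modules and on references to another module's internal types.
        ApplicationModules modules = ApplicationModules.of(Application.class);
        modules.verify();
    }
}
```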

Trade-Offs, Tools, and Implications

Trade-offs pit initial simplicity against future adaptability: technical packages force types to be public, inviting architecture violations, while functional packages hide more and so support evolution. Oliver attributes developers’ inclination toward technical structures to IDEs’ technology-centric views (e.g., src/main/java), which lack higher-level abstractions.

Spring Modulith and ArchUnit address this, verifying rules and visualizing modules. IDE integrations (VS Code, Eclipse) convey code via design stereotypes, reducing package reliance.
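
ArchUnit can express a comparable rule directly; a sketch, assuming slices live in subpackages of a hypothetical example root package:

```java
import com.tngtech.archunit.core.domain.JavaClasses;
import com.tngtech.archunit.core.importer.ClassFileImporter;
import org.junit.jupiter.api.Test;

import static com.tngtech.archunit.library.dependencies.SlicesRuleDefinition.slices;

class SliceRulesTest {

    @Test
    void slicesStayDecoupled() {
        // "example" is a hypothetical root package; each direct subpackage
        // (order, customer, ...) is treated as one business slice.
        JavaClasses classes = new ClassFileImporter().importPackages("example");

        slices().matching("example.(*)..")
                .should().beFreeOfCycles()
                .check(classes);
    }
}
```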

Implications favor cohesion-first design: functional decomposition aligns the code with expected change patterns and reduces scattering. It accommodates DDD aggregates and repositories naturally, without mandating a particular architecture. As systems grow, it prevents the tangled mess that comes from entangled business logic, not merely from direct database calls.

In conclusion, reframing the question from decoupling infrastructure to fostering cohesive, change-resilient structures yields maintainable software, leveraging tools for integrity.


[VoxxedDaysTicino2026] Backlog.md: The Simplest Project Management Tool for the AI Era

Lecturer

Alex Gavrilescu is a full-stack developer with extensive experience in .NET and Vue.js technologies. He has been actively involved in software development for many years and shifted his focus toward artificial intelligence in 2025. Alex developed Backlog.md as a side project starting at the end of May 2025, while maintaining a full-time role in the casino industry. He shares insights through blog articles on platforms like LinkedIn and X (formerly Twitter). Relevant links include his LinkedIn profile (https://www.linkedin.com/in/alex-gavrilescu/) and X account (https://x.com/alexgavrilescu).

Abstract

This article examines Alex Gavrilescu’s presentation on his journey in AI-assisted software development and the creation of Backlog.md, a terminal-based project management tool designed to enhance predictability and structure in workflows involving AI agents. Drawing from personal experiences, the discussion analyzes the evolution from unstructured prompting to a systematic approach, emphasizing task decomposition, context management, and delegation modes. It explores the tool’s features, limitations, and implications for spec-driven AI development, highlighting how such methodologies foster deterministic outcomes in non-deterministic AI environments.

Context of AI Integration in Development Workflows

In the evolving landscape of software engineering, the integration of artificial intelligence agents has transformed traditional practices. Alex begins by contextualizing his experiences, noting the shift from basic code completions in integrated development environments (IDEs) like Visual Studio’s IntelliSense, which relied on simple machine learning or pattern matching, to more advanced tools. The advent of models like ChatGPT allowed developers to query and incorporate code snippets, reducing friction but still requiring manual transfers.

The introduction of GitHub Copilot marked a significant advancement, embedding AI directly into IDEs for contextual queries and modifications. However, the true leap came with agent modes, where AI operates in a loop, utilizing tools and gathering context autonomously until task completion. Alex distinguishes between “steer mode,” where developers iteratively guide AI through prompts and approvals, and “delegate mode,” where comprehensive instructions are provided upfront for independent execution. His focus leans toward delegation, aiming for reliable outcomes without constant intervention.

This context is crucial because AI models are inherently non-deterministic, yielding varied results from identical prompts. Alex draws parallels to human collaboration, where structured information, clarifying the “why,” “what,” and “how,” ensures success. He references practices like Gherkin scenarios (given-when-then) but simplifies them to acceptance criteria and definitions of done, adapting them for AI efficiency. Early challenges, such as the limited context windows of the models available in May 2025, necessitated breaking work into small tasks to avoid information loss during context compaction.

The implications are profound: unstructured AI use often leads to abandonment, as complexity escalates failure rates. Alex classifies developers into categories like “vibe coders” (improvisational prompting without code review) and “AI product managers” (structured delegation with final reviews), illustrating how his journey from near-abandonment to 95% success stemmed from imposing structure.

Development and Features of Backlog.md

Backlog.md emerged as Alex’s solution to the limitations of manual task structuring. Initially, he created tasks in Markdown files, logging them in Git repositories for sharing and history. This allowed referencing between tasks, scoping to prevent derailment, and assigning tasks to specialized agents (e.g., Opus for UI, Codex for backend). By avoiding database or API dependencies, agents could directly read files, enhancing efficiency.

The tool formalizes this into a command-line interface (CLI) resembling Git commands: backlog task create, edit, list. Tasks are stored as Markdown with a front-matter section for metadata (title, ID, dependencies, status). Sections include “why” for problem context, acceptance criteria with checkboxes for self-verification, implementation plans generated by agents, and notes/summaries for pull request descriptions.
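
To make the task shape concrete, the following hypothetical Java record mirrors the front-matter fields the talk describes; it is an illustration, not Backlog.md’s actual schema.

```java
import java.util.List;

// Hypothetical model of a task's front matter, for illustration only;
// Backlog.md itself stores this as metadata atop a Markdown file.
record Task(
        String id,                  // a tool-assigned task identifier
        String title,
        List<String> dependencies,  // "relates to" / "blocked by" links
        Status status) {

    enum Status { TODO, IN_PROGRESS, DONE }
}
```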

Backlog.md supports subtasks, dependencies (e.g., “relates to” or “blocked by”), and a web interface for easier editing, including rich text and dark mode. It operates offline, uses Git for synchronization across branches, and avoids conflicts by leveraging repository permissions for security. Notably, 99% of its code was AI-generated, with Alex reviewing initial tasks, demonstrating the tool’s recursive utility.

Limitations include no direct task initiation from the interface, self-hosting requirements, single-repo support, experimental documentation/decisions sections, and absent integrations like GitHub Issues or Jira. As a solo side project, it lacks production-grade support, but welcomes community contributions via issues or pull requests.

In practice, Alex showcases Backlog.md in a live demo for spec-driven development. Starting with a product requirements document (PRD) generated by an agent like Claude, tasks are decomposed. Implementation plans are reviewed per task to adapt to changes, ensuring accuracy. Sub-agents orchestrate parallel planning, with human checkpoints at description, plan, and code stages.

Methodological Implications for Spec-Driven AI Development

Spec-driven AI development, as outlined, requires clear intent expression before execution. Backlog.md facilitates this by breaking projects into manageable tasks, delegating to agents for research, planning, and coding. A feedback loop refines agent instructions, specs, and processes.

Alex’s workflow begins with PRD creation, followed by task decomposition adhering to Backlog.md guidelines. Agents generate plans only upon task start, preventing obsolescence. For a task-scheduling feature, he demonstrates PRD prompting, task creation, and sub-agent orchestration for plans, emphasizing acceptance criteria for verification.

The methodology promotes one-task-per-context-window sessions, referencing summaries to avoid bloat. Definitions of done, global across projects, enforce testing, linting, and security checks. This counters “vibe coding’s” directional uncertainty, ensuring guardrails like unit tests prevent premature completion claims.

Implications extend to project readiness: documentation for agent onboarding mirrors human processes, with skills, code styles, and self-verification loops enhancing efficiency. Alex references a Factory.ai article on AI-ready maturity levels, underscoring documentation’s role.

Challenges persist in UI verification, requiring human QA, and complex integrations. Yet, the approach allows iterations without full restarts, leveraging cheap tokens for refinements.

Consequences and Future Directions

Backlog.md’s simplicity yields repeatability, boosting success from 50% (slot-machine-like prompting) to 95%. By structuring delegation, it mitigates AI’s non-determinism, fostering predictable workflows. Consequences include democratized AI use—no prior experience needed beyond basic Git—potentially broadening adoption.

For teams, Git synchronization enables collaboration, though self-hosting limits non-technical access. Future enhancements might include multi-repo support, integrations, and improved documentation, driven by its 4,600 GitHub stars and community feedback.

Broader implications question AI’s role: accepting “good enough” results accelerates development, but human input remains vital for steering and verification. As models improve (e.g., Opus 5.6’s million-token window), tools like Backlog.md evolve, but foundational structure endures.

In conclusion, Alex’s tool and methodology exemplify pragmatic AI integration, balancing innovation with reliability in an era where agents redefine development.
