Archive for the ‘General’ Category
[DotJs2025] Durable Executions for Mortals
Backend’s bedrock—state’s stewardship, asynchrony’s aegis—once consigned coders to queues’ quagmires, yet React’s reactive rite reimagines this realm. Charly Poly, developer marketer at Inngest, advocated durable executions at dotJS 2025, transmuting frontend’s fluency into fault-tolerant flows. A frontend aficionado attuned to async’s arcana, Charly posited workflows as web’s warp: events’ echoes, states’ sagas—sans system’s scutwork.
Charly’s chronicle commenced with React’s renaissance: beyond templates’ tapestry, a triad taming temporality—events’ ingress, data’s domicile, UI’s unison. Backend’s ballad parallels: requests’ reception, persistence’s peril, orchestration’s odyssey. Inngest’s insight: functions as filaments, durable by decree—stepwise sagas, state salved, failures finessed. TypeScript’s temperance: inngest.createFunction({steps: ['ship', 'email']}), waits weaving webhooks—shipment’s vigil, seven-day sentinel.
This tapestry tempers toil: throttling’s thrum, rate’s restraint—web’s whims writ large. Charly contrasted: Temporal’s toils versus Inngest’s intimacy—events’ essence, JS’s jocularity. AI’s affinity: RAG’s relays, agents’ arcs—workflows as warp and weft.
Durable’s dividend: devs’ deliverance—frontend’s flair fortifying backends, sans queues’ quandary.
React’s Reactive Roots
Charly canvassed React’s remit: events’ embrace, fetches’ flux, states’ serenity—templating’s triumph. Backend’s burden: ingress’ influx, persistence’s pang—orchestration’s odyssey.
Inngest’s Immutable Flows
Functions’ filaments: steps’ sequence, waits’ watch—webhooks’ whisper, shipment’s sojourn. TypeScript’s tether: throttling’s tie, AI’s arc—RAG’s relay, agents’ agency.
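Inngest's actual API is TypeScript, but the durability idea itself is language-neutral: persist each step's result so a retry resumes where it left off instead of restarting. A minimal Python sketch of that memoization pattern (the `StepMemo` class and the order steps are invented for illustration, not Inngest's API):

```python
class StepMemo:
    """Replays a workflow function, memoizing completed steps so a
    crash-and-retry resumes from the last finished step."""
    def __init__(self, store=None):
        self.store = store if store is not None else {}  # durable state, e.g. a DB row

    def run(self, step_id, fn):
        if step_id in self.store:          # step already ran: reuse its result
            return self.store[step_id]
        result = fn()                      # first execution: do the work
        self.store[step_id] = result       # persist before moving on
        return result

# Hypothetical order workflow: each step runs at most once per order.
def handle_order(steps):
    shipment = steps.run("ship", lambda: {"tracking": "ABC123"})
    return steps.run("email", lambda: f"sent receipt for {shipment['tracking']}")

steps = StepMemo()
first = handle_order(steps)
again = handle_order(steps)   # simulated retry: memoized steps do not re-execute
assert first == again
```

The key design point is that the workflow function is re-entered from the top on every retry; durability comes from the persisted step results, not from keeping a process alive.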
Links:
[SpringIO2025] Real-World AI Patterns with Spring AI and Vaadin by Marcus Hellberg / Thomas Vitale
Lecturer
Marcus Hellberg is the Vice President of AI Research at Vaadin, a company specializing in tools for Java developers to build web applications. As a Java Champion with nearly 20 years of experience in Java and web development, he focuses on integrating AI capabilities into Java ecosystems. Thomas Vitale is a software engineer at Systematic, a Danish software company, with expertise in cloud-native solutions, Java, and AI. He is the author of “Cloud Native Spring in Action” and an upcoming book on developer experience on Kubernetes, and serves as a CNCF Ambassador.
- Marcus Hellberg on LinkedIn
- Marcus Hellberg on GitHub
- Thomas Vitale on LinkedIn
- Thomas Vitale on GitHub
Abstract
This article examines practical patterns for incorporating artificial intelligence into Java applications using Spring AI and Vaadin, transitioning from experimental to production-ready implementations. It analyzes techniques for memory management, guardrails, multimodality, retrieval-augmented generation, tool calling, and agents, with implications for security, user experience, and system integration. Insights emphasize robust, observable AI workflows in on-premises or cloud environments.
Memory Management and Streaming in AI Interactions
Integrating large language models (LLMs) into applications requires addressing their stateless nature, where each interaction lacks inherent context from prior exchanges. Spring AI provides advisors—interceptor-like mechanisms—to augment prompts with conversation history, enabling short-term memory. For instance, a MessageChatMemoryAdvisor retains the last N messages, ensuring continuity without manual tracking.
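The advisor's core trick, prepending a bounded window of history to each prompt, can be sketched in a few lines of Python (a toy stand-in for `MessageChatMemoryAdvisor`, with invented method names):

```python
from collections import deque

class LastNMemory:
    """Toy chat-memory advisor: keeps the last N messages and
    prepends them to each new prompt."""
    def __init__(self, n=4):
        self.history = deque(maxlen=n)   # older messages fall off automatically

    def advise(self, user_message):
        # The model sees recent context plus the new message.
        return list(self.history) + [("user", user_message)]

    def record(self, user_message, assistant_reply):
        self.history.append(("user", user_message))
        self.history.append(("assistant", assistant_reply))

memory = LastNMemory(n=4)
memory.record("Hi, I'm Ada.", "Hello Ada!")
prompt = memory.advise("What is my name?")
assert ("user", "Hi, I'm Ada.") in prompt   # earlier exchange travels with the prompt
```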
This pattern enhances user interactions in chat-based interfaces, built here with Vaadin’s component model for server-side Java UIs. A vertical layout hosts message lists and inputs, injecting a ChatClientBuilder to construct clients with advisors. Basic interactions involve prompting the model and appending responses, but for realism, streaming via reactive fluxes improves responsiveness, subscribing to token streams and updating UI progressively.
Code illustration:
ChatClient chatClient = builder.build();
messageInput.addSubmitListener(submitEvent -> {
    String message = submitEvent.getMessage();
    messageList.addMessage("You", message);
    // Stream tokens into a separate assistant message, not the user's item
    MessageItem assistantItem = messageList.addMessage("Assistant", "");
    UI ui = UI.getCurrent();
    chatClient.stream(new Prompt(message))
        .subscribe(response -> ui.access(() ->
            assistantItem.append(response.getResult().getOutput().getContent())));
});
Streaming suits verbose responses, reducing perceived latency, while observability integrations (e.g., OpenTelemetry) trace interactions for debugging nondeterministic behaviors.
Guardrails for Security and Validation
AI workflows must mitigate risks like sensitive data leaks or invalid outputs. Input guardrails intercept prompts, using on-premises models to check for compliance with policies, blocking unauthorized queries (e.g., personal information). Output guardrails validate responses, reprompting for corrections if deserialization fails.
Advisors enable this: a default advisor with a local chat model filters inputs/outputs. For example, querying an address might be blocked if flagged, preventing cloud exposure. This ensures determinism in structured outputs, converting unstructured text to Java objects via JSON instructions.
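The shape of that guardrail pipeline, block flagged inputs before they leave the premises, re-prompt when the output fails deserialization, can be sketched as follows (a language-neutral sketch, not Spring AI's advisor API; the pattern list and helper names are invented):

```python
import json
import re

BLOCKED = [re.compile(r"\b\d{3}-\d{2}-\d{4}\b")]  # e.g. SSN-like personal data

def input_guardrail(prompt):
    """Reject prompts containing sensitive patterns before cloud exposure."""
    if any(p.search(prompt) for p in BLOCKED):
        raise ValueError("prompt blocked: possible personal data")
    return prompt

def output_guardrail(raw, parse, reprompt, retries=2):
    """Validate model output; re-prompt for a correction if parsing fails."""
    for _ in range(retries + 1):
        try:
            return parse(raw)                # e.g. JSON -> structured object
        except ValueError:
            raw = reprompt(raw)              # ask the model to fix its answer
    raise ValueError("model never produced valid output")

ok = output_guardrail('{"city": "Athens"}', json.loads, lambda r: r)
assert ok == {"city": "Athens"}
```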
Implications include privacy preservation in regulated sectors and integration with Spring Security for role-based tool access.
Multimodality and Retrieval-Augmented Generation
LLMs extend beyond text through multimodality, processing images, audio, or videos. Spring AI’s entity methods augment prompts for structured extraction, e.g., parsing attendee details from images into tables for programmatic use.
Retrieval-augmented generation (RAG) combats hallucinations by embedding external data as vectors in stores like PostgreSQL. A RetrievalAugmentationAdvisor retrieves relevant documents via similarity search, augmenting prompts. Customizations allow empty contexts for fallback to model knowledge.
Example:
VectorStore vectorStore = /* PostgreSQL (pgvector) store, typically injected */;
// Builder names follow current Spring AI releases; the talk's slide used a shorthand
RetrievalAugmentationAdvisor advisor = RetrievalAugmentationAdvisor.builder()
    .documentRetriever(VectorStoreDocumentRetriever.builder()
        .vectorStore(vectorStore)
        .similarityThreshold(0.5)   // controls retrieval scope
        .build())
    .queryAugmenter(ContextualQueryAugmenter.builder()
        .allowEmptyContext(true)    // fall back to model knowledge when nothing matches
        .build())
    .build();
This pattern grounds responses in proprietary data, with thresholds controlling retrieval scope.
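The similarity search at the heart of retrieval can itself be illustrated in a few lines: score each stored embedding against the query by cosine similarity and keep only hits above the threshold. The store contents and vectors below are invented toy data:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def retrieve(query_vec, store, threshold=0.8, top_k=2):
    """Return documents whose similarity to the query clears the threshold."""
    scored = sorted(((cosine(query_vec, vec), doc) for doc, vec in store.items()),
                    reverse=True)
    return [doc for score, doc in scored[:top_k] if score >= threshold]

store = {"return policy": [1.0, 0.1], "lasagna recipe": [0.0, 1.0]}
hits = retrieve([0.9, 0.2], store, threshold=0.8)
assert hits == ["return policy"]   # the off-topic document is filtered out
```

Raising the threshold narrows retrieval toward only strongly related documents, which is exactly the knob for trading recall against hallucination risk.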
Tool Calling, Agents, and Dynamic Integrations
Tool calling empowers LLMs as agents, invoking external functions for tasks like database queries. Annotations describe tools, passed to clients for dynamic selection. For products, a service might expose query/update methods:
// Spring AI tool declaration (the parameter annotation is @ToolParam in
// current releases; the repository call is a hypothetical example)
@Tool(description = "Fetch products from database")
public List<Product> getProducts(
        @ToolParam(description = "Category filter") String category) {
    return productRepository.findByCategory(category); // database query
}
Agents orchestrate tools, potentially via Model Context Protocol for external services. Demonstrations include theme generation from screenshots, editing CSS via file system tools, highlighting nondeterminism and the need for safeguards.
In conclusion, these patterns enable production AI, emphasizing modularity, security, and observability for robust Java applications.
Links:
[DevoxxBE2025] Live Coding The Hive: Building a Microservices-Ready Modular Monolith
Lecturer
Thomas Pierrain is Vice President of Engineering at Agicap, a financial management platform, where he applies domain-driven design to build scalable systems. Julien Topcu is Vice President of Technology at SHODO Group, a consultancy focused on socio-technical coaching and architecture, with expertise in helping teams implement domain-driven practices.
Abstract
This analysis investigates the Hive pattern, an architectural approach for creating modular monoliths that support easy evolution to microservices. It identifies key ideas like vertical slicing and port-adapter boundaries, set against the backdrop of microservices pitfalls. Highlighting a live-refactored time-travel system, it details methods for domain alignment, encapsulation, and simulated distributed communication. Consequences for system flexibility, debt management, and scalability are evaluated, providing insights into resilient designs for existing and new developments.
Emergence from Microservices Challenges
Over a decade, the shift to microservices has often resulted in distributed messes, worse than the monoliths they replaced due to added complexity in coordination and deployment. The modular monolith concept arises as a remedy, but risks tight coupling if not properly segmented. The Hive addresses this by separating design from deployment, following “construct once, deploy flexibly.”
In the live example, a time-machine’s control system—handling energy, navigation, and diagnostics—crashes due to fragility, landing in the 1980s. Diagnostics reveal a muddled structure with high resource use, mirroring legacy systems burdened by modeling debt—the buildup of imprecise domain models hindering change.
The pattern’s innovation lies in fractal composability: modules as hexagons can nest or extract as services. This enables scaling in (sub-modules) or out (microservices), adapting to needs like independent deployment for high-load components.
Essential Tenets of the Hive
Vertical slicing packs modules with all necessities—logic, storage, interfaces—for self-sufficiency, avoiding shared layers’ dependencies. In the demo, the energy module includes its database, isolating it from navigation.
Port-adapter encapsulation defines interaction points: inbound for incoming, outbound for outgoing. Adapters translate, eliminating direct links. The navigation’s energy request port uses an adapter to call the energy’s provision port, preventing tangles.
Inter-module talks mimic microservices sans networks, using in-process events. This readies for distribution: swapping adapters for remote calls extracts modules seamlessly. The example routes via a bus, allowing monolith operation with distributed readiness.
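The port-adapter swap that makes extraction seamless can be sketched in Python. The module names (energy, navigation) come from the demo; the method names are invented, and the remote adapter is left as a stub since its body is just an HTTP call:

```python
class EnergyPort:
    """Inbound port of the energy module: the only sanctioned way in."""
    def provision(self, units):
        raise NotImplementedError

class InProcessEnergyAdapter(EnergyPort):
    """Monolith mode: a plain method call hides behind the port."""
    def __init__(self, energy_module):
        self.energy = energy_module
    def provision(self, units):
        return self.energy.provide(units)

class RemoteEnergyAdapter(EnergyPort):
    """Extracted mode: same port, but the call crosses the network."""
    def __init__(self, base_url):
        self.base_url = base_url
    def provision(self, units):
        ...  # e.g. POST {base_url}/provision -- omitted in this sketch

class EnergyModule:
    def provide(self, units):
        return f"provisioned {units} units"

class NavigationModule:
    """Navigation depends only on the port, never on what sits behind it."""
    def __init__(self, energy_port):
        self.energy = energy_port
    def plot_jump(self):
        return self.energy.provision(42)

nav = NavigationModule(InProcessEnergyAdapter(EnergyModule()))
assert nav.plot_jump() == "provisioned 42 units"
```

Extracting energy as a microservice then means constructing `NavigationModule` with a `RemoteEnergyAdapter` instead; navigation's code does not change.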
These tenets create a supple framework, resilient to evolution. The fractal aspect allows infinite composition, as shown by nesting diagnostics within navigation.
Refactoring Methodology and Practical Steps
The session starts with a monolithic system showing instability: overused resources cause anomalies. AI schemas expose entanglements, guiding domain identification—energy, time circuits, AI.
Modules reorganize: each hexagon sliced vertically with dedicated storage. Code moves via IDE tools, databases split to prevent sharing. Energy gains PostgreSQL, queried through adapters.
Communication restructures: ports define contracts, adapters implement. Navigation’s outbound energy port adapts to energy’s inbound, using events for asynchrony.
Extraction demonstrates: energy becomes a microservice by changing adapters to network-based, deploying separately without core changes. Tests modularize similarly, using mocks for isolation.
This step-by-step approach handles brownfields incrementally, using tools for safe restructuring.
Resilience, Scalability, and Debt Mitigation
Hive’s boundaries enhance resilience: changes localize, as energy tweaks affect only its hexagon. This curbs debt, allowing independent domain refinement.
Scalability is fractal: inward nesting subdivides, outward extraction distributes. Networkless talks ease transitions, minimizing rewrites.
Versus monoliths’ coupling or microservices’ prematurity, Hive balances, domain-focused for “right-sized” architectures. Challenges: upfront refactoring, boundary discipline.
Development Ramifications and Adoption
Hive promotes adaptive designs for changing businesses. Starting modular prevents debt in new projects; modernizes legacies via paths shown.
Wider effects: better sustainment, lower costs through contained modules. As hype fades, Hive provides hybrids, emphasizing appropriate sizing.
Future: broader use in frameworks, tools for pattern enforcement.
In overview, Hive exemplifies composable resilience, merging monolith unity with microservices adaptability.
Links:
- Lecture video: https://www.youtube.com/watch?v=VKcRNtj0tzc
- Thomas Pierrain on LinkedIn: https://fr.linkedin.com/in/thomas-p-0664769
- Thomas Pierrain on Twitter/X: https://twitter.com/tpierrain
- Julien Topcu on LinkedIn: https://fr.linkedin.com/in/julien-top%C3%A7u
- Agicap website: https://agicap.com/
- SHODO Group website: https://shodo.io/
From JMS and Message Queues to Kafka Streams: Why Kafka Had to Be Invented
For decades, enterprise systems relied on message queues and JMS-based brokers to decouple applications and ensure reliable communication. Technologies such as IBM MQ, ActiveMQ, and later RabbitMQ solved an important problem: how to move messages safely from one system to another without tight coupling.
However, as systems grew larger, more distributed, and more data-driven, the limitations of this model became increasingly apparent. Kafka — and later Kafka Streams — did not emerge because JMS and MQ were poorly designed. They emerged because they were designed for a different era and a different class of problems.
What JMS and MQ Were Designed to Do
Traditional message brokers focus on delivery. A producer sends a message, the broker stores it temporarily, and a consumer receives it. Once the message is acknowledged, it is typically removed. The broker’s primary responsibility is to guarantee that messages are delivered reliably and, in some cases, transactionally.
This model works very well for command-style interactions such as order submission, workflow orchestration, and request-driven integration between systems. Messages are transient by design, consumers are expected to be online, and the system’s success is measured by how quickly and reliably messages move through it.
For many years, this was sufficient.
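The delivery model described above fits in a few lines of Python: a message lives in the broker only until it is acknowledged. The `AckQueue` class is a toy illustration, not any broker's API:

```python
from collections import deque

class AckQueue:
    """Traditional broker: store a message temporarily, delete on ack."""
    def __init__(self):
        self.pending = deque()
    def send(self, msg):
        self.pending.append(msg)
    def receive(self):
        return self.pending[0] if self.pending else None
    def ack(self):
        self.pending.popleft()   # acknowledged => gone for good

q = AckQueue()
q.send("order-1")
msg = q.receive()
q.ack()
# A consumer attached later sees nothing: the broker kept no history.
assert q.receive() is None
```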
The Problems That Started to Appear
As companies began operating at internet scale, the assumptions underlying JMS and MQ started to break down. Data volumes increased dramatically, and systems needed to handle not thousands, but millions of events per second. Message brokers that tracked delivery state per consumer became bottlenecks, both technically and operationally.
More importantly, the nature of the data changed. Events were no longer just instructions to be executed and discarded. They became facts: user actions, transactions, logs, metrics, and behavioral signals that needed to be stored, analyzed, and revisited.
With JMS and MQ, once a message was consumed, it was gone. Reprocessing required complex duplication strategies or external storage. Adding a new consumer meant replaying data manually, if it was even possible. The broker was optimized for delivery, not for history.
At the same time, architectures became more decoupled. Multiple teams wanted to consume the same data independently, at their own pace, and for different purposes. In a traditional queue-based system, this required copying messages or creating parallel queues, increasing cost and complexity.
These pressures revealed a fundamental mismatch between what message queues were built for and what modern systems required.
The Conceptual Shift That Led to Kafka
Kafka was created to answer a different question. Instead of asking how to deliver messages efficiently, its designers asked how to store events reliably at scale and allow many consumers to read them independently.
The key idea was deceptively simple: treat data as an append-only log. Producers write events to a log, and consumers read from that log at their own pace. Events are not deleted when consumed. They are retained for a configurable period, or even indefinitely.
In this model, the broker no longer tracks who consumed what. Each consumer keeps track of its own position. This small change eliminates a major scalability bottleneck and makes replay a natural operation rather than an exceptional one.
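The contrast with the queue model is easiest to see in code. In this toy in-memory sketch (consumer names invented), the log retains everything and each consumer owns nothing but an integer offset:

```python
class Log:
    """Kafka-style append-only log: events are retained, never deleted on read."""
    def __init__(self):
        self.events = []
    def append(self, event):
        self.events.append(event)
    def read(self, offset):
        return self.events[offset:]   # read is side-effect free

log = Log()
for e in ["signup", "click", "purchase"]:
    log.append(e)

# Each consumer tracks its own position; the broker tracks nobody.
offsets = {"billing": 0, "analytics": 0}
offsets["billing"] = len(log.events)                 # billing is fully caught up
assert log.read(offsets["analytics"]) == ["signup", "click", "purchase"]
# A brand-new consumer simply starts at offset 0 and replays all of history.
assert log.read(0) == ["signup", "click", "purchase"]
```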
Kafka’s architecture reflects this shift. It is disk-first rather than memory-first, optimized for sequential writes and reads. It scales horizontally through partitioning. It treats durability and throughput as complementary goals rather than trade-offs.
Kafka was not created to replace message queues; it was created to solve problems message queues were never meant to solve.
From Transport to Platform: Why Kafka Streams Exists
Kafka alone provides storage and distribution of events, but it does not process them. Early Kafka users still needed external systems to transform, aggregate, and analyze data flowing through Kafka.
Kafka Streams was created to close this gap.
Instead of introducing another centralized processing cluster, Kafka Streams embeds stream processing directly into applications. This is a deliberate contrast with both JMS consumers and large external processing frameworks.
In a JMS-based system, consumers typically process messages one at a time, often statelessly, and rely on external databases for aggregation and state. Rebuilding state after a failure is complex and error-prone.
Kafka Streams, by contrast, assumes that stateful processing is normal. It provides abstractions for event streams and for state that evolves over time. It stores state locally for performance and backs it up to Kafka so it can be restored automatically. Processing logic, state, and data history are all aligned around the same event log.
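The local-state-plus-changelog idea can be sketched with a word-count-style aggregator. This is a conceptual stand-in, not the Kafka Streams API; in the real system the changelog is a compacted Kafka topic rather than a Python list:

```python
class CountAggregator:
    """Stateful processor: local state store, backed up to a changelog
    so state can be rebuilt automatically after a failure."""
    def __init__(self):
        self.counts = {}        # local state store (RocksDB in Kafka Streams)
        self.changelog = []     # stand-in for a compacted changelog topic

    def process(self, key):
        self.counts[key] = self.counts.get(key, 0) + 1
        self.changelog.append((key, self.counts[key]))  # back up each update

    @classmethod
    def restore(cls, changelog):
        agg = cls()
        for key, value in changelog:    # replay the changelog to rebuild state
            agg.counts[key] = value
        agg.changelog = list(changelog)
        return agg

agg = CountAggregator()
for user in ["ada", "bob", "ada"]:
    agg.process(user)

recovered = CountAggregator.restore(agg.changelog)   # simulated failover
assert recovered.counts == {"ada": 2, "bob": 1}
```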
This approach turns Kafka from a passive transport layer into an active data platform.
What Kafka and Kafka Streams Do Differently
The fundamental difference between JMS/MQ and Kafka is not syntax or APIs, but philosophy.
Message queues focus on messages as transient instructions. Kafka focuses on events as durable facts. Message queues optimize for delivery guarantees. Kafka optimizes for scalability, retention, and replay. Message queues treat consumers as part of the broker’s responsibility. Kafka treats consumers as independent actors.
Kafka Streams builds on this by assuming that computation belongs close to the data. Instead of shipping data to a processing engine, it ships processing logic to where the data already is. This inversion dramatically simplifies architectures while increasing reliability.
Why Someone “Woke Up and Created Kafka”
Kafka was born out of necessity. At companies like LinkedIn, existing messaging systems could not handle the volume, variety, and longevity of data they were producing. They needed a system that could ingest everything, store it reliably, and make it available to many consumers without coordination.
Kafka Streams followed naturally. Once data became durable and replayable, processing it in a stateless, fire-and-forget manner was no longer sufficient. Systems needed to compute continuously, maintain state, and recover automatically — all while remaining simple to operate.
Kafka and Kafka Streams are the result of rethinking messaging from first principles, in response to scale, data-driven architectures, and the need to treat events as first-class citizens.
Conclusion
JMS and traditional message queues remain excellent tools for command-based integration and transactional workflows. Kafka was not designed to replace them, but to address a different category of problems.
Kafka introduced the idea of a distributed, durable event log as the backbone of modern systems. Kafka Streams extended that idea by embedding real-time processing directly into applications.
[DotJs2024] Encrypt All Transports
In the shadowed corridors of digital discourse, where data streams pulse like vital arteries, lurks the imperative to cloak communications in unbreakable veils. Eleanor McHugh, a freelance reality consultant and anonymity architect with three decades spanning avionics to blockchain, issued this mandate at dotJS 2024. Ellie, co-founder of Innovative Identity Solutions, decried surveillance’s specter—from Lenovo’s BIOS interlopers to AI’s voracious scans—positing developers as privacy’s vanguard. Her whirlwind primer: wield WebSockets, RSA, AES in Node and browser crucibles, forging transports impervious to prying eyes.
Ellie’s ire ignited with 2015’s scandals: adware proxies hijacking HTTPS, unmasking “secure” flows for monetization. Today’s AI fervor—Facebook, Microsoft, Apple coveting content—echoes, demanding defiance. Privacy’s etymology—privity’s pact, NDA’s shroud—binds us; yet CTOs crave visibility, debugging APIs dissecting deeds at dawn’s witching hour. Ellie indicted: we, the coders, perpetuate panopticons, outsourcing souls to Albanian bunkers or quakesafe vaults. Reclamation resides in crypto’s toolkit: symmetric ciphers scrambling payloads, asymmetric duos authenticating origins, signatures vouching veracity, zero-knowledge veiling proofs.
Ellie’s arsenal gleams in GitHub’s forge: WebSockets for bidirectional brooks, RSA’s key pairs partitioning public probes from private vaults, AES randomizing streams into gibberish. Node’s crypto module, browser’s SubtleCrypto—both tame these titans. A vignette: socket spawns, keys exchanged via Diffie-Hellman ephemera, payloads AES-encrypted, RSA-signed—interception yields noise, replay thwarted by nonces. Zero-knowledge crowns: prove solvency sans balances, age sans birthdates—zk-SNARKs succinct, verifiable.
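Two of the ingredients above, signatures vouching veracity and nonces thwarting replay, can be shown with Python's standard library alone. This is a deliberate stand-in: the talk's examples use RSA and AES via Node's crypto module and the browser's SubtleCrypto, whereas this sketch substitutes an HMAC over a shared key (as if already derived via Diffie-Hellman), since the Python stdlib has no AES or RSA:

```python
import hashlib
import hmac
import json
import secrets

shared_key = secrets.token_bytes(32)   # in practice derived via Diffie-Hellman
seen_nonces = set()

def seal(payload):
    """Attach a fresh nonce and a MAC so tampering and replay are detectable."""
    msg = dict(payload, nonce=secrets.token_hex(16))
    body = json.dumps(msg, sort_keys=True).encode()
    mac = hmac.new(shared_key, body, hashlib.sha256).hexdigest()
    return {"body": body, "mac": mac}

def open_sealed(envelope):
    mac = hmac.new(shared_key, envelope["body"], hashlib.sha256).hexdigest()
    if not hmac.compare_digest(mac, envelope["mac"]):
        raise ValueError("bad signature")            # payload was tampered with
    msg = json.loads(envelope["body"])
    if msg["nonce"] in seen_nonces:
        raise ValueError("replay detected")          # same envelope seen before
    seen_nonces.add(msg["nonce"])
    return msg

env = seal({"status": "solvent"})
assert open_sealed(env)["status"] == "solvent"
try:
    open_sealed(env)        # second delivery of the same envelope is rejected
    assert False
except ValueError:
    pass
```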
Ellie’s entreaty: tinker this trove, erect enclosures where client secrets elude server spies. As liveness biometrics and encrypted enclaves evolve, her free chapter beckons—crypto sans cost, privacy paramount. In software’s void, we architect anonymity; shirk not this solemnity.
Crypto Primitives in Play
Ellie enumerated: AES symmetrizes speed, RSA asymmetrizes trust—public encrypts, private decrypts. Signatures seal integrity; zk proofs affirm attributes incognito. WebSockets underpin, channels churning ciphered chatter—Node’s forge, browser’s bastion.
Defending Against Digital Dragnets
From BIOS betrayals to AI appetites, Ellie’s exposé exhorted: encrypt endpoints, anonymize identities. Her slides loop—3:30 eternities—urging uptake: GitHub’s gallery, SlideShare’s scrolls. Consultations await; privacy’s perimeter, we patrol.
Links:
[GoogleIO2024] What’s New in Flutter: Cross-Platform Innovations and Performance Boosts
Flutter’s pillars—portability, performance, and openness—drive its evolution. Kevin Moore and John Ryan highlighted five key updates, from AI integrations to web assembly support, empowering developers to create seamless experiences across devices.
Portability Across Platforms with Gemini API
Kevin stressed Flutter’s code-sharing efficiency, achieving 97% reuse in Google’s apps. The Gemini API integration via Google AI Dart SDK enables generative features, like image-to-text in apps such as Bricket, which identifies Lego bricks for model suggestions.
Global Gamers Challenge with Global Citizen showcased Flutter’s gaming potential, with winners like “Save the Lot” addressing environmental issues. Resources for game development, including Casual Games Toolkit, facilitate cross-platform builds.
Performance Enhancements with Impeller and Macros
John introduced Impeller on Android, Flutter’s rendering engine, reducing jank through precompiled shaders. Benchmarks show up to 50% frame time improvements, enhancing experiences on mid-range devices.
Dart macros, in experimental preview, automate boilerplate code for tasks like JSON serialization, boosting developer productivity without runtime overhead.
Web Optimization Through Web Assembly
Web Assembly compilation in Flutter 3.22 doubles performance, with up to 4x gains in demanding frames. This consistency minimizes jank, enabling richer web apps.
Collaborations with browser teams ensure broad compatibility, aligning with Flutter’s open ethos.
These 2024 updates solidify Flutter’s role in efficient, high-performance app development.
Links:
[DevoxxGR2025] Understanding Flow in Software Development
James Lewis, a ThoughtWorks consultant, delivered a 41-minute talk at Devoxx Greece 2025, exploring how work flows through software development, drawing on information theory and complexity science.
The Nature of Work as Information
Lewis framed software development as transforming “stuff” into more valuable outputs, akin to enterprise workflows before computers. Work, invisible as information, flows through value streams—from ideas to production code. However, invisibility causes issues like unnoticed backlogs or undeployed code, acting as costly inventory. Lewis cited Don Reinertsen’s Principles of Product Development Flow, emphasizing that untested or undeployed code represents lost revenue, unlike visible factory inventory, which signals inefficiencies immediately.
Visualizing Value Streams
Using a value stream map, Lewis illustrated a typical development cycle: three days for coding, ten days waiting for testing, and 30 days for deployment, totaling 47 days of lead time, with 42 days as idle inventory. Wait times stem from coordination (teams waiting on others), scheduling (e.g., architecture reviews), and queues (backlogs). Shared test environments exacerbate delays, costing more than provisioning new ones. Lewis advocated mapping workflows to expose economic losses, making a case for faster delivery to stakeholders.
Reducing Batch Sizes for Flow
Lewis emphasized reducing batch sizes to improve flow, a principle rooted in queuing theory. Smaller batches, like deploying twice as often, halve wait times, enabling faster revenue generation. Using agent-based models, he simulated agile (single-piece flow) versus waterfall (100% batch) teams, showing agile teams deliver value faster. Limiting work-in-progress and controlling queue sizes prevent congestion collapse, ensuring smoother, more predictable workflows.
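The batch-size effect can be checked with a small deterministic simulation (my own illustration under simplified assumptions: one item of work arrives per day, and a batch ships the moment it is full):

```python
def average_wait(n_items, batch_size):
    """Mean days an item sits waiting before its batch ships."""
    waits = []
    for i in range(n_items):
        # The batch ships when its last member arrives.
        batch_end = ((i // batch_size) + 1) * batch_size - 1
        waits.append(batch_end - i)
    return sum(waits) / n_items

big = average_wait(100, batch_size=10)    # deploy every 10 days
small = average_wait(100, batch_size=5)   # deploy twice as often
assert big == 4.5 and small == 2.0
assert small < big   # halving the batch roughly halves the wait
```

The mean wait works out to (batch_size - 1) / 2 days, so doubling deployment frequency cuts idle inventory roughly in half, which is the queuing-theory point Lewis's agent-based models make at larger scale.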
Links:
[AWSReInforce2025] Secure and scalable customer IAM with Cognito: Wiz’s success story (IAM221)
Lecturer
Rahul Sharma serves as Principal Product Manager for Amazon Cognito at AWS, driving the roadmap for customer identity and access management (CIAM) at global scale. Alex Vorte functions as Field CTO for Login and RBAC at Wiz, leading identity transformation initiatives that support FedRAMP authorization and enterprise compliance.
Abstract
The case study examines Wiz’s migration of 100,000+ identities to Amazon Cognito, achieving FedRAMP High authorization, 99.9% availability, and 70% cost reduction. It establishes best practices for CIAM modernization—migration strategies, machine identity integration, and SLA alignment—that balance security, scalability, and user experience.
Migration Strategy and Execution Framework
Wiz executed a phased migration across three cohorts:
- Pilot (0-10% users): Parallel authentication flows
- Canary (10-50%): Gradual traffic shift with feature flags
- Cutover (50-100%): Automated bulk migration
# Bulk migration pseudocode
for user in legacy_db.batch(1000):
    cognito.admin_create_user(
        Username=user.email,
        TemporaryPassword=generate_secure_temp(),
        UserAttributes=user.profile,
    )
    trigger_password_reset_email(user)
The platform processed 100,000 identities in under one year, with zero downtime during cutover.
Security and Compliance Architecture
FedRAMP High requirements drove design decisions:
- Encryption: KMS customer-managed keys for data at rest
- Network: VPC-private user pools with AWS PrivateLink
- Audit: CloudTrail integration for all admin actions
- MFA: Mandatory WebAuthn with hardware key support
Cognito’s built-in compliance (SOC, PCI, ISO) eliminated third-party audit burden.
Scalability and Availability Engineering
Architecture supports 10,000 RPS authentication:
Global Accelerator → CloudFront → Cognito (multi-AZ)
↓
Lambda@Edge for custom auth
SLA achievement:
- RTO: < 4 hours via cross-region replication
- RPO: < 1 minute with continuous backups
- Availability: 99.9% through health checks and auto-scaling
Machine Identity Integration
Beyond human users, Cognito manages:
- Service accounts: OAuth2 client credentials flow
- CI/CD pipelines: Federated tokens via OIDC
- IoT devices: Custom authenticator with X.509 certificates
// CI/CD token acquisition: the client-credentials grant is issued by the
// Cognito OAuth2 token endpoint (the InitiateAuth API does not offer this flow)
POST https://<your-domain>.auth.<region>.amazoncognito.com/oauth2/token
Content-Type: application/x-www-form-urlencoded
Authorization: Basic base64(client_id:client_secret)

grant_type=client_credentials
This unified approach reduced identity sprawl by 60%.
Cost Optimization Outcomes
Migration yielded 70% reduction through:
- Elimination of legacy IdP licensing
- Pay-per-monthly-active-user pricing
- Removal of custom auth infrastructure
- Automated user lifecycle management
Best Practices for CIAM Modernization
- Choose migration strategy by risk tolerance: parallel runs for zero-downtime
- Leverage Cognito migration APIs: bulk import with password hash preservation
- Implement progressive enhancement: start with email/password, add MFA/social later
- Align with product roadmap: design partner relationship for feature priority
Conclusion: CIAM as Strategic Enabler
Wiz’s transformation demonstrates that modern CIAM need not compromise between security, scale, and cost. Amazon Cognito provides the managed substrate that absorbs authentication complexity, enabling security teams to focus on policy and governance rather than infrastructure. The migration framework—phased execution, machine identity integration, and SLA engineering—offers a repeatable pattern for enterprises undergoing digital transformation.
Links:
[DevoxxUK2025] The Art of Structuring Real-Time Data Streams into Actionable Insights
At DevoxxUK2025, Olena Kutsenko, a data streaming expert from Confluent, delivered a compelling session on transforming chaotic real-time data streams into structured, actionable insights using Apache Kafka, Apache Flink, and Apache Iceberg. Through practical demos involving IoT devices and social media data, Olena demonstrated how to build scalable, low-latency data pipelines that ensure high data quality and flexibility for downstream analytics and AI applications. Her talk highlighted the power of combining these open-source technologies to handle messy, high-volume data streams, making them accessible for querying, visualization, and decision-making.
Apache Kafka: The Scalable Message Bus
Olena introduced Apache Kafka as the foundation for handling high-speed data streams, acting as a scalable message bus that decouples data producers (e.g., IoT devices) from consumers. Kafka’s design, with topics and partitions likened to multi-lane roads, ensures high throughput and low latency. In her IoT demo, Olena used a JavaScript producer to ingest sensor data (temperature, battery levels) into a Kafka topic, handling messy data with duplicates or missing sensor IDs. Kafka’s ability to replicate data and retain it for a defined period ensures reliability, allowing reprocessing if needed, making it ideal for industries like banking and retail, such as REWE’s use of Kafka for processing sold items.
Apache Flink: Real-Time Data Processing
Apache Flink was showcased as the engine for cleaning and structuring Kafka streams in real time. Olena explained Flink’s ability to handle both unbounded (real-time) and bounded (historical) data, using SQL for transformations. In the IoT demo, she applied a row_number function to deduplicate records by sensor ID and timestamp, filtered out invalid data (e.g., null sensor IDs), and reformatted timestamps to include time zones. A 5-second watermark ignored late-arriving data, and a tumbling window aggregated data into one-minute buckets, enriched with averages and standard deviations, ensuring clean, structured data ready for analysis.
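The talk's transformations were Flink SQL; the same dedup-filter-window logic can be traced in plain Python on a handful of toy readings (the sensor data below is invented for illustration):

```python
from collections import defaultdict

readings = [  # (sensor_id, epoch_seconds, temperature): messy input with a
              # duplicate record and a null sensor id
    ("s1", 0, 20.0), ("s1", 0, 20.0), ("s2", 30, 22.0),
    (None, 40, 99.0), ("s1", 70, 21.0),
]

# Deduplicate per (sensor, timestamp) and drop invalid rows: what the
# Flink SQL did with ROW_NUMBER() and a WHERE filter.
seen, clean = set(), []
for sensor, ts, temp in readings:
    if sensor is None or (sensor, ts) in seen:
        continue
    seen.add((sensor, ts))
    clean.append((sensor, ts, temp))

# One-minute tumbling window: average temperature per 60-second bucket.
windows = defaultdict(list)
for sensor, ts, temp in clean:
    windows[ts // 60].append(temp)
averages = {w: sum(v) / len(v) for w, v in windows.items()}
assert averages == {0: 21.0, 1: 21.0}
```

What this sketch cannot show is Flink's watermarking, which bounds how long a window waits for late events before it closes; in the demo a 5-second watermark made that trade-off explicit.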
Apache Iceberg: Structured Storage for Analytics
Olena introduced Apache Iceberg as an open table format that brings data warehouse-like structure to data lakes. Developed at Netflix to address Apache Hive’s limitations, Iceberg ensures atomic transactions and schema evolution without rewriting data. Its metadata layer, including manifest files and snapshots, supports time travel and efficient querying. In the demo, Flink’s processed data was written to Iceberg-compatible Kafka topics using Confluent’s Kora engine, eliminating extra migrations. Iceberg’s structure enabled fast queries and versioning, critical for analytics and compliance in regulated environments.
Querying and Visualization with Trino and Superset
To make data actionable, Olena used Trino, a distributed query engine, to run fast queries on Iceberg tables, and Apache Superset for visualization. In the IoT demo, Superset visualized temperature and humidity distributions, highlighting outliers. In a playful social media demo using Bluesky data, Olena enriched posts with sentiment analysis (positive, negative, neutral) and category classification using a GPT-3.5 Turbo model invoked from Flink. Superset dashboards displayed author activity and sentiment distributions, demonstrating how structured data enables intuitive insights for non-technical users.
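Trino speaks ANSI SQL over Iceberg tables; as a self-contained stand-in, the sketch below runs a comparable outlier query against an in-memory SQLite table. The table and column names are illustrative assumptions, not taken from the demo.

```python
import sqlite3

# In-memory SQLite stands in for Trino querying an Iceberg table of
# windowed sensor aggregates.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE readings (sensor_id TEXT, avg_temp REAL, stddev REAL)"
)
conn.executemany(
    "INSERT INTO readings VALUES (?, ?, ?)",
    [("a", 21.0, 0.4), ("b", 48.0, 5.2), ("c", 20.5, 0.3)],
)

# Flag sensors whose aggregates look like outliers: the kind of query a
# Superset chart would issue through Trino.
outliers = conn.execute(
    "SELECT sensor_id FROM readings WHERE avg_temp > 40 OR stddev > 3"
).fetchall()
```

Against a real deployment, the same SELECT would be sent through a Trino client to the Iceberg catalog, with Superset rendering the result as a chart; only the transport differs, not the SQL.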
Ensuring Data Integrity and Scalability
Addressing audience questions, Olena explained Flink’s exactly-once processing guarantee, which relies on periodic state snapshots (checkpoints) for recovery, while watermarks keep event-time results consistent even during failures. Kafka’s retention policies allow reprocessing, critical for regulatory compliance, though she noted custom solutions are often needed for audit evidence in financial sectors. Flink’s parallel processing scales effectively with Kafka’s partitioned topics, handling high-volume data without bottlenecks, making the pipeline robust for dynamic workloads like IoT or fraud detection in banking.
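The 5-second watermark behaviour mentioned earlier can be sketched in a few lines: the watermark trails the largest event time seen by the allowed lateness, and events that have already fallen behind it are discarded. The event shape and constant name are illustrative assumptions.

```python
ALLOWED_LATENESS = 5  # seconds, matching the demo's 5-second watermark

def filter_late(events):
    """Yield events in arrival order, dropping any whose event time has
    fallen behind the watermark (max event time seen minus lateness)."""
    max_event_time = float("-inf")
    for ev in events:
        max_event_time = max(max_event_time, ev["ts"])
        watermark = max_event_time - ALLOWED_LATENESS
        if ev["ts"] >= watermark:
            yield ev
        # else: too late; Flink could route such events to a side output,
        # but in the demo they were simply ignored
```

Here an event with timestamp 96 arriving after one stamped 103 is dropped, while one stamped 99 (within the 5-second allowance) still makes it through.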
Links:
[OxidizeConf2024] Building Cross-Platform GUIs with Slint – A Practical Introduction
Introducing Slint’s Versatility
Creating intuitive, cross-platform graphical user interfaces (GUIs) is a critical challenge in modern software development. At OxidizeConf2024, Olivier Goffart, co-founder of Slint, introduced this Rust-based GUI framework designed for desktop, embedded, and bare-metal MCU applications. With a background in Qt and KDE, Olivier demonstrated Slint’s capabilities through a live coding session, showcasing its ability to craft native applications with minimal platform-specific adjustments.
Slint combines a declarative markup language with Rust’s imperative logic, offering a balance of expressiveness and performance. Olivier highlighted its support for desktop, mobile, and web platforms via WebAssembly, though the web is secondary to native targets. His demo illustrated the creation of a simple button with dynamic styling, leveraging Slint’s markup to define layouts and Rust for logic, making it accessible for developers accustomed to imperative programming.
Live Coding a Responsive UI
Olivier’s live coding session was a highlight, demonstrating Slint’s ease of use. He built a button with a gray background, padding, and centered alignment, using Slint’s markup to define the UI. By adding a touch area and binding it to a click event, he enabled dynamic color changes—red when pressed, gray otherwise—with a 300ms animation for smooth transitions. Border radius and width further enhanced the button’s aesthetics, showcasing Slint’s flexibility in meeting designer specifications.
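The button from the live demo can be approximated in Slint’s declarative markup. This is a hedged reconstruction from the talk’s description (gray background, centered text, a touch area driving a pressed state, a 300ms color animation, border radius and width); exact property values and the component name are assumptions.

```slint
// Hypothetical reconstruction of the demo button, not Olivier's code.
export component DemoButton inherits Rectangle {
    background: ta.pressed ? red : gray;   // red when pressed, gray otherwise
    animate background { duration: 300ms; } // smooth color transition
    border-radius: 6px;
    border-width: 1px;

    ta := TouchArea { }                     // reacts to clicks/presses

    Text {
        text: "Click me";
        horizontal-alignment: center;
        vertical-alignment: center;
    }
}
```

The conditional binding on background is what makes the UI reactive without any imperative event handler: when TouchArea’s pressed property changes, Slint re-evaluates the binding and animates the transition.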
The demo underscored Slint’s portability. Olivier noted that the same code, with minor adaptations, can run on bare-metal MCUs using tools like probe-rs. This portability, enabled by Rust’s ecosystem, allows developers to target diverse platforms without extensive rewrites. Slint’s integration with cargo ensures seamless compilation, making it an efficient choice for embedded and desktop applications alike.
Streamlining Development with Slint
Slint’s design prioritizes developer productivity and application performance. Olivier emphasized its lightweight nature, suitable for resource-constrained environments like MCUs. The framework’s ability to handle complex layouts with minimal code reduces development time, while Rust’s memory- and type-safety guarantees prevent whole classes of bugs common in UI code. For embedded systems, Slint’s compatibility with Rust ecosystem tools like cargo and probe-rs simplifies deployment, as demonstrated by Olivier’s assurance that the demo code could run on an MCU with minor tweaks.
By open-sourcing Slint, Olivier and his team encourage community contributions, fostering a growing ecosystem. His invitation to visit the demo booth reflects Slint’s collaborative spirit, aiming to refine the framework through developer feedback. Slint’s practical approach to cross-platform GUI development positions it as a powerful tool for Rust developers, streamlining the creation of responsive, reliable applications.