
Archive for the ‘en-US’ Category

[AWSReInforce2025] How AWS designs the cloud to be the most secure for your business (SEC201)

Lecturer

The presentation is delivered by AWS security engineering leaders who architect the foundational controls that underpin the global cloud infrastructure. Their expertise encompasses hardware security modules, hypervisor isolation, formal verification, and organizational separation of duties across planetary-scale systems.

Abstract

The exposition delineates AWS’s security design philosophy, demonstrating how deliberate architectural isolation, formal verification, and cultural reinforcement create a substrate that absorbs undifferentiated security burden. Through examination of Nitro System enclaves, independent control planes, and hardware-rooted attestation, it establishes that security constitutes the primary reliability pillar, enabling customers to prioritize application innovation over infrastructure protection.

Security as Cultural Imperative and Design Principle

Security permeates AWS culture as the paramount priority, manifesting in organizational structure and technical architecture. Every engineering decision undergoes security review; features ship only when security criteria are satisfied. This cultural commitment extends to compensation—security objectives weigh equally with availability and performance in promotion criteria.

The design principle of least privilege applies ubiquitously: services operate with minimal permissions, even internally. When compromise occurs, blast radius is constrained by default. This philosophy contrasts with traditional enterprises where security is bolted on; at AWS, it is the foundation upon which all else is built.

Hardware-Enforced Isolation via Nitro System

The Nitro System exemplifies security through custom silicon. Traditional servers commingle customer workloads with management firmware; Nitro segregates these domains into dedicated cards—compute, storage, networking—each with independent firmware update channels.

Customer VM → Nitro Hypervisor → Nitro Security Module → Physical CPU
          ↘ Independent Control Plane → Hardware Attestation

The Nitro Security Module (NSM) maintains cryptographic attestation of the entire software stack. Before a host accepts customer instances, NSM verifies firmware integrity against immutable measurements burned into one-time-programmable fuses. Any deviation prevents boot, eliminating persistent rootkits at the hardware layer.
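
The measured-boot check described above can be sketched as a toy model. This is purely illustrative (the class, the firmware strings, and the single-hash scheme are invented for this example; the real NSM measures an entire stack of firmware components against fused values):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.HexFormat;

// Toy model of measured boot: the live firmware image must hash to the
// immutable measurement "burned in" at manufacture, or boot is refused.
public class MeasuredBoot {
    // Stand-in for a measurement stored in one-time-programmable fuses.
    static final String FUSED_SHA256 = sha256("firmware-v1.0");

    static String sha256(String data) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            return HexFormat.of().formatHex(md.digest(data.getBytes(StandardCharsets.UTF_8)));
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    // Any deviation from the fused measurement prevents boot.
    static boolean attest(String liveFirmware) {
        return sha256(liveFirmware).equals(FUSED_SHA256);
    }

    public static void main(String[] args) {
        System.out.println("pristine firmware boots: " + attest("firmware-v1.0"));
        System.out.println("tampered firmware boots: " + attest("firmware-v1.0+rootkit"));
    }
}
```

The one-way hash is the point: a persistent rootkit cannot fake the measurement, because any change to the firmware bytes changes the digest.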

Independent Control and Data Plane Separation

Control plane operations—API calls, console interactions—execute in isolated cells that never touch customer data. A misconfigured S3 bucket policy might grant public access from the data plane perspective, but the control plane maintains an independent audit stream that detects the anomaly within minutes. This separation ensures configuration drift cannot evade detection.

The demonstration illustrates a public bucket created intentionally for testing. Within 180 seconds, Amazon Macie identifies the exposure, GuardDuty generates a finding, and Security Hub triggers an automated remediation workflow via Lambda. The customer perceives no interruption, yet the risk is mitigated proactively.
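
The detection-to-remediation pipeline in the demo can be reduced to a dispatch decision. Everything below is hypothetical (the finding types and action names are invented for illustration; the real workflow wires Macie and GuardDuty findings through Security Hub into a Lambda making actual API calls):

```java
import java.util.Map;

// Sketch of a remediation playbook: map a finding type to an action.
// Names are illustrative, not AWS identifiers.
public class RemediationRouter {
    static final Map<String, String> PLAYBOOK = Map.of(
            "S3_PUBLIC_BUCKET", "apply-public-access-block",
            "IAM_KEY_LEAKED", "deactivate-access-key",
            "PORT_OPEN_TO_WORLD", "restrict-security-group");

    // Unknown findings never fail open; they page a human instead.
    static String remediate(String findingType) {
        return PLAYBOOK.getOrDefault(findingType, "escalate-to-oncall");
    }

    public static void main(String[] args) {
        System.out.println(remediate("S3_PUBLIC_BUCKET"));
        System.out.println(remediate("SOMETHING_NEW"));
    }
}
```

In the real pipeline each "action" would be an API call, such as applying an S3 public access block; here they are just labels for the routing logic.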

Formal Verification and Provable Security

AWS employs mathematical proof for critical components. The s2n TLS library undergoes formal verification using SAW (Software Analysis Workbench), proving absence of memory safety errors in encryption pathways. Similarly, the Firecracker microVM—underpinning Lambda and Fargate—uses TLA+ specifications to verify isolation properties under concurrency.

These proofs extend to hardware: the Nitro enclave attestation protocol is verified using ProVerif, ensuring man-in-the-middle attacks are impossible even if the host OS is compromised. This rigor transforms empirical testing into mathematical certainty for security invariants.

Organizational Isolation and Compensating Controls

Beyond technical boundaries, AWS enforces organizational separation. Teams that manage customer data cannot access control plane systems, and vice versa. This dual-key approach prevents insider threats: a malicious storage engineer cannot modify billing logic.

Compensating controls provide defense in depth. Even if a service principal is compromised, VPC endpoints restrict traffic to authorized networks. Immutable infrastructure—AMI baking, Infrastructure as Code—ensures configuration drift triggers automated replacement rather than manual fixes.
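
The immutable-infrastructure stance above boils down to a simple policy: drift is replaced, never patched by hand. A minimal sketch, with hypothetical hash inputs:

```java
import java.util.Objects;

// Drift policy sketch: desired state comes from Infrastructure as Code,
// actual state is measured on the running host; a mismatch triggers
// replacement rather than a manual fix.
public class DriftCheck {
    static String action(String desiredConfigHash, String actualConfigHash) {
        return Objects.equals(desiredConfigHash, actualConfigHash)
                ? "in-sync"
                : "replace-instance";
    }

    public static void main(String[] args) {
        System.out.println(action("sha256:abc", "sha256:abc"));
        System.out.println(action("sha256:abc", "sha256:def"));
    }
}
```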

Customer Outcomes and Shared Fate

The infrastructure absorbs complexity so customers need not replicate it. Organizations avoid building global DDoS mitigation, hardware security module fleets, or formal verification teams. Instead, they compose higher-order security patterns: cell-based architectures, zero-trust microsegmentation, and automated compliance evidence collection.

This shared fate model extends to innovation velocity. When AWS hardens the substrate—introducing post-quantum cryptography in KMS, or confidential computing in EC2—customers inherit these capabilities instantly across all regions. Security becomes a force multiplier rather than a drag coefficient.

Conclusion: Security as Substratum for Civilization-Scale Computing

AWS designs security not as a feature but as the invariant property of the computing substrate. Through hardware isolation, formal verification, cultural reinforcement, and independent control planes, it creates a platform where compromise is detected and contained before customer impact. This foundation liberates organizations to build transformative applications—genomic sequencing at population scale, real-time fraud detection for billions of transactions—confident that the underlying security posture is mathematically sound and operationally resilient.


[NDCOslo2024] The History of Computer Art – Anders Norås

In the incandescent interstice of innovation and imagination, where algorithms awaken aesthetics, Anders Norås, a Norwegian designer and digital dreamer, traces the tantalizing trajectory of computer-generated creativity. From 1960s Silicon Valley’s psychedelic pixels to 2020s generative galleries, Anders animates an anthology of artistic audacity, where hackers harnessed harmonics and hobbyists honed holograms. His odyssey, opulent with optical illusions and ontological inquiries, unveils code as canvas, querying: when does datum dance into divinity?

Anders ambles from Bay Area’s beatnik bytes—LSD-laced labs birthing bitmap beauties—to 1970s fine artists’ foray into fractals. Vera Molnar’s algorithmic abstractions and mechanical marks meld math with muse, manifesting minimalism’s machine-made magic.

Psychedelic Pixels: 1960s’ Subcultural Sparks

San Francisco’s hacker havens hummed with hallucinatory hacks: Ken Knowlton’s BEFLIX begat filmic fractals, A. Michael Noll’s noisy nudes nodded to neo-classics. Anders accentuates the alchemy: computers as collaborators, conjuring compositions that captivated cognoscenti.

Algorithmic Abstractions: 1970s’ Fine Art Fusion

Fine artists forayed into flux: Frieder Nake’s generative geometries, Georg Nees’s nested nests—exhibitions eclipsed elites, etching electronics into etudes. Harold Cohen’s AARON, an autonomous auteur, authored arabesques, blurring brushes and binaries.

Rebellious Renderings: 1980s’ Demoscene Dynamism

Demoscene’s defiant demos dazzled: Future Crew’s trance tunnels, Razor 1911’s ray-traced reveries—amateurs authored epics on 8-bits, echoing graffiti’s guerrilla glee. Anders applauds the anarchy: code as contraband, creativity’s clandestine cabal.

Digital Diaspora: Internet’s Infinite Installations

Web’s weave widened worlds: JODI’s jetset glitches, Rafael Lozano-Hemmer’s responsive realms—browsers birthed boundless biennales. Printouts prized: AARON auctions at astronomic asks, affirming artifacts’ allure.

Generative Galas: GenAI’s Grand Gesture

Anders assays AI’s ascent: Midjourney’s mirages, DALL-E’s dreams—yet decries detachment: depthless depictions devoid of dialogue. Jeff Wall’s “A Sudden Gust of Wind” juxtaposed: human heft versus heuristic haze, where context conceals critique.

Anders’s axiom: art awakens awareness—ideas ignite, irrespective of instrument. His entreaty: etch eternally, hand-hewn, honoring humanity’s hallowed hue.


[DotJs2025] Coding and ADHD: Where We Excel

In tech’s torrent, where focus frays and novelty beckons, ADHD’s archetype—attention’s anarchy—often masquerades as malaise, yet harbors hidden harmonics for code’s cadence. Abbey Perini, a full-stack artisan and technical scribe from Atlanta’s tech thicket, celebrated these synergies at dotJS 2025. A Nexcor developer passionate about accessibility and advocacy, Abbey unpacked DSM-5’s deficits—deficit a misnomer for regulation’s riddle—subtypes’ spectrum (inattentive, impulsive, combined), reframing “disorder” as distress’s delimiter.

Abbey’s audit: ADHD’s allure in dev’s domain—dopamine’s deficit sated by puzzles’ pursuit, hyperfocus’s hurricane on hooks or heuristics. Rabbit holes reward: quirks queried, systems synthesized—Danny Donovan’s “itchy” unknowns quelled by Google’s grace. Creativity cascades: unconventional conundrums cracked, prototypes proliferating. Passion’s pendulum: “passionate programmer” badge, hobbies’ graveyard notwithstanding—novelty’s nectar, troubleshooting’s triumph.

Managerial missives: resets’ rapidity (forgetfulness as feature), sprints’ scaffolding (tickets as tethers, novelty’s nod). Praise’s potency: negativity’s nectar negated. Abbey’s anthem: fireworks in cubic confines—embrace eccentricity, harness hyperactivity for heuristics’ harvest.

Neurodiversity’s Nexus in Code

Abbey anatomized: DSM’s dated diction, subtypes’ shades—combined’s chaos, yet coding’s chemistry: dopamine drafts from debugging’s depths.

Strengths’ Spotlight and Strategies

Rabbit trails to resolutions, creativity’s cornucopia—Abbey’s arc: interviews’ “passion,” rabbit holes’ recall. Managerial mantra: sprints soothe, praise potentiates—ADHD’s assets amplified.


Leading Through Reliability: Coaching, Mentoring, and Decision-Making Under Pressure

SRE leadership isn’t only about systems—it’s about people, processes, and resilience under fire.

1) Coaching Team Members Through Debugging

When junior engineers struggle with incidents, I walk them through the scientific method of debugging:

  1. Reproduce the problem.
  2. Collect evidence (logs, metrics, traces).
  3. Form a hypothesis.
  4. Test, measure, refine.

For example, in a memory leak case, I let a junior take the heap dump and explain findings, stepping in only to validate conclusions.

2) Introducing SRE Practices to New Teams

In teams without SRE culture, I start small:

  • Define a single SLO for a critical endpoint.
  • Introduce a burn-rate alert tied to that SLO.
  • Run a blameless postmortem after the first incident.

This creates buy-in without overwhelming the team with jargon.
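
The burn-rate alert in the second bullet has simple arithmetic behind it. A sketch, where the 14.4x fast-burn threshold is a common industry default rather than anything from this post:

```java
// Burn-rate arithmetic behind an SLO alert. A burn rate of 1.0 means the
// service consumes its error budget exactly on schedule over the SLO window.
public class BurnRate {
    // errorRate: fraction of failed requests in the window; sloTarget e.g. 0.999
    static double burnRate(double errorRate, double sloTarget) {
        double errorBudget = 1.0 - sloTarget; // allowed failure fraction
        return errorRate / errorBudget;
    }

    // Common fast-burn policy (assumed default, not from the post):
    // page when a 1-hour window burns more than 14.4x the budget.
    static boolean shouldPage(double errorRate, double sloTarget) {
        return burnRate(errorRate, sloTarget) > 14.4;
    }

    public static void main(String[] args) {
        System.out.println(burnRate(0.001, 0.999));   // on-schedule burn, ~1.0
        System.out.println(shouldPage(0.02, 0.999));  // 20x burn: page
    }
}
```

The 14.4 figure corresponds to roughly 2% of a 30-day error budget consumed in a single hour (14.4 / 720 hours = 0.02), which is why it is a popular paging threshold.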

3) Prioritizing and Delegating in High-Pressure Situations

During outages, prioritization is key:

  • Delegate evidence gathering (thread dumps, logs) to one engineer.
  • Keep communication flowing with stakeholders (status every 15 minutes).
  • Focus leadership on mitigation and rollback decisions.

After stabilization, I lead the postmortem, ensuring learnings feed back into automation, monitoring, and runbooks.

[DevoxxGR2025] Email Re-Platforming Case Study

George Gkogkolis from Travelite Group shared a 15-minute case study at Devoxx Greece 2025 on re-platforming to process 1 million emails per hour.

The Challenge

Travelite Group, a global OTA handling flight tickets in 75 countries, processes 350,000 emails daily, expected to hit 2 million. Previously, a SaaS ticketing system struggled with growing traffic, poor licensing, and subpar user experience. Sharding the system led to complex agent logins and multiplexing issues with the booking engine. Market research revealed no viable alternatives, as vendors’ licensing models couldn’t handle the scale, prompting an in-house solution.

The New Platform

The team built a cloud-native, microservices-based platform within a year, going live in December 2024. It features a receiving app, a React-based web UI built with Mantine, a Spring Boot backend, and Amazon DocumentDB, integrated with Amazon SES and S3. Emails land on a Postfix server, are stored in S3, and are processed via EventBridge and SQS. Data migration was critical: terabytes of EML files and databases were moved in under two months, with a peak throughput of 1 million emails per hour achieved by scaling to 50 receiver instances.

Lessons Learned

Starting with migration would have eased performance optimization, as synthetic data didn’t match production scale. Cloud-native deployment simplified scaling, and a backward-compatible API eased integration. Open standards (EML, Open API) ensured reliability. Future plans include AI and LLM enhancements by 2025, automating domain allocation for scalability.


Observability for Modern Systems: From Metrics to Traces

Good monitoring doesn’t just tell you when things are broken—it explains why.

1) White-Box vs Black-Box Monitoring

White-box: metrics from inside the system (CPU, memory, app metrics). Example: http_server_requests_seconds from Spring Actuator.

Black-box: synthetic probes simulating user behavior (ping APIs, load test flows). Example: periodic “buy flow” test in production.
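
A black-box probe of this kind can be sketched with the JDK alone. Here a local com.sun.net.httpserver.HttpServer stands in for the real endpoint (in production the URL would be the actual "buy flow" route, and the result would be shipped to your metrics backend):

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Black-box probe sketch: hit an endpoint the way a user would and record
// status plus latency; a throwaway local server stands in for the target.
public class SyntheticProbe {
    record ProbeResult(int status, long latencyMillis) {}

    static ProbeResult probe(String url) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
        long start = System.nanoTime();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        return new ProbeResult(response.statusCode(),
                (System.nanoTime() - start) / 1_000_000);
    }

    static ProbeResult runDemo() throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/health", exchange -> {
            byte[] body = "ok".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) { os.write(body); }
        });
        server.start();
        try {
            return probe("http://localhost:" + server.getAddress().getPort() + "/health");
        } finally {
            server.stop(0);
        }
    }

    public static void main(String[] args) throws Exception {
        ProbeResult r = runDemo();
        System.out.println("status=" + r.status() + " latencyMs=" + r.latencyMillis());
    }
}
```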

2) Tracing Distributed Transactions

Use OpenTelemetry to propagate context across microservices:

// build.gradle: add the OTLP exporter dependency
implementation "io.opentelemetry:opentelemetry-exporter-otlp:1.30.0"

// Java: imports needed for manual spans
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.context.Scope;

// Annotate spans around the checkout flow
Span span = tracer.spanBuilder("checkout").startSpan();
try (Scope scope = span.makeCurrent()) {
    paymentService.charge(card);
    inventoryService.reserve(item);
} finally {
    span.end();
}

These traces flow into Jaeger or Grafana Tempo to visualize bottlenecks across services.

3) Example Dashboard for a High-Value Service

  • Availability: % successful requests (SLO vs actual).
  • Latency: p95/p99 end-to-end response times.
  • Error Rate: 4xx vs 5xx breakdown.
  • Dependency Health: DB latency, cache hit ratio, downstream service SLOs.
  • User metrics: active sessions, checkout success rate.

[GoogleIO2024] What’s New in ChromeOS: Advancements in Accessibility and Performance

The landscape of personal computing continues to evolve, with ChromeOS at the forefront of delivering intuitive and robust experiences. Marisol Ryu, alongside Emilie Roberts and Sam Richard, outlined the platform’s ongoing mission to democratize powerful technology. Their discussion emphasized enhancements that cater to diverse user needs, from premium hardware integrations to refined app ecosystems, ensuring that simplicity and capability go hand in hand.

Expanding Access Through Premium Hardware and AI Features

Marisol highlighted the core philosophy of ChromeOS, which has remained steadfast since its inception nearly fifteen years ago: to provide straightforward yet potent computing solutions for a global audience. This vision manifests in the introduction of Chromebook Plus, a premium lineup designed to meet the demands of users seeking elevated performance without compromising affordability.

Collaborations with manufacturers such as Acer, Asus, HP, and Lenovo have yielded eight new models, each boasting double the processing power of top-selling devices from 2022. Starting at $399, these laptops make high-end computing more attainable. Beyond hardware, the “Plus” designation incorporates advanced Google AI functionalities, like “Help Me Write,” which assists in crafting or refining short-form content such as blog titles or video descriptions. Available soon for U.S. users, this tool exemplifies how AI can streamline everyday tasks, fostering creativity and productivity.

Emilie expanded on the integration of AI to personalize user interactions, noting features that adapt to individual workflows. This approach aligns with broader industry trends toward user-centric design, where technology anticipates needs rather than reacting to them. The emphasis on accessibility ensures that these advancements benefit a wide spectrum of users, from students to professionals.

Enhancing Web and Android App Ecosystems

Sam delved into optimizations for web applications, introducing “tab modes” that allow seamless switching between tabbed and windowed views. This flexibility enhances multitasking, particularly on larger screens, and reflects feedback from developers aiming to create more immersive experiences. Native-like install prompts further bridge the gap between web and desktop apps, encouraging users to engage more deeply.

For Android apps, testing and debugging tools have seen significant upgrades. The Android Emulator’s resizable window supports various form factors, including foldables and tablets, enabling developers to simulate real-world scenarios accurately. Integration with ChromeOS’s virtual machine ensures consistent performance across devices.

Gaming capabilities have also advanced, with “game controls” allowing customizable mappings for touch-only titles. This addresses input challenges on non-touch Chromebooks, making games accessible via keyboards, mice, or gamepads. “Game Capture” facilitates sharing screenshots and videos without disrupting gameplay, boosting social engagement and app visibility.

These improvements stem from close partnerships with developers, resulting in polished experiences that leverage ChromeOS’s strengths in security and speed.

Fostering Developer Collaboration and Future Innovations

The session underscored the importance of community feedback in shaping ChromeOS. Resources like the developer newsletter and RSS feed keep creators informed of updates, while platforms such as g.co/chromeosdev invite ongoing dialogue.

Looking ahead, the team envisions further AI integrations to enhance accessibility, such as adaptive interfaces for diverse abilities. By prioritizing inclusivity, ChromeOS continues to empower users worldwide, transforming curiosity into connection and creativity.


Java/Spring Troubleshooting: From Memory Leaks to Database Bottlenecks

Practical strategies and hands-on tips for diagnosing and fixing performance issues in production Java applications.

1) Approaching Memory Leaks

Memory leaks in Java often manifest as OutOfMemoryError exceptions or rising heap usage visible in monitoring dashboards. My approach:

  1. Reproduce in staging: Apply the same traffic profile (e.g., JMeter load test).
  2. Collect a heap dump:
    jmap -dump:format=b,file=heap.hprof <PID>
  3. Analyze with tools: Eclipse MAT, VisualVM, or YourKit to detect uncollected references.
  4. Fix common causes:
    • Unclosed streams or ResultSets.
    • Static collections holding references.
    • Caches without eviction policies (e.g., replace HashMap with Caffeine).
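
The third common cause, a cache without eviction, is easy to reproduce and fix. In this sketch a size-bounded LinkedHashMap stands in for Caffeine so the example stays dependency-free:

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

// Contrast: an unbounded static cache (a leak) vs a size-bounded LRU cache.
public class CacheLeakDemo {
    // Anti-pattern: grows forever, pinning every entry in the heap.
    static final Map<String, byte[]> LEAKY = new HashMap<>();

    // Fix: evict the eldest entry once the cache exceeds its bound.
    static <K, V> Map<K, V> boundedCache(int maxEntries) {
        return new LinkedHashMap<>(16, 0.75f, true) {
            @Override protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > maxEntries;
            }
        };
    }

    public static void main(String[] args) {
        Map<Integer, String> cache = boundedCache(100);
        for (int i = 0; i < 10_000; i++) cache.put(i, "session-" + i);
        System.out.println("entries retained: " + cache.size()); // stays at 100
    }
}
```

With Caffeine the equivalent is a maximumSize-bounded cache; the point is the eviction policy, not the library.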

2) Profiling and Fixing High CPU Usage

High CPU can stem from tight loops, inefficient queries, or excessive logging.

  • Step 1: Sample threads
    jstack <PID> > thread-dump.txt

    Identify “hot” threads consuming CPU.

  • Step 2: Profile with async profilers like async-profiler or Java Flight Recorder.
    java -XX:StartFlightRecording=duration=60s,filename=recording.jfr -jar app.jar
  • Step 3: Refactor:
    • Replace String concatenation in loops with StringBuilder.
    • Optimize regex (use Pattern reuse instead of String.matches()).
    • Review logging level (DEBUG inside loops is expensive).
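
Two of these refactors can be shown side by side; the email pattern and method names are invented for the example:

```java
import java.util.List;
import java.util.regex.Pattern;

// Hot-loop fixes: reuse a compiled Pattern instead of String.matches(),
// and build strings with StringBuilder instead of += concatenation.
public class HotLoopFixes {
    // Compiled once; String.matches() would recompile the regex on every call.
    private static final Pattern EMAIL = Pattern.compile("[^@\\s]+@[^@\\s]+\\.[^@\\s]+");

    static long countValidEmails(List<String> inputs) {
        return inputs.stream().filter(s -> EMAIL.matcher(s).matches()).count();
    }

    // O(n) appends instead of the O(n^2) churn of concatenation in a loop.
    static String joinCsv(List<String> parts) {
        StringBuilder sb = new StringBuilder();
        for (String p : parts) {
            if (sb.length() > 0) sb.append(',');
            sb.append(p);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(countValidEmails(List.of("a@b.co", "not-an-email")));
        System.out.println(joinCsv(List.of("a", "b", "c")));
    }
}
```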

3) Tuning GC for Low-Latency Services

Garbage collection (GC) can cause pauses. For trading, gaming, or API services, tuning matters:

  • Choose the right collector:
    • G1GC for balanced throughput and latency (default in recent JDKs).
    • ZGC or Shenandoah for ultra-low latency workloads (<10ms pauses).
  • Sample configs:
    -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -XX:+ParallelRefProcEnabled
  • Monitor GC logs with GC Toolkit or Grafana dashboards.

4) Handling Database Bottlenecks

Spring apps often hit bottlenecks in DB queries rather than CPU.

  1. Enable SQL logging: in application.properties
    spring.jpa.show-sql=true
  2. Profile queries: Use p6spy or database AWR reports.
  3. Fixes:
    • Add missing indexes (EXPLAIN ANALYZE is your friend).
    • Batch inserts (saveAll() in Spring Data with hibernate.jdbc.batch_size).
    • Introduce caching (Spring Cache, Redis) for hot reads.
    • Use connection pools like HikariCP with tuned settings:
      spring.datasource.hikari.maximum-pool-size=30
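
The batching idea behind hibernate.jdbc.batch_size can be sketched without a database: chunk the rows so each round-trip carries many inserts instead of one. The partitioning below is illustrative; Spring Data and Hibernate do this under the hood when batching is enabled:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of JDBC-style batching: split a large insert list into chunks so
// each database round-trip carries batchSize rows instead of one.
public class BatchInsertSketch {
    static <T> List<List<T>> partition(List<T> rows, int batchSize) {
        List<List<T>> batches = new ArrayList<>();
        for (int i = 0; i < rows.size(); i += batchSize) {
            batches.add(rows.subList(i, Math.min(i + batchSize, rows.size())));
        }
        return batches;
    }

    public static void main(String[] args) {
        List<Integer> rows = new ArrayList<>();
        for (int i = 0; i < 95; i++) rows.add(i);
        // 95 rows with batch size 30 -> 4 round-trips instead of 95
        System.out.println("batches=" + partition(rows, 30).size());
    }
}
```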

Bottom line: Troubleshooting is both art and science—measure, hypothesize, fix, and validate with metrics.

[DotAI2024] Marjolaine Grondin – AI as the Ultimate Entrepreneurial Ally

Marjolaine Grondin, trailblazing co-founder of Jam—pioneering Francophone chatbot, now woven into June Marketing’s warp—reflected on AI’s apostolic role at DotAI 2024. Forbes’ 30 Under 30 luminary and MIT’s Top Innovator Under 35, Grondin’s odyssey—from Sciences Po to Berkeley’s blaze, HEC’s honing—crested at Meta’s F8, the first femme founder to orate. Her homily: AI as alter ego, alchemist of aspirations, transfiguring toil into transcendence.

Rekindling the Spark: From Frustration to Fabrication

Grondin’s genesis: a decade dawning with Jean-Claude’s jeremiad—”no app surfeit”—propelling Jam’s pivot from platform to progeny, a student savant ahead of its epoch. Exit’s exhale: jettisoning Jira’s juggernauts, Trello’s tomes—embracing AI’s embrace, where prompts propel prototypes.

This liberation, she luminously argued, liberates luminaries: builders bereft of bots’ bounty squander sparks on scaffolding. Grondin’s gambit: bespoke bedfellows—Claude as confidant, charting charters; Midjourney as muse, manifesting mockups; Perplexity as polymath, probing precedents.

She shared serendipities: Claude’s counsel catalyzing company crystallizations—hypotheses honed, hazards hazarded—yielding validations velvety as velvet.

Embracing the Uneasy Endowment: Humanity’s Horizon

Grondin grappled with AI’s “unsettling boon”: routines relinquished, revealing rifts—what renders us rare? This disquiet, she divined, is destiny’s dispatch: urging uniqueness—curiosity’s caress, creativity’s conflagration, compassion’s core.

Meta’s Yann LeCun’s quip—”dumber than felines”—reiterated: AI augments, not annexes—propelling to “ikigai’s” interstice: passions pursued, proficiencies parlayed, planetary pleas placated.

Grondin’s gallery: app augmentations, arcana unlocked, tomes tendered, tennis with tots, tableaux transcendent. Her heuristic: harvest discomfort as catalyst, AI as accelerant—best lives beckoned.

In benediction, Grondin bestowed a boon: bespoke GPT genesis—tinker, tailor, transmute. AI, she avowed, isn’t usurper but usher—toward tapestries uniquely threaded.


[DevoxxUK2025] Software Excellence in Large Orgs through Technical Coaching

Emily Bache, a seasoned technical coach, shared her expertise at Devoxx UK 2025 on fostering software excellence in large organizations through technical coaching. Drawing on DORA research, which correlates high-quality code with faster delivery and better organizational outcomes, Emily emphasized practices like test-driven development (TDD) and refactoring to maintain code quality. She introduced technical coaching as a vital role, involving short, interactive learning hours and ensemble programming to build developer skills. Her talk, enriched with a refactoring demo and insights from Hartman’s proficiency taxonomy, offered a roadmap for organizations to reduce technical debt and enhance team performance.

The Importance of Code Quality

Emily began by referencing DORA research, which highlights capabilities like test automation, code maintainability, and small-batch development as predictors of high-performing teams. She cited a study by Adam Tornhill and Markus Borg, showing that poor-quality code can increase development time by up to 124%, with worst-case scenarios taking nine times longer. Technical debt, or “cruft,” slows feature delivery and makes schedules unpredictable. Practices like TDD, refactoring, pair programming, and clean architecture are essential to maintain code quality, ensuring software remains flexible and cost-effective to modify over time.

Technical Coaching as a Solution

In large organizations, Emily noted a gap in technical leadership, with architects often focused on high-level design and teams lacking dedicated tech leads. Technical coaches bridge this gap, working part-time across teams to teach skills and foster a quality culture. Unlike code reviews, which reinforce existing knowledge, coaching proactively builds skills through hands-on training. Emily’s approach involves collaborating with architects and tech leads, aligning with organizational goals while addressing low-level design practices like TDD and refactoring, which are often neglected but critical for maintainable code.

Learning Hours for Skill Development

Emily’s learning hours are short, interactive sessions inspired by Sharon Bowman’s training techniques. Developers work in pairs on exercises, such as refactoring katas (e.g., Tennis Refactoring Kata), to practice skills like extracting methods and naming conventions. A demo showcased decomposing a complex method into readable, well-named functions, emphasizing deterministic refactoring tools over AI assistants, which excel at writing new code but struggle with refactoring. These sessions teach vocabulary for discussing code quality and provide checklists for applying skills, ensuring developers can immediately use what they learn.

Ensemble Programming for Real-World Application

Ensemble programming brings teams together to work on production code under a coach’s guidance. Unlike toy exercises, these sessions tackle real, complex problems, allowing developers to apply TDD and refactoring in context. Emily highlighted the collaborative nature of ensembles, where senior developers mentor juniors, fostering team learning. By addressing production code, coaches ensure skills translate to actual work, bridging the gap between training and practice. This approach helps teams internalize techniques like small-batch development and clean design, improving code quality incrementally.

Hartman’s Proficiency Taxonomy

Emily introduced Hartman’s proficiency taxonomy to explain skill acquisition, contrasting it with Bloom’s thinking-focused taxonomy. The stages—familiarity, comprehension, conscious effort, conscious action, proficiency, and expertise—map the journey from knowing a skill exists to applying it fluently in production. Learning hours help developers move from familiarity to conscious effort with exercises and feedback, while ensembles push them toward proficiency by applying skills to real code. Coaches tailor interventions based on a team’s proficiency level, ensuring steady progress toward mastery.

Getting Started with Technical Coaching

Emily encouraged organizations to adopt technical coaching, ideally led by tech leads with management support to allocate time for mentoring. She shared resources from her Samman Coaching website, including kata descriptions and learning hour guides, available through her nonprofit society for technical coaches. For mixed-experience teams, she pairs senior developers with juniors to foster mentoring, turning diversity into a strength. Her book, Samman Technical Coaching, and monthly online meetups provide further support for aspiring coaches, aiming to spread best practices and elevate code quality across organizations.
