
Posts Tagged ‘AWS’

[DefCon32] Atomic Honeypot: A MySQL Honeypot That Drops Shells

Alexander Rubin and Martin Rakhmanov, security engineers at Amazon Web Services’ RDS Red Team, present a groundbreaking MySQL honeypot designed to counterattack malicious actors. Leveraging vulnerabilities CVE-2023-21980 and CVE-2024-21096, their “Atomic Honeypot” exploits attackers’ systems, uncovering new attack vectors. Alexander and Martin demonstrate how this active defense mechanism turns the tables on adversaries targeting database servers.

Designing an Active Defense Honeypot

Alexander introduces the Atomic Honeypot, a high-interaction MySQL server that mimics legitimate databases to attract bots. Unlike passive honeypots, this system exploits vulnerabilities in MySQL’s client programs (CVE-2023-21980) and mysqldump utility (CVE-2024-21096), enabling remote code execution on attackers’ systems. Their approach, detailed at DEF CON 32, uses a chain of three vulnerabilities, including an arbitrary file read, to analyze and counterattack malicious code.

Exploiting Attacker Systems

Martin explains the technical mechanics, focusing on the MySQL protocol’s server-initiated nature, which allows their honeypot to manipulate client connections. By crafting a rogue server, they executed command injections, downloading attackers’ Python scripts designed for brute-forcing passwords and data exfiltration. This enabled Alexander and Martin to study attacker behavior, uncovering two novel MySQL attack vectors.

Ethical and Practical Implications

The duo addresses the ethical considerations of active defense, emphasizing responsible use to avoid collateral damage. Their honeypot, which requires no specialized tools and can be set up with a vulnerable MySQL instance, empowers researchers to replicate their findings. However, Martin notes that Oracle’s recent patches may limit the window for experimentation, urging swift action by the community.

Future of Defensive Security

Concluding, Alexander advocates for integrating active defense into cybersecurity strategies, highlighting the honeypot’s ability to provide actionable intelligence. Their work, supported by AWS, inspires researchers to explore innovative countermeasures, strengthening database security against relentless bot attacks. By sharing their exploit chain, Alexander and Martin pave the way for proactive defense mechanisms.


Using Redis as a Shared Cache in AWS: Architecture, Code, and Best Practices

In today’s distributed, cloud-native environments, shared caching is no longer an optimization—it’s a necessity. Whether you’re scaling out web servers, deploying stateless containers, or orchestrating microservices in Kubernetes, a centralized, fast-access cache is a cornerstone for performance and resilience.

This post explores why Redis, especially via Amazon ElastiCache, is an exceptional choice for this use case—and how you can use it in production-grade AWS architectures.

🔧 Why Use Redis for Shared Caching?

Redis (REmote DIctionary Server) is an in-memory key-value data store renowned for:

  • Lightning-fast performance (sub-millisecond)
  • Built-in data structures: Lists, Sets, Hashes, Sorted Sets, Streams
  • Atomic operations: Perfect for counters, locks, session control
  • TTL and eviction policies: Cache data that expires automatically
  • Wide language support: Python, Java, Node.js, Go, and more

☁️ Redis in AWS: Use ElastiCache for Simplicity & Scale

Instead of self-managing Redis on EC2, you can use Amazon ElastiCache for Redis:

  • Fully managed Redis with patching, backups, monitoring
  • Multi-AZ support with automatic failover
  • Clustered mode for horizontal scaling
  • Encryption, VPC isolation, IAM authentication

ElastiCache enables you to focus on application logic, not infrastructure.

🌐 Real-World Use Cases

  • Session Sharing: store auth/session tokens accessible by all app instances
  • Rate Limiting: atomic counters (INCR) enforce per-user quotas (see the sketch below)
  • Leaderboards: sorted sets track rankings in real time
  • Caching SQL Results: avoid repetitive DB hits with the cache-aside pattern
  • Queues: lightweight task queues using LPUSH / BRPOP
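
For example, a minimal fixed-window rate limiter built on INCR and EXPIRE might look like the following; the key format, limit, and window are illustrative assumptions, not a prescribed design:

import redis

r = redis.Redis(host='my-redis-host', port=6379, db=0)

def allow_request(user_id, limit=100, window_seconds=60):
    """Return True while the user stays under `limit` calls in the current window."""
    key = f"ratelimit:{user_id}"
    count = r.incr(key)  # atomic; creates the key at 1 if it does not exist
    if count == 1:
        r.expire(key, window_seconds)  # start the window on the first hit
    return count <= limit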

📈 Architecture Pattern: Cache-Aside with Redis

Here’s the common cache-aside strategy:

  1. App queries Redis for a key.
  2. If hit ✅, return cached value.
  3. If miss ❌, query DB, store result in Redis.

Python Example with redis and psycopg2:

import redis
import psycopg2
import json

r = redis.Redis(host='my-redis-host', port=6379, db=0)
conn = psycopg2.connect(dsn="...")

def get_user(user_id):
    # 1. Check Redis first.
    cached = r.get(f"user:{user_id}")
    if cached:
        return json.loads(cached)

    # 2. On a miss, query the database.
    with conn.cursor() as cur:
        cur.execute("SELECT id, name FROM users WHERE id = %s", (user_id,))
        row = cur.fetchone()
        if row is None:
            return None

        # 3. Store the result with a one-hour TTL, and return it in the
        #    same dict shape as the cached branch.
        user = {'id': row[0], 'name': row[1]}
        r.setex(f"user:{user_id}", 3600, json.dumps(user))
        return user

🌍 Multi-Tiered Caching

To reduce Redis load and latency further:

  • Tier 1: In-process (e.g., Guava, Caffeine)
  • Tier 2: Redis (ElastiCache)
  • Tier 3: Database (RDS, DynamoDB)

This pattern ensures that most reads are served from memory.
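
As a rough illustration, here is a minimal two-tier read path in Python, with a plain in-process dict standing in for tier 1 (Guava and Caffeine are the JVM equivalents); the hostname, TTL, and fallback behavior are illustrative assumptions:

import time
import redis

r = redis.Redis(host='my-redis-host', port=6379, db=0)
_local = {}      # tier 1: key -> (value, expires_at)
LOCAL_TTL = 30   # seconds; keep tier 1 short so it never long outlives Redis

def get_cached(key):
    entry = _local.get(key)
    if entry and entry[1] > time.monotonic():
        return entry[0]                    # tier 1 hit: no network round trip
    value = r.get(key)                     # tier 2: Redis
    if value is not None:
        _local[key] = (value, time.monotonic() + LOCAL_TTL)
    return value                           # None means: fall through to the DB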

⚠️ Common Pitfalls to Avoid

  • Treating Redis as a database: use RDS or DynamoDB for persistence
  • No expiration: always set TTLs to avoid memory pressure
  • No high availability: use ElastiCache Multi-AZ with automatic failover
  • Poor security: use VPC-only access and enable encryption/auth

🌐 Bonus: Redis for Lambda

Lambda is stateless, so Redis is perfect for:

  • Shared rate limiting
  • Caching computed values
  • Centralized coordination

Use redis-py, ioredis, or lettuce in your function code.
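
For instance, a minimal redis-py handler that reuses one connection across invocations could look like this; the endpoint and key names are illustrative, and the function needs network access to ElastiCache (typically via its VPC configuration):

import json
import redis

# Created once per execution environment, then reused across invocations.
r = redis.Redis(host='my-elasticache-endpoint', port=6379, db=0)

def handler(event, context):
    user = event.get("user_id", "anonymous")
    calls = r.incr(f"calls:{user}")  # state shared by every concurrent Lambda
    return {"statusCode": 200, "body": json.dumps({"user": user, "calls": calls})}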

🔺 Conclusion

If you’re building modern apps on AWS, ElastiCache with Redis is a must-have for state sharing, performance, and reliability. It plays well with EC2, ECS, Lambda, and everything in between. It’s mature, scalable, and robust.

Whether you’re running a high-scale SaaS or a small internal app, Redis gives you a major performance edge without locking you into complexity.

[DefCon32] AWS CloudQuarry: Digging for Secrets in Public AMIs

Eduard Agavriloae and Matei Josephs, security researchers from KPMG Romania and Syncubes, present a chilling exploration of vulnerabilities in public Amazon Machine Images (AMIs). Their project, scanning 3.1 million AMIs, uncovered exposed AWS access credentials, posing risks of account takeovers. Eduard and Matei share their methodologies and advocate for robust cloud security practices to mitigate these threats.

Uncovering Secrets in Public AMIs

Eduard opens by detailing their CloudQuarry project, which scanned millions of public AMIs using tools like ScoutSuite. They uncovered critical issues, such as exposed access keys, that could let attackers compromise AWS accounts. Supported by KPMG Romania, Eduard and Matei’s research highlights the pervasive issue of misconfigured cloud resources, a problem they believe will persist due to human error.

Methodologies and Tools

Matei explains their approach, leveraging automated tools to identify public AMIs and extract sensitive data. Their analysis revealed credentials embedded in AMIs, often overlooked by organizations. By responsibly disclosing findings to affected parties, Eduard and Matei avoided exploiting these keys, demonstrating ethical restraint while highlighting the potential for malicious actors to cause widespread damage.

Risks of Account Takeover

The duo delves into the consequences of exposed credentials, which could lead to unauthorized access, data breaches, or ransomware attacks. Their findings, which they disclosed to affected companies while asking for nothing more than T-shirts in return, underscore how easily public AMIs can be exploited. Eduard emphasizes the adrenaline rush of discovering such vulnerabilities, reflecting the stakes in cloud security.

Strengthening Cloud Security

Concluding, Matei advocates for enhanced configuration reviews and automated monitoring to prevent AMI exposures. Their collaborative approach, inviting community feedback, reinforces the importance of collective vigilance in securing cloud environments. By sharing their tools and lessons, Eduard and Matei empower organizations to fortify their AWS deployments against emerging threats.


AWS S3 Warning: “No Content Length Specified for Stream Data” – What It Means and How to Fix It

If you’re working with the AWS SDK for Java and you’ve seen the following log message:

WARN --- AmazonS3Client : No content length specified for stream data. Stream contents will be buffered in memory and could result in out of memory errors.

…you’re not alone. This warning might seem harmless at first, but it can lead to serious issues, especially in production environments.

What’s Really Happening?

This message appears when you upload a stream to Amazon S3 without explicitly setting the content length in the request metadata.

When that happens, the SDK doesn’t know how much data it’s about to upload, so it buffers the entire stream into memory before sending it to S3. If the stream is large, this could lead to:

  • Excessive memory usage
  • Slow performance
  • OutOfMemoryError crashes

✅ How to Fix It

Whenever you upload a stream, make sure you calculate and set the content length using ObjectMetadata.

Example with Byte Array:

byte[] bytes = ...; // your content
ByteArrayInputStream inputStream = new ByteArrayInputStream(bytes);

ObjectMetadata metadata = new ObjectMetadata();
metadata.setContentLength(bytes.length); // the SDK now knows exactly how many bytes to send

PutObjectRequest request = new PutObjectRequest(bucketName, key, inputStream, metadata);
s3Client.putObject(request);

Example with File:

File file = new File("somefile.txt");

ObjectMetadata metadata = new ObjectMetadata();
metadata.setContentLength(file.length());

// try-with-resources closes the stream even if the upload fails
try (FileInputStream fileStream = new FileInputStream(file)) {
    PutObjectRequest request = new PutObjectRequest(bucketName, key, fileStream, metadata);
    s3Client.putObject(request);
}

What If You Don’t Know the Length?

Sometimes, you can’t know the content length ahead of time (e.g., you’re piping data from another service). In that case:

  • Write the stream to a ByteArrayOutputStream first (good for small data)
  • Use the S3 Multipart Upload API to stream large files without specifying the total size (see the sketch below)
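
Here is a minimal sketch of the multipart option against the v1 AWS SDK for Java (the same generation that logs this warning); each part’s size is known when it is sent, so nothing beyond one part is ever buffered. The class and helper names are illustrative, and production code would add retries:

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.*;

import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.ArrayList;
import java.util.List;

public class StreamingUploader {
    private static final int PART_SIZE = 5 * 1024 * 1024; // S3's minimum part size is 5 MB

    public static void upload(AmazonS3 s3, String bucket, String key, InputStream in) throws IOException {
        InitiateMultipartUploadResult init =
                s3.initiateMultipartUpload(new InitiateMultipartUploadRequest(bucket, key));
        List<PartETag> etags = new ArrayList<>();
        byte[] buffer = new byte[PART_SIZE];
        int partNumber = 1;
        try {
            int read;
            while ((read = fill(in, buffer)) > 0) {
                UploadPartRequest part = new UploadPartRequest()
                        .withBucketName(bucket).withKey(key)
                        .withUploadId(init.getUploadId())
                        .withPartNumber(partNumber++)
                        .withInputStream(new ByteArrayInputStream(buffer, 0, read))
                        .withPartSize(read); // the length of THIS part is always known
                etags.add(s3.uploadPart(part).getPartETag());
            }
            s3.completeMultipartUpload(new CompleteMultipartUploadRequest(
                    bucket, key, init.getUploadId(), etags));
        } catch (IOException | RuntimeException e) {
            // Abort so incomplete parts do not keep accruing storage charges.
            s3.abortMultipartUpload(new AbortMultipartUploadRequest(bucket, key, init.getUploadId()));
            throw e;
        }
    }

    // Read until the buffer is full or the stream ends; returns the bytes read.
    private static int fill(InputStream in, byte[] buf) throws IOException {
        int off = 0;
        while (off < buf.length) {
            int n = in.read(buf, off, buf.length - off);
            if (n < 0) break;
            off += n;
        }
        return off;
    }
}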

Conclusion

Always set the content length when uploading to S3 via streams. It’s a small change that prevents large-scale problems down the road.

By taking care of this up front, you make your service safer, more memory-efficient, and more scalable.

Got questions or dealing with tricky S3 upload scenarios? Drop them in the comments!

[DevoxxPL2022] From Private Through Hybrid to Public Cloud – Product Migration • Paweł Piekut

At Devoxx Poland 2022, Paweł Piekut, a seasoned software developer at Bosch, delivered an insightful presentation on the migration of their e-bike cloud platform from a private cloud to a public cloud environment. Drawing from his expertise in Java, Kotlin, and .NET, Paweł narrated the intricate journey of transitioning a complex IoT ecosystem, highlighting the technical challenges, strategic decisions, and lessons learned. His talk offered a practical roadmap for organizations navigating the complexities of cloud migration, emphasizing the balance between innovation, scalability, and compliance.

Navigating the Private Cloud Landscape

Paweł began by outlining the initial deployment of Bosch’s e-bike cloud on a private cloud developed internally by the company’s IT group. This proprietary platform, designed to support the e-bike ecosystem, facilitated communication between hardware components—such as drive units, batteries, and controllers—and the mobile app, which interfaced with the cloud. The cloud served multiple stakeholders, including factories for device flashing, manufacturers for configuration, authorized services for diagnostics, and end-users for features like activity tracking and bike locking. However, the private cloud faced significant limitations. Scalability was constrained, requiring manual capacity requests and investments, which hindered agility. Downtimes were frequent, acceptable for development but untenable for production. Additionally, the platform’s bespoke nature made it challenging to hire experienced talent and limited developer engagement due to its lack of market-standard tools.

Despite these drawbacks, the private cloud offered advantages. Its deployment within Bosch’s secure network ensured high performance and simplified compliance with data privacy regulations, critical for an international product subject to data localization laws. Costs were predictable, and the absence of vendor lock-in, thanks to open-source frameworks, provided flexibility. However, the need for modern scalability and developer-friendly tools drove the decision to explore public cloud solutions, with Amazon Web Services (AWS) selected for its robust support.

The Hybrid Cloud Conundrum

Transitioning to a hybrid cloud model introduced a blend of private and public cloud environments, creating new challenges. Bosch’s internal policy of “on-transit data” required data processed in the public cloud to be returned to the private cloud, necessitating complex and secure data transfers. While AWS Direct Connect facilitated this, the hybrid setup led to operational complexities. Only select services ran on AWS, causing a divide among developers eager to work with widely recognized public cloud tools. Technical issues, such as Kafka’s inaccessibility from the private cloud, required significant effort to resolve. Error tracing across clouds was cumbersome, with Splunk used in the private cloud and Elasticsearch in the public cloud, complicating root-cause analysis. The simultaneous migration of Jenkins added further complexity, with duplicated jobs and confusing configurations.

Despite these hurdles, the hybrid model offered benefits. It allowed Bosch to leverage the private cloud’s security for sensitive data while tapping into the public cloud’s scalability for peak loads. This setup supported disaster recovery and compliance with data localization requirements. However, the on-transit data concept proved overly complex, leading to dissatisfaction and prompting a strategic shift toward a cloud-first approach, prioritizing public cloud deployment unless justified otherwise.

Embracing the Public Cloud

The full migration to AWS marked a pivotal phase, divided into three stages. First, the team focused on exploration and training to master AWS products and the pay-as-you-go pricing model, which made every developer accountable for costs. This stage emphasized understanding managed versus unmanaged services, such as Kubernetes and Kafka, and ensuring backup compatibility across clouds. The second stage involved building new applications on AWS, addressing unknowns and ensuring secure communication with external systems. Finally, existing services were migrated from private to public cloud, starting with development and progressing to production. Throughout, the team maintained services in both environments, managing separate repositories and addressing critical bugs, such as Log4j vulnerabilities, across both.

To mitigate vendor lock-in, Bosch adopted a cloud-agnostic approach, using Terraform for infrastructure-as-code instead of AWS-specific CloudFormation. While tools like S3 and DynamoDB were embraced for their market-leading performance, backups were standardized to ensure portability. The public cloud’s vast community, extensive documentation, and readily available resources reduced knowledge silos and enhanced developer satisfaction, making the migration a transformative step for innovation and agility.

Lessons for Cloud Migration

Paweł’s experience underscores the importance of aligning cloud strategy with organizational needs. The public cloud’s immediate resource availability and developer-friendly tools accelerated development, but required careful cost management. Hybrid cloud offered flexibility but introduced complexity, particularly with data transfers. Private cloud provided security and control but lacked scalability. Paweł emphasized defining precise requirements—budget, priorities, and compliance—before choosing a cloud model. Startups may favor public clouds for agility, while regulated industries might opt for private or hybrid solutions to prioritize data security and network performance. This strategic clarity ensures a successful migration tailored to business goals.


[PHPForumParis2021] Migrating a Bank-as-a-Service to Serverless – Louis Pinsard

Louis Pinsard, an engineering manager at Theodo, captivated the Forum PHP 2021 audience with a detailed recounting of his journey migrating a Bank-as-a-Service platform to a serverless architecture. Having returned to PHP after a hiatus, Louis shared his experience leveraging AWS serverless technologies to enhance scalability and reliability in a high-stakes financial environment. His narrative, rich with practical insights, illuminated the challenges and triumphs of modernizing critical systems. This post explores four key themes: the rationale for serverless, leveraging AWS tools, simplifying with Bref, and addressing migration challenges.

The Rationale for Serverless

Louis Pinsard opened by explaining the motivation behind adopting a serverless architecture for a Bank-as-a-Service platform at Theodo. Traditional server-based systems struggled with scalability and maintenance under the unpredictable demands of financial transactions. Serverless, with its pay-per-use model and automatic scaling, offered a solution to handle variable workloads efficiently. Louis highlighted how this approach reduced infrastructure management overhead, allowing his team to focus on business logic and deliver a robust, cost-effective platform.

Leveraging AWS Tools

A significant portion of Louis’s talk focused on the use of AWS services like Lambda and SQS to build a resilient system. He described how Lambda functions enabled event-driven processing, while SQS managed asynchronous message queues to handle transaction retries seamlessly. By integrating these tools, Louis’s team at Theodo ensured high availability and fault tolerance, critical for financial applications. His practical examples demonstrated how AWS’s native services simplified complex workflows, enhancing the platform’s performance and reliability.

Simplifying with Bref

Louis discussed the role of Bref, a PHP framework for serverless applications, in streamlining the migration process. While initially hesitant due to concerns about complexity, he found Bref to be a lightweight layer over AWS, making it nearly transparent for developers familiar with serverless concepts. Louis emphasized that Bref’s simplicity allowed his team to deploy PHP code efficiently, reducing the learning curve and enabling rapid development without sacrificing robustness, even in a demanding financial context.

Addressing Migration Challenges

Concluding his presentation, Louis addressed the challenges of migrating a legacy system to serverless, including team upskilling and managing dependencies. He shared how his team adopted AWS CloudFormation for infrastructure-as-code, simplifying deployments. Responding to an audience question, Louis noted that Bref’s minimal overhead made it a viable choice over native AWS SDKs for PHP developers. His insights underscored the importance of strategic planning and incremental adoption to ensure a smooth transition, offering valuable lessons for similar projects.


[NodeCongress2021] Introduction to the AWS CDK: Infrastructure as Node – Colin Ihrig

In the evolving landscape of cloud computing, developers increasingly seek tools that bridge the gap between application logic and underlying infrastructure. Colin Ihrig’s exploration of the AWS Cloud Development Kit (CDK) offers a compelling entry point into this domain, emphasizing how Node.js enthusiasts can harness familiar programming paradigms to orchestrate cloud resources seamlessly. By transforming abstract infrastructure concepts into executable code, the CDK empowers teams to move beyond cumbersome templates, fostering agility in deployment pipelines.

The CDK stands out as an AWS-centric framework for infrastructure as code, akin to established solutions like Terraform but tailored for those versed in high-level languages. Supporting JavaScript, TypeScript, Python, Java, and C#, it abstracts the intricacies of CloudFormation—the AWS service for defining and provisioning resources via JSON or YAML—into intuitive, object-oriented constructs. This abstraction not only simplifies the creation of scalable stacks but also preserves CloudFormation’s core advantages, such as consistent deployments and drift detection, where configurations are automatically reconciled with actual states.

Streamlining Cloud Architecture with Node.js Constructs

At its core, the CDK operates through a hierarchy of reusable building blocks called constructs, which encapsulate AWS services like S3 buckets, Lambda functions, or EC2 instances. Colin illustrates this with a straightforward Node.js example: instantiating a basic S3 bucket involves minimal lines of code, contrasting sharply with the verbose CloudFormation equivalents that often span pages. This approach leverages Node.js’s event-driven nature, allowing developers to define dependencies declaratively while integrating seamlessly with existing application codebases.
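
The talk’s example is easy to reconstruct; a minimal sketch in that spirit, assuming CDK v2’s aws-cdk-lib packaging (v1 used per-service packages such as @aws-cdk/aws-s3), looks like this:

const { App, Stack } = require('aws-cdk-lib');
const s3 = require('aws-cdk-lib/aws-s3');

const app = new App();
const stack = new Stack(app, 'DemoStack');

// One construct is all it takes; raw CloudFormation would need a full resource block.
new s3.Bucket(stack, 'MyBucket', {
  versioned: true, // illustrative option; any bucket property fits here
});

app.synth(); // writes the CloudFormation template to cdk.out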

One of the CDK’s strengths lies in its synthesis process, where high-level definitions compile into CloudFormation templates during the “synth” phase. This generated assembly includes not just templates but also ancillary artifacts, such as bundled Docker images for Lambda deployments. For Node.js practitioners, this means unit testing infrastructure alongside application logic—employing Jest for snapshot validation of synthesized outputs—without ever leaving the familiar ecosystem. Colin’s demonstration underscores how such integration reduces context-switching, enabling rapid iteration on cloud-native designs like serverless APIs or data pipelines.
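
A sketch of that testing pattern, again assuming CDK v2 and its assertions module, might look like:

const { App, Stack } = require('aws-cdk-lib');
const s3 = require('aws-cdk-lib/aws-s3');
const { Template } = require('aws-cdk-lib/assertions');

test('bucket stack still synthesizes to the reviewed template', () => {
  const app = new App();
  const stack = new Stack(app, 'DemoStack');
  new s3.Bucket(stack, 'MyBucket');
  // Synthesize to CloudFormation JSON and compare against the stored snapshot.
  expect(Template.fromStack(stack).toJSON()).toMatchSnapshot();
});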

Moreover, the CDK’s asset management handles local files and images destined for S3 or ECR, necessitating a one-time bootstrapping per environment. This setup deploys a dedicated toolkit stack, complete with storage buckets and IAM roles, ensuring secure asset uploads. While incurring nominal AWS charges, it streamlines workflows, as evidenced by Colin’s walkthrough of provisioning a static website: a few constructs deploy a public-read bucket, sync local assets, and expose the site via a custom domain—potentially augmented with Route 53 for DNS or CloudFront for edge caching.

Navigating Deployment Cycles and Best Practices

Deployment via the CDK CLI mirrors npm workflows, with commands like “cdk deploy” orchestrating updates intelligently, applying only deltas to minimize disruption. Colin highlights the CLI’s versatility—listing stacks with “cdk ls,” diffing changes via “cdk diff,” or injecting runtime context for dynamic configurations—positioning it as an extension of Node.js tooling. For cleanup, “cdk destroy” reverses provisions, though manual verification in the AWS console is advisable, given occasional bootstrap remnants.

Colin wraps by addressing adoption barriers, noting the CDK’s maturity since its 2019 general availability and arguing that vendor lock-in is a modest concern given AWS’s ubiquity among cloud-native developers. Drawing from a Cloud Native Computing Foundation survey, he points to JavaScript’s dominance in server-side environments and AWS’s 62% market share, arguing that the CDK aligns perfectly with Node.js’s ethos of unified tooling across frontend, backend, and operations.

Through these insights, Colin not only demystifies infrastructure provisioning but also inspires Node.js developers to embrace declarative coding for resilient, observable systems. Whether scaling monoliths to microservices or experimenting with ephemeral environments, the CDK emerges as a pivotal ally in modern cloud engineering.


[DevoxxFR 2018] Deploying Microservices on AWS: Compute Options Explored at Devoxx France 2018

At Devoxx France 2018, Arun Gupta and Tiffany Jernigan, both from Amazon Web Services (AWS), delivered a three-hour deep-dive session titled Compute options for Microservices on AWS. This hands-on tutorial explored deploying a microservices-based application using various AWS compute options: EC2, Amazon Elastic Container Service (ECS), AWS Fargate, Elastic Kubernetes Service (EKS), and AWS Lambda. Through a sample application with web app, greeting, and name microservices, they demonstrated local testing, deployment pipelines, service discovery, monitoring, and canary deployments. The session, rich with code demos, is available on YouTube, with code and slides on GitHub.

Microservices: Solving Business Problems

Arun Gupta opened by addressing the monolith vs. microservices debate, emphasizing that the choice depends on business needs. Microservices enable agility, frequent releases, and polyglot environments but introduce complexity. AWS simplifies this with managed services, allowing developers to focus on business logic. The demo application featured three microservices: a public-facing web app, and internal greeting and name services, communicating via REST endpoints. Built with WildFly Swarm, a Java EE-compliant server, the application produced a portable fat JAR, deployable as a container or Lambda function. The presenters highlighted service discovery, ensuring the web app could locate stateless instances of greeting and name services.

EC2: Full Control for Traditional Deployments

Amazon EC2 offers developers complete control over virtual machines, ideal for those needing to manage the full stack. The presenters deployed the microservices on EC2 instances, running WildFly Swarm JARs. Using Maven and a Docker profile, they generated container images, pushed to Docker Hub, and tested locally with Docker Compose. A docker stack deploy command spun up the services, accessible via curl localhost:8080, returning responses like “hello Sheldon.” EC2 requires manual scaling and cluster management, but its flexibility suits custom stacks. The GitHub repo includes configurations for EC2 deployments, showcasing integration with AWS services like CloudWatch for logging.

Amazon ECS: Orchestrating Containers

Amazon ECS simplifies container orchestration, managing scheduling and scaling. The presenters created an ECS cluster in the AWS Management Console, defining task definitions for the three microservices. Task definitions specified container images, CPU, and memory, with an Application Load Balancer (ALB) enabling path-based routing (e.g., /resources/greeting). Using the ECS CLI, they deployed services, ensuring high availability across multiple availability zones. CloudWatch integration provided metrics and logs, with alarms for monitoring. ECS reduces operational overhead compared to EC2, balancing control and automation. The session highlighted ECS’s deep integration with AWS services, streamlining production workloads.

AWS Fargate: Serverless Containers

Introduced at re:Invent 2017, AWS Fargate abstracts server management, allowing developers to focus on containers. The presenters deployed the same microservices using Fargate, specifying task definitions with AWS VPC networking for fine-grained security. The Fargate CLI, a GitHub project by AWS’s John Pignata, simplified setup, creating ALBs and task definitions automatically. A curl to the load balancer URL returned responses like “howdy Penny.” Fargate’s per-second billing and task-level resource allocation optimize costs. Available initially in US East (N. Virginia), Fargate suits developers prioritizing simplicity. The session emphasized its role in reducing infrastructure management.

Elastic Kubernetes Service (EKS): Kubernetes on AWS

EKS, in preview during the session, brings managed Kubernetes to AWS. The presenters deployed the microservices on an EKS cluster, using kubectl to manage pods and services. They introduced Istio, a service mesh, to handle traffic routing and observability. Istio’s sidecar containers enabled 50/50 traffic splits between “hello” and “howdy” versions of the greeting service, configured via YAML manifests. Chaos engineering was demonstrated by injecting 5-second delays in 10% of requests, testing resilience. AWS X-Ray, integrated via a daemon set, provided service maps and traces, identifying bottlenecks. EKS, later supporting Fargate, offers flexibility for Kubernetes users. The GitHub repo includes EKS manifests and Istio configurations.

AWS Lambda: Serverless Microservices

AWS Lambda enables serverless deployments, eliminating server management. The presenters repurposed the WildFly Swarm application for Lambda, using the Serverless Application Model (SAM). Each microservice became a Lambda function, fronted by API Gateway endpoints (e.g., /greeting). SAM templates defined functions, APIs, and DynamoDB tables, with sam local start-api testing endpoints locally via Dockerized Lambda runtimes. Responses like “howdy Sheldon” were verified with curl localhost:3000. SAM’s package and deploy commands uploaded functions to S3, while canary deployments shifted traffic (e.g., 10% to new versions) with CloudWatch alarms. Lambda’s per-second billing and 300-second execution limit suit event-driven workloads. The session showcased SAM’s integration with AWS services and the Serverless Application Repository.
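
As a flavor of what those SAM templates look like, here is a minimal sketch for one such function; the handler class, JAR path, and API path are illustrative assumptions, not the session’s actual template:

AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  GreetingFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: org.example.GreetingHandler::handleRequest  # hypothetical handler class
      Runtime: java8
      CodeUri: target/greeting.jar
      MemorySize: 512
      Events:
        GreetingApi:
          Type: Api
          Properties:
            Path: /greeting
            Method: get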

Deployment Pipelines: Automating with AWS CodePipeline

The presenters built a deployment pipeline using AWS CodePipeline, a managed service inspired by Amazon’s internal tooling. A GitHub push triggered the pipeline, which used CodeBuild to build Docker images, pushed them to Amazon Elastic Container Registry (ECR), and deployed to an ECS cluster. For Lambda, SAM templates were packaged and deployed. CloudFormation templates automated resource creation, including VPCs, subnets, and ALBs. The pipeline ensured immutable deployments with commit-based image tags, maintaining production stability. The GitHub repo provides CloudFormation scripts, enabling reproducible environments. This approach minimizes manual intervention, supporting rapid iteration.

Monitoring and Logging: AWS X-Ray and CloudWatch

Monitoring was a key focus, with AWS X-Ray providing end-to-end tracing. In ECS and EKS, X-Ray daemons collected traces, generating service maps showing web app, greeting, and name interactions. For Lambda, X-Ray was enabled natively via SAM templates. CloudWatch offered metrics (e.g., CPU usage) and logs, with alarms for thresholds. In EKS, Kubernetes tools like Prometheus and Grafana were mentioned, but X-Ray’s integration with AWS services was emphasized. The presenters demonstrated debugging Lambda functions locally using SAM CLI and IntelliJ, enhancing developer agility. These tools ensure observability, critical for distributed microservices.

Choosing the Right Compute Option

The session concluded by comparing compute options. EC2 offers maximum control but requires managing scaling and updates. ECS balances automation and flexibility, ideal for containerized workloads. Fargate eliminates server management, suiting simple deployments. EKS caters to Kubernetes users, with Istio enhancing observability. Lambda, best for event-driven microservices, minimizes operational overhead but has execution limits. Factors like team expertise, application requirements, and cost influence the choice. The presenters encouraged feedback via GitHub issues to shape AWS’s roadmap. Visit aws.amazon.com/containers for more.


Hashtags: #AWS #Microservices #ECS #Fargate #EKS #Lambda #DevoxxFR2018 #ArunGupta #TiffanyJernigan #CloudComputing