
Archive for the ‘en-US’ Category

[DevoxxFR 2018] Deploying Microservices on AWS: Compute Options Explored at Devoxx France 2018

At Devoxx France 2018, Arun Gupta and Tiffany Jernigan, both from Amazon Web Services (AWS), delivered a three-hour deep-dive session titled Compute options for Microservices on AWS. This hands-on tutorial explored deploying a microservices-based application using various AWS compute options: EC2, Amazon Elastic Container Service (ECS), AWS Fargate, Elastic Kubernetes Service (EKS), and AWS Lambda. Through a sample application with web app, greeting, and name microservices, they demonstrated local testing, deployment pipelines, service discovery, monitoring, and canary deployments. The session, rich with code demos, is available on YouTube, with code and slides on GitHub.

Microservices: Solving Business Problems

Arun Gupta opened by addressing the monolith vs. microservices debate, emphasizing that the choice depends on business needs. Microservices enable agility, frequent releases, and polyglot environments but introduce complexity. AWS simplifies this with managed services, allowing developers to focus on business logic. The demo application featured three microservices: a public-facing web app and internal greeting and name services, communicating via REST endpoints. Built with WildFly Swarm, which packages a Java EE application together with its runtime into a portable fat JAR, the application could be deployed as a container or as a Lambda function. The presenters highlighted service discovery, ensuring the web app could locate stateless instances of the greeting and name services.
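To make this concrete, a service like greeting could be exposed as a plain JAX-RS resource packaged by WildFly Swarm into a fat JAR. The sketch below is illustrative only; the class name and response are assumptions, not the presenters' actual code:

[java]
// Hypothetical JAX-RS resource for the greeting service; not the presenters' actual code.
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Path("/resources/greeting")
public class GreetingResource {

    // The web app calls this endpoint and combines the result with the name service.
    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public String greeting() {
        return "hello";
    }
}
[/java]

The web app would resolve the locations of the greeting and name services through service discovery and combine their responses into strings such as "hello Sheldon."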

EC2: Full Control for Traditional Deployments

Amazon EC2 offers developers complete control over virtual machines, ideal for those needing to manage the full stack. The presenters deployed the microservices on EC2 instances, running WildFly Swarm JARs. Using Maven and a Docker profile, they generated container images, pushed them to Docker Hub, and tested locally with Docker Compose. A docker stack deploy command spun up the services, accessible via curl localhost:8080, which returned responses like “hello Sheldon.” EC2 requires manual scaling and cluster management, but its flexibility suits custom stacks. The GitHub repo includes configurations for EC2 deployments, showcasing integration with AWS services like CloudWatch for logging.

Amazon ECS: Orchestrating Containers

Amazon ECS simplifies container orchestration, managing scheduling and scaling. The presenters created an ECS cluster in the AWS Management Console, defining task definitions for the three microservices. Task definitions specified container images, CPU, and memory, with an Application Load Balancer (ALB) enabling path-based routing (e.g., /resources/greeting). Using the ECS CLI, they deployed services, ensuring high availability across multiple availability zones. CloudWatch integration provided metrics and logs, with alarms for monitoring. ECS reduces operational overhead compared to EC2, balancing control and automation. The session highlighted ECS’s deep integration with AWS services, streamlining production workloads.

AWS Fargate: Serverless Containers

Introduced at re:Invent 2017, AWS Fargate abstracts server management, allowing developers to focus on containers. The presenters deployed the same microservices using Fargate, specifying task definitions with AWS VPC networking for fine-grained security. The Fargate CLI, a GitHub project by AWS’s John Pignata, simplified setup, creating ALBs and task definitions automatically. A curl to the load balancer URL returned responses like “howdy Penny.” Fargate’s per-second billing and task-level resource allocation optimize costs. Available initially in US East (N. Virginia), Fargate suits developers prioritizing simplicity. The session emphasized its role in reducing infrastructure management.

Elastic Kubernetes Service (EKS): Kubernetes on AWS

EKS, in preview during the session, brings managed Kubernetes to AWS. The presenters deployed the microservices on an EKS cluster, using kubectl to manage pods and services. They introduced Istio, a service mesh, to handle traffic routing and observability. Istio’s sidecar containers enabled 50/50 traffic splits between “hello” and “howdy” versions of the greeting service, configured via YAML manifests. Chaos engineering was demonstrated by injecting 5-second delays in 10% of requests, testing resilience. AWS X-Ray, integrated via a daemon set, provided service maps and traces, identifying bottlenecks. EKS, later supporting Fargate, offers flexibility for Kubernetes users. The GitHub repo includes EKS manifests and Istio configurations.

AWS Lambda: Serverless Microservices

AWS Lambda enables serverless deployments, eliminating server management. The presenters repurposed the WildFly Swarm application for Lambda, using the Serverless Application Model (SAM). Each microservice became a Lambda function, fronted by API Gateway endpoints (e.g., /greeting). SAM templates defined functions, APIs, and DynamoDB tables, with sam local start-api testing endpoints locally via Dockerized Lambda runtimes. Responses like “howdy Sheldon” were verified with curl localhost:3000. SAM’s package and deploy commands uploaded functions to S3, while canary deployments shifted traffic (e.g., 10% to new versions) with CloudWatch alarms. Lambda’s pay-per-use billing, metered in 100 ms increments, and its 300-second execution limit at the time suit event-driven workloads. The session showcased SAM’s integration with AWS services and the Serverless Application Repository.
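As a rough illustration of the function model (the session actually adapted the WildFly Swarm application through SAM rather than writing handlers like this), a minimal Java Lambda handler for a greeting endpoint could look as follows; the class name and response shape are assumptions:

[java]
// Hypothetical Lambda handler sketch; names and response shape are illustrative.
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import java.util.Map;

public class GreetingHandler implements RequestHandler<Map<String, Object>, String> {

    // Invoked for the /greeting route declared in the SAM template behind API Gateway.
    @Override
    public String handleRequest(Map<String, Object> input, Context context) {
        return "howdy";
    }
}
[/java]

With sam local start-api, a function like this can be exercised locally against the Dockerized Lambda runtime before packaging and deploying it.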

Deployment Pipelines: Automating with AWS CodePipeline

The presenters built a deployment pipeline using AWS CodePipeline, a managed service inspired by Amazon’s internal tooling. A GitHub push triggered the pipeline, which used AWS CodeBuild to build Docker images, pushed them to Amazon Elastic Container Registry (ECR), and deployed to an ECS cluster. For Lambda, SAM templates were packaged and deployed. CloudFormation templates automated resource creation, including VPCs, subnets, and ALBs. The pipeline ensured immutable deployments with commit-based image tags, maintaining production stability. The GitHub repo provides CloudFormation scripts, enabling reproducible environments. This approach minimizes manual intervention, supporting rapid iteration.

Monitoring and Logging: AWS X-Ray and CloudWatch

Monitoring was a key focus, with AWS X-Ray providing end-to-end tracing. In ECS and EKS, X-Ray daemons collected traces, generating service maps showing web app, greeting, and name interactions. For Lambda, X-Ray was enabled natively via SAM templates. CloudWatch offered metrics (e.g., CPU usage) and logs, with alarms for thresholds. In EKS, Kubernetes tools like Prometheus and Grafana were mentioned, but X-Ray’s integration with AWS services was emphasized. The presenters demonstrated debugging Lambda functions locally using SAM CLI and IntelliJ, enhancing developer agility. These tools ensure observability, critical for distributed microservices.
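For a sense of what application-side tracing involves, here is a minimal sketch using the AWS X-Ray SDK for Java to record a segment and a subsegment around a downstream call; in the demo the traces came from the X-Ray daemon and SDK integrations rather than hand-written code like this, and the names below are illustrative:

[java]
// Minimal X-Ray instrumentation sketch; segment and subsegment names are illustrative.
import com.amazonaws.xray.AWSXRay;

public class TracedGreetingClient {

    public String fetchGreeting() {
        AWSXRay.beginSegment("webapp");               // top-level trace segment for this unit of work
        try {
            AWSXRay.beginSubsegment("call-greeting"); // appears as an edge in the service map
            try {
                return invokeGreetingService();       // placeholder for the real REST call
            } finally {
                AWSXRay.endSubsegment();
            }
        } finally {
            AWSXRay.endSegment();                     // emits the trace to the local X-Ray daemon
        }
    }

    private String invokeGreetingService() {
        return "hello";
    }
}
[/java]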

Choosing the Right Compute Option

The session concluded by comparing compute options. EC2 offers maximum control but requires managing scaling and updates. ECS balances automation and flexibility, ideal for containerized workloads. Fargate eliminates server management, suiting simple deployments. EKS caters to Kubernetes users, with Istio enhancing observability. Lambda, best for event-driven microservices, minimizes operational overhead but has execution limits. Factors like team expertise, application requirements, and cost influence the choice. The presenters encouraged feedback via GitHub issues to shape AWS’s roadmap. Visit aws.amazon.com/containers for more.


Hashtags: #AWS #Microservices #ECS #Fargate #EKS #Lambda #DevoxxFR2018 #ArunGupta #TiffanyJernigan #CloudComputing

[DevoxxFR 2017] Introduction to the Philosophy of Artificial Intelligence

The rapid advancements and increasing integration of artificial intelligence into various aspects of our lives raise fundamental questions that extend beyond the purely technical realm into the domain of philosophy. As machines become capable of performing tasks that were once considered uniquely human, such as understanding language, recognizing patterns, and making decisions, we are prompted to reconsider our definitions of intelligence, consciousness, and even what it means to be human. At DevoxxFR 2017, Eric Lefevre Ardant and Sonia Ouchtar offered a thought-provoking introduction to the philosophy of artificial intelligence, exploring key concepts and thought experiments that challenge our understanding of machine intelligence and its potential implications.

Eric and Sonia began by acknowledging the pervasive presence of “AI” in contemporary discourse, noting that the term is often used broadly to encompass everything from simple algorithms to hypothetical future superintelligence. They stressed the importance of developing a critical perspective on these discussions and acquiring the vocabulary necessary to engage with the deeper philosophical questions surrounding AI. Their talk aimed to move beyond the hype and delve into the core questions that philosophers have grappled with as the possibility of machine intelligence has become more concrete.

The Turing Test: A Criterion for Machine Intelligence?

A central focus of the presentation was the Turing Test, proposed by Alan Turing in 1950 as a way to determine if a machine can exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. Eric and Sonia explained the setup of the test, which involves a human interrogator interacting with both a human and a machine through text-based conversations. If the interrogator cannot reliably distinguish the machine from the human after a series of conversations, the machine is said to have passed the Turing Test.

They discussed the principles behind the test, highlighting that it focuses on observable behavior (linguistic communication) rather than the internal workings of the machine. The Turing Test has been influential but also widely debated. Eric and Sonia presented some of the key criticisms of the test, such as the argument that simulating intelligent conversation does not necessarily imply true understanding or consciousness.

The Chinese Room Argument: Challenging the Turing Test

To further explore the limitations of the Turing Test and the complexities of defining machine intelligence, Eric and Sonia introduced John Searle’s Chinese Room argument, a famous thought experiment proposed in 1980. They described the scenario: a person who does not understand Chinese is locked in a room with a large set of Chinese symbols, a rulebook in English for manipulating these symbols, and incoming batches of Chinese symbols (representing questions). By following the rules in the rulebook, the person can produce outgoing batches of Chinese symbols (representing answers) that are appropriate responses to the incoming questions, making it appear to an outside observer that the person understands Chinese.

Sonia and Eric explained that Searle’s argument is that even if the person in the room can pass the Turing Test for understanding Chinese (by producing seemingly intelligent responses), they do not actually understand Chinese. They are simply manipulating symbols according to rules, without any genuine semantic understanding. The Chinese Room argument is a direct challenge to the idea that passing the Turing Test is a sufficient criterion for claiming a machine possesses true intelligence or understanding. It raises profound questions about the nature of understanding, consciousness, and whether symbolic manipulation alone can give rise to genuine cognitive states.

The talk concluded by emphasizing that the philosophy of AI is a fertile and ongoing area of inquiry with deep connections to various other disciplines, including neuroscience, psychology, linguistics, and computer science. Eric and Sonia encouraged attendees to continue exploring these philosophical questions, recognizing that understanding the fundamental nature of intelligence, both human and artificial, is crucial as we continue to develop increasingly capable machines. The session provided a valuable framework for critically evaluating claims about AI and engaging with the ethical and philosophical implications of artificial intelligence.

Hashtags: #AI #ArtificialIntelligence #Philosophy #TuringTest #ChineseRoom #MachineIntelligence #Consciousness #EricLefevreArdant #SoniaOUCHTAR #PhilosophyOfAI


[DevoxxFR] How to be a Tech Lead in an XXL Pizza Team Without Drowning

The role of a Tech Lead is multifaceted, requiring a blend of technical expertise, mentorship, and facilitation skills. However, these responsibilities become significantly more challenging when leading a large team, humorously dubbed an “XXL pizza team,” potentially comprising fifteen or more individuals, with a substantial number of developers. Damien Beaufils shared his valuable one-year retrospective on navigating this complex role within such a large and diverse team, offering practical insights on how to effectively lead, continue contributing technically, and avoid being overwhelmed.

Damien’s experience was rooted in leading a team working on a public-facing website, notable for its heterogeneity. The team was mixed in terms of skill sets, gender, and composition (combining client and vendor personnel), adding layers of complexity to the leadership challenge.

Balancing Technical Contribution and Leadership

A key tension for many Tech Leads is balancing hands-on coding with leadership duties. Damien addressed this directly, emphasizing that while staying connected to the code is important for credibility and understanding, the primary focus must shift towards enabling the team. He detailed practices put in place to foster collective ownership of the codebase and enhance overall product quality. These included encouraging pair programming, implementing robust code review processes, and establishing clear coding standards.

The goal was to distribute technical knowledge and responsibility across the team rather than concentrating it solely with the Tech Lead. By empowering team members to take ownership and contribute to quality initiatives, Damien found that the team’s overall capability and autonomy increased, allowing him to focus more on strategic technical guidance and facilitation.

Fostering Learning, Progression, and Autonomy

Damien highlighted several successful strategies employed to promote learning, progression, and autonomy within the XXL team. These successes were not achieved by acting as a “super-hero” dictating solutions but through deliberate efforts to facilitate growth. Initiatives such as organizing internal workshops, encouraging knowledge sharing sessions, and providing opportunities for developers to explore new technologies contributed to a culture of continuous learning.

He stressed the importance of the Tech Lead acting as a coach, guiding individuals and the team towards self-improvement and problem-solving. By fostering an environment where team members felt empowered to make technical decisions and learn from both successes and failures, Damien helped build a more resilient and autonomous team. This shift from relying on a single point of technical expertise to distributing knowledge and capability was crucial for managing the scale and diversity of the team effectively.

Challenges and Lessons Learned

Damien was also candid about the problems encountered and the strategies that proved less effective. Leading a large, mixed team inevitably presents communication challenges, potential conflicts, and the difficulty of ensuring consistent application of standards. He discussed the importance of clear communication channels, active listening, and addressing issues proactively.

One crucial lesson learned was the need for clearly defined, measurable indicators to track progress in areas like code quality, team efficiency, and technical debt reduction. Without objective metrics, it’s challenging to assess the effectiveness of implemented practices and demonstrate improvement. Damien concluded that while there’s no magic formula for leading an XXL team, a pragmatic approach focused on empowering the team, fostering a culture of continuous improvement, and using data to inform decisions is essential for success without becoming overwhelmed.

Hashtags: #TechLead #TeamManagement #SoftwareDevelopment #Leadership #DevOps #Agile #DamienBeaufils

[DevoxxFR 2017] Why Your Company Should Store All Its Code in a Single Repo

How an organization structures and manages its source code repositories is far more than a technical implementation detail; it is an architectural choice with wide-ranging implications for development workflow efficiency, team collaboration, the ease of code sharing and reuse, and the effectiveness of the software delivery pipeline, including Continuous Integration and deployment. The prevailing trend in recent years, amplified by microservices architectures and distributed teams, has been to organize code into numerous independent repositories (the multi-repo approach), typically one per application, service, or even library. Yet, as Thierry Abaléa highlighted in his concise, insightful, and provocative talk at DevoxxFR 2017, some of the most innovative and successful technology companies in the world, including Google, Facebook, and Twitter, maintain their vast codebases in a single, unified repository, a practice known as using a monorepo. This divergence between common industry practice and the approach of these leading companies prompted the central question of his presentation: what significant, perhaps non-obvious advantages drive these organizations to embrace a monorepo despite its perceived complexities, and are those benefits transferable to other organizations, regardless of their size, industry, or current architectural choices?

Thierry began by acknowledging the intuitive appeal of the multi-repo model, where the code layout naturally mirrors team structure or the decomposition of applications into independent services, and conceded that for very small projects or nascent organizations it can seem straightforward. He contrasted this with the monorepo approach favored by the tech giants, arguing that the initial simplicity of many small repositories erodes rapidly as the number of services, applications, libraries, and teams grows. Managing dependencies across dozens, hundreds, or even thousands of repositories, coordinating changes that span service boundaries, and keeping tooling, build processes, and deployment pipelines consistent across a fragmented codebase become increasingly time-consuming and error-prone in a large-scale multi-repo environment.

Unpacking the Compelling and Often Underestimated Advantages of the Monorepo

Thierry articulated several compelling and often underestimated benefits associated with adopting and effectively managing a monorepo strategy. A primary and perhaps the most impactful advantage is the unparalleled ease and efficiency of code sharing and reuse across different projects, applications, and teams within the organization. With all code residing in a single, unified place, developers can readily discover, access, and incorporate libraries, components, or utility functions developed by other teams elsewhere within the company without the friction of adding external dependencies or navigating multiple repositories. This inherent discoverability and accessibility fosters consistency in tooling and practices, reduces redundant effort spent on reinventing common functionalities, and actively promotes the creation and adoption of a shared internal ecosystem of high-quality, reusable code assets.

Furthermore, a monorepo can significantly enhance cross-team collaboration and dramatically facilitate large-scale refactorings and code modifications that span multiple components or services. Changes that affect several different parts of the system residing within the same repository can often be made atomically in a single commit, simplifying the process of coordinating complex updates across different parts of the system and fundamentally reducing the challenges associated with managing version compatibility issues and dependency hell that often plague multi-repo setups. Thierry also highlighted the simplification of dependency and version management; in a monorepo, there is typically a single, unified version of the entire codebase at any given point in time, eliminating the complexities and potential inconsistencies of tracking and synchronizing versions across numerous independent repositories. This unified view simplifies dependency upgrades and helps prevent conflicts arising from incompatible library versions. Finally, he argued that a monorepo inherently facilitates the implementation of a more effective and comprehensive cross-application Continuous Integration (CI) pipeline. Changes committed to the monorepo can easily trigger automated builds and tests for all affected downstream components, applications, and services within the same repository, enabling comprehensive testing of interactions and integrations between different parts of the system before changes are merged into the main development line. This leads to higher confidence in the overall stability and correctness of the entire system.

Addressing Practical Considerations, Challenges, and Potential Drawbacks

While making a strong and persuasive case for the advantages of a monorepo, Thierry also presented a balanced and realistic view by addressing the practical considerations, significant challenges, and potential drawbacks associated with this approach. He acknowledged that managing and scaling the underlying tooling (such as version control systems like Git or Mercurial, build systems like Bazel or Pants, and Continuous Integration infrastructure) to handle a massive monorepo containing millions of lines of code and potentially thousands of developers requires significant investment in infrastructure, tooling development, and specialized expertise. Companies like Google, Facebook, and Microsoft have had to develop highly sophisticated custom solutions or heavily adapt and extend existing open-source tools to manage their enormous repositories efficiently and maintain performance. Thierry noted that contributions from these leading companies back to open-source projects like Git and Mercurial are gradually making monorepo tooling more accessible and performant for other organizations.

He also pointed out that successfully adopting, implementing, and leveraging a monorepo effectively necessitates a strong and mature engineering culture characterized by high levels of transparency, trust, communication, and effective collaboration across different teams and organizational boundaries. If teams operate in silos with poor communication channels and a lack of awareness of work happening elsewhere in the codebase, a monorepo can potentially exacerbate issues related to unintentional breaking changes or conflicting work rather than helping to solve them. Thierry suggested that a full, immediate, “big bang” switch to a monorepo might not be feasible, practical, or advisable for all organizations. A phased or incremental approach, perhaps starting with new projects, consolidating code within a specific department or domain, or gradually migrating related services into a monorepo, could be a more manageable and lower-risk way to transition and build the necessary tooling, processes, and cultural practices over time. The talk provided a nuanced and well-rounded perspective, encouraging organizations to carefully consider the significant potential benefits of a monorepo for improving collaboration, code sharing, and CI efficiency, while being acutely mindful of the required investment in tooling, infrastructure, and, critically, the importance of fostering a collaborative and transparent engineering culture.

Hashtags: #Monorepo #CodeOrganization #EngineeringPractices #ThierryAbalea #SoftwareArchitecture #VersionControl #ContinuousIntegration #Collaboration #Google #Facebook #Twitter #DeveloperProductivity

[DevoxxFR] Kill Your Branches, Do Feature Toggles

For many software development teams, managing feature branches in version control can be a source of significant pain and delays, particularly when branches diverge over long periods, leading to complex and time-consuming merge conflicts. Morgan LEROI proposed an alternative strategy: minimize or eliminate long-lived feature branches in favor of using Feature Toggles. His presentation explored the concepts behind feature toggles, their benefits, and shared practical experience on how this approach can streamline development workflows and enable new capabilities like activating features on demand.

Morgan opened by illustrating the common frustration associated with merging branches that have diverged significantly, describing it as a “traumatic experience”. This pain point underscores the need for development practices that reduce the time code spends in isolation before being integrated.

Embracing Feature Toggles

Feature Toggles, also known as Feature Flags, are a technique that allows developers to enable or disable specific features in an application at runtime, without deploying new code. The core idea is to merge code frequently into the main development branch (e.g., main or master), even if features are not yet complete or ready for production release. The incomplete or experimental features are wrapped in toggles that can be controlled externally.

Morgan explained that this approach addresses the merge hell problem by ensuring code is integrated continuously in small increments, minimizing divergence. It also decouples deployment from release; code containing new features can be deployed to production disabled, and the feature can be “released” or activated later via the toggle when ready.
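A minimal sketch of the idea in Java, assuming a hypothetical toggle store backed by configuration (Morgan did not prescribe a specific library or flag name):

[java]
// Hypothetical toggle check; in practice the flag would come from configuration,
// a database, or a feature-flag service, and could be evaluated per user or segment.
import java.util.Map;

public class CheckoutPage {

    private final Map<String, Boolean> toggles;

    public CheckoutPage(Map<String, Boolean> toggles) {
        this.toggles = toggles;
    }

    public String render() {
        // The new flow ships to production disabled and is "released" later
        // simply by flipping the flag, without redeploying.
        if (toggles.getOrDefault("new-checkout", false)) {
            return renderNewCheckout();
        }
        return renderLegacyCheckout();
    }

    private String renderNewCheckout() { return "new checkout"; }
    private String renderLegacyCheckout() { return "legacy checkout"; }
}
[/java]

The same lookup can take a user or segment identifier as a parameter, which is what enables the per-user activation and A/B testing described below.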

Practical Benefits and Use Cases

Beyond simplifying merging, Feature Toggles offer several tangible benefits. Morgan highlighted their use by major industry players, including Amazon, demonstrating their effectiveness at scale. A key advantage is the ability to activate new features on demand, for specific user groups, or even for individual users. This enables phased rollouts, A/B testing, and easier rollback if a feature proves problematic.

Morgan detailed the application of feature toggles in A/B testing scenarios. By showing different versions of a feature (or the presence/absence of a feature) to different user segments, teams can collect metrics on user behavior and make data-driven decisions about which version is more effective. This allows for continuous experimentation and optimization based on real-world usage.

He suggested that even a simple boolean configuration toggle (if (featureIsEnabled) { ... }) can be a starting point. Morgan encouraged developers to consider feature toggles as a powerful tool for improving development flow, reducing merge pain, and gaining flexibility in releasing new functionality. He challenged attendees to reflect on whether their current branching strategy is serving them well and to consider experimenting with feature toggles. Morgan Leroi is a Staff Software Engineer at Algolia.

Hashtags: #FeatureToggles #BranchingStrategy #ContinuousDelivery #DevOps #SoftwareDevelopment #Agile #MorganLEROI #DevoxxFR2017

[DevoxxFR] An Ultrasonic Adventure!

In the quest for novel ways to enable communication between web pages running on different machines without relying on a central server, Hubert SABLONNIERE embarked on a truly unique and fascinating experiment: using ultrasonic sound emitted and received through web browsers. This adventurous project leveraged modern web audio capabilities to explore an unconventional method of initiating a peer-to-peer connection, pushing the boundaries of what’s possible with web technologies.

Hubert’s journey began with a seemingly simple question that led down an unexpected path. The idea was to use audible (or in this case, inaudible) sound as a signaling mechanism to bootstrap a WebRTC connection, a technology that allows direct browser-to-browser communication.

Signaling with Ultrasound

The core concept involved using the Web Audio API to generate audio signals at frequencies beyond the range of human hearing – ultrasound. These signals would carry encoded information, acting as a handshake or discovery mechanism. A web page on one machine would emit these ultrasonic signals through the computer’s speakers, and a web page on another nearby machine would attempt to detect and decode them using the Web Audio API and the computer’s microphone.

Once the two pages successfully exchanged the necessary information via ultrasound (such as network addresses or session descriptions), they could then establish a direct WebRTC connection for more robust and higher-bandwidth communication. Hubert’s experiment demonstrated the technical feasibility of this imaginative approach, turning computers into acoustic modems for web pages.

Experimentation and Learning

Hubert emphasized that the project was primarily an “adventure” and an excellent vehicle for learning, rather than necessarily a production-ready solution. Building this ultrasonic communication system provided invaluable hands-on experience with several cutting-edge web technologies, specifically the Web Audio API for generating and analyzing audio and the WebRTC API for peer-to-peer networking.

Personal projects like this, free from the constraints and requirements of production environments, offer a unique opportunity to explore unconventional ideas and deepen understanding of underlying technologies. Hubert shared that the experiment, developed over several nights, allowed him to rapidly learn and experiment with WebRTC and Web Audio in a practical context. While the real-world applicability of using ultrasound for web page communication might be limited by factors like ambient noise, distance, and device microphone/speaker capabilities, the project served as a powerful illustration of creative problem-solving and the potential for unexpected uses of web APIs. Hubert made the project code available on GitHub, encouraging others to explore this ultrasonic frontier and potentially build upon his adventurous experimentation.

Hashtags: #WebAudio #WebRTC #Ultrasound #Experimentation #JavaScript #HubertSablonniere #DevoxxFR2017

[DevoxxFR 2017] Terraform 101: Infrastructure as Code Made Simple

Manually provisioning and managing infrastructure – whether virtual machines, networks, or databases – can be a time-consuming and error-prone process. As applications and their underlying infrastructure become more complex, automating these tasks is essential for efficiency, repeatability, and scalability. Infrastructure as Code (IaC) addresses this by allowing developers and operations teams to define and manage infrastructure using configuration files, applying software development practices like version control, testing, and continuous integration. Terraform, an open-source IaC tool from HashiCorp, has gained significant popularity for its ability to provision infrastructure across various cloud providers and on-premises environments using a declarative language. At Devoxx France 2017, Yannick Lorenzati presented “Terraform 101”, introducing the fundamentals of Terraform and demonstrating how developers can use it to quickly and easily set up the infrastructure they need for development, testing, or demos. His talk provided a practical introduction to IaC with Terraform.

Traditional infrastructure management often involved manual configuration through web consoles or imperative scripts. This approach is prone to inconsistencies, difficult to scale, and lacks transparency and version control. IaC tools like Terraform allow users to define their infrastructure in configuration files using a declarative syntax, specifying the desired state of the environment. Terraform then figures out the necessary steps to achieve that state, automating the provisioning and management process.

Declarative Infrastructure with HashiCorp Configuration Language (HCL)

Yannick Lorenzati introduced the core concept of declarative IaC with Terraform. He would have explained that instead of writing scripts that describe how to set up infrastructure step-by-step (imperative approach), users define what the infrastructure should look like (declarative approach) using HashiCorp Configuration Language (HCL). HCL is a human-readable language designed for creating structured configuration files.

The presentation would have covered the basic building blocks of Terraform configurations written in HCL:

  • Providers: Terraform interacts with various cloud providers (AWS, Azure, Google Cloud, etc.) and other services through providers. Yannick showed how to configure a provider to interact with a specific cloud environment.
  • Resources: Resources are the fundamental units of infrastructure managed by Terraform, such as virtual machines, networks, storage buckets, or databases. He would have demonstrated how to define resources in HCL, specifying their type and desired properties.
  • Variables: Variables allow for parameterizing configurations, making them reusable and adaptable to different environments (development, staging, production). Yannick showed how to define and use variables to avoid hardcoding values in the configuration files.
  • Outputs: Outputs are used to expose important information about the provisioned infrastructure, such as IP addresses or connection strings, which can be used by other parts of an application or by other Terraform configurations.

Yannick Lorenzati emphasized how the declarative nature of HCL simplifies infrastructure management by focusing on the desired end state rather than the steps to get there. He showed how Terraform automatically determines the dependencies between resources and provisions them in the correct order.

Practical Demonstration: From Code to Cloud Infrastructure

The core of the “Terraform 101” talk was a live demonstration showing how a developer can use Terraform to provision infrastructure. Yannick Lorenzati would have guided the audience through writing a simple Terraform configuration file to create a basic infrastructure setup, perhaps including a virtual machine and a network configuration on a cloud provider like AWS (given the mention of AWS Route 53 data source in the transcript).

He would have demonstrated the key Terraform commands:

  • terraform init: Initializes the Terraform working directory and downloads the necessary provider plugins.
  • terraform plan: Generates an execution plan, showing what actions Terraform will take to achieve the desired state without actually making any changes. This step is crucial for reviewing the planned changes before applying them.
  • terraform apply: Executes the plan, provisioning or updating the infrastructure according to the configuration.
  • terraform destroy: Tears down all the infrastructure defined in the configuration, which is particularly useful for cleaning up environments after testing or demos (and saving costs, as mentioned in the transcript).

Yannick showed how Terraform outputs provide useful information after the infrastructure is provisioned. He might have also touched upon using data sources (like the AWS Route 53 data source mentioned) to fetch information about existing infrastructure to be used in the configuration.

The presentation highlighted how integrating Terraform with configuration management tools like Ansible (also mentioned in the description) allows for a complete IaC workflow, where Terraform provisions the infrastructure and Ansible configures the software on it.

Yannick Lorenzati’s “Terraform 101” at Devoxx France 2017 provided a clear and practical introduction to Infrastructure as Code using Terraform. By explaining the fundamental concepts, introducing the HCL syntax, and demonstrating the core workflow with live coding, he empowered developers to start automating their infrastructure provisioning. His talk effectively conveyed how Terraform can save time, improve consistency, and enable developers to quickly set up the environments they need, ultimately making them more productive.

Hashtags: #Terraform #IaC #InfrastructureAsCode #HashiCorp #DevOps #CloudComputing #Automation #YannickLorenzati

(long tweet) When ‘filter’ does not work with Primefaces’ datatable

Abstract

Sometimes, the filter function in Primefaces <p:datatable/> does not work when the field on which the filtering is performed is typed as an enum.

Explanation

Actually, in order to filter, Primefaces relies on a direct '=' comparison. The hack to fix this issue is to force Primefaces to compare on the enum name rather than by reference.

Quick fix

In the enum class, add the following block:

[java]public String getName(){ return name(); }[/java]
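For context, here is a minimal sketch of the enum with this accessor in place (the enum type matches the castorWorkflowStatus field used below; the constant names are hypothetical):

[java]
// Sketch of the enum; only the getName() accessor matters for the fix.
public enum CastorWorkflowStatus {
    INITIATED, IN_PROGRESS, CLOSED; // hypothetical constants

    // Exposes name() as a JavaBean-style property, so EL expressions such as
    // #{castor.castorWorkflowStatus.name} resolve to a plain String.
    public String getName() {
        return name();
    }
}
[/java]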

Make the datatable declaration look like this:

[xml]<p:dataTable id="castorsDT" var="castor" value="#{managedCastorListManagedBean.initiatedCastors}" widgetVar="castorsTable" filteredValue="#{managedCastorListManagedBean.filteredCastors}">
[/xml]

Declare the enum-filtered column like this:

[xml]<p:column sortBy="#{castor.castorWorkflowStatus}" filterable="true" filterBy="#{castor.castorWorkflowStatus.name}" filterMatchMode="in">
    <f:facet name="filter">
        <p:selectCheckboxMenu label="#{messages['status']}" onchange="PF('castorsTable').filter()">
            <f:selectItems value="#{transverseManagedBean.allCastorWorkflowStatuses}" var="cws" itemLabel="#{cws.name}" itemValue="#{cws.name}"/>
        </p:selectCheckboxMenu>
    </f:facet>
</p:column>[/xml]

Notice how the filtering attribute is declared:

[xml]filterable="true" filterBy="#{castor.castorWorkflowStatus.name}" filterMatchMode="in"[/xml]

In other terms, the comparison is forced to rely on the equals() method of class String, through the calls to getName() and name().

Jonathan LALOU recommends… Stephane TORTAJADA

I wrote the following recommendation on Stephane TORTAJADA's profile on LinkedIn:

I reported to Stephane for one year. I recommend Stephane for his management, which is based on a few principles:
* personally understand technical and functional problems
* allow teammates to make errors, in order to learn from them
* protect teammates from external aggressions and impediments
* escalate the relevant information both up and down

Jonathan LALOU recommends… Ahmed CHAARI

I wrote the following recommendation on Ahmed CHAARI's profile on LinkedIn:

Ahmed has shown his ability to absorb and learn complex technologies, as well as improve his skills in a short time and a not-so-easy environment. This is why I consider Ahmed as a software engineer to recommend for any Java-focused team.