[DevoxxFR 2018] Deploying Microservices on AWS: Compute Options Explored at Devoxx France 2018
At Devoxx France 2018, Arun Gupta and Tiffany Jernigan, both from Amazon Web Services (AWS), delivered a three-hour deep-dive session titled Compute options for Microservices on AWS. This hands-on tutorial explored deploying a microservices-based application using various AWS compute options: EC2, Amazon Elastic Container Service (ECS), AWS Fargate, Elastic Kubernetes Service (EKS), and AWS Lambda. Through a sample application with web app, greeting, and name microservices, they demonstrated local testing, deployment pipelines, service discovery, monitoring, and canary deployments. The session, rich with code demos, is available on YouTube, with code and slides on GitHub.
Microservices: Solving Business Problems
Arun Gupta opened by addressing the monolith vs. microservices debate, emphasizing that the choice depends on business needs. Microservices enable agility, frequent releases, and polyglot environments but introduce complexity. AWS simplifies this with managed services, allowing developers to focus on business logic. The demo application featured three microservices: a public-facing web app, and internal greeting and name services, communicating via REST endpoints. Built with WildFly Swarm, which packages Java EE applications as portable fat JARs, the application could be deployed either as a container or as a Lambda function. The presenters highlighted service discovery, ensuring the web app could locate stateless instances of the greeting and name services.
EC2: Full Control for Traditional Deployments
Amazon EC2 offers developers complete control over virtual machines, ideal for those needing to manage the full stack. The presenters deployed the microservices on EC2 instances, running WildFly Swarm JARs. Using Maven and a Docker profile, they generated container images, pushed them to Docker Hub, and tested locally with Docker Compose. A docker stack deploy command spun up the services, accessible via curl localhost:8080, returning responses like “hello Sheldon.” EC2 requires manual scaling and cluster management, but its flexibility suits custom stacks. The GitHub repo includes configurations for EC2 deployments, showcasing integration with AWS services like CloudWatch for logging.
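The local test setup described above could be expressed in a Compose file along these lines. This is a minimal sketch: the image names, service names, and environment variables are illustrative, not taken from the talk's repository.

```yaml
version: "3"            # version 3 is required for `docker stack deploy`
services:
  webapp:
    image: myorg/webapp:latest        # hypothetical image name
    ports:
      - "8080:8080"                   # exposed for `curl localhost:8080`
    environment:
      - GREETING_SERVICE_HOST=greeting  # service discovery via Compose DNS names
      - NAME_SERVICE_HOST=name
  greeting:
    image: myorg/greeting:latest      # hypothetical image name
  name:
    image: myorg/name:latest          # hypothetical image name
```

With a file like this, `docker stack deploy --compose-file docker-compose.yml myapp` would start all three services on a local Swarm, after which the web app can be exercised with curl as in the demo.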
Amazon ECS: Orchestrating Containers
Amazon ECS simplifies container orchestration, managing scheduling and scaling. The presenters created an ECS cluster in the AWS Management Console, defining task definitions for the three microservices. Task definitions specified container images, CPU, and memory, with an Application Load Balancer (ALB) enabling path-based routing (e.g., /resources/greeting). Using the ECS CLI, they deployed services, ensuring high availability across multiple availability zones. CloudWatch integration provided metrics and logs, with alarms for monitoring. ECS reduces operational overhead compared to EC2, balancing control and automation. The session highlighted ECS’s deep integration with AWS services, streamlining production workloads.
AWS Fargate: Serverless Containers
Introduced at re:Invent 2017, AWS Fargate abstracts server management, allowing developers to focus on containers. The presenters deployed the same microservices using Fargate, specifying task definitions with AWS VPC networking for fine-grained security. The Fargate CLI, a GitHub project by AWS’s John Pignata, simplified setup, creating ALBs and task definitions automatically. A curl to the load balancer URL returned responses like “howdy Penny.” Fargate’s per-second billing and task-level resource allocation optimize costs. Available initially in US East (N. Virginia), Fargate suits developers prioritizing simplicity. The session emphasized its role in reducing infrastructure management.
Elastic Kubernetes Service (EKS): Kubernetes on AWS
EKS, in preview during the session, brings managed Kubernetes to AWS. The presenters deployed the microservices on an EKS cluster, using kubectl to manage pods and services. They introduced Istio, a service mesh, to handle traffic routing and observability. Istio’s sidecar containers enabled 50/50 traffic splits between “hello” and “howdy” versions of the greeting service, configured via YAML manifests. Chaos engineering was demonstrated by injecting 5-second delays into 10% of requests, testing resilience. AWS X-Ray, integrated via a daemon set, provided service maps and traces, identifying bottlenecks. EKS, which later added Fargate support, offers flexibility for Kubernetes users. The GitHub repo includes EKS manifests and Istio configurations.
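An Istio manifest combining both demos above (the 50/50 split and the injected delay) might look roughly like this. The sketch uses the current v1alpha3 VirtualService schema; the host and subset names are illustrative, not copied from the session's repository.

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: greeting
spec:
  hosts:
    - greeting
  http:
    - fault:
        delay:
          percentage:
            value: 10.0        # inject the delay into 10% of requests
          fixedDelay: 5s       # 5-second delay, as in the chaos demo
      route:
        - destination:
            host: greeting
            subset: hello      # "hello" version of the service
          weight: 50
        - destination:
            host: greeting
            subset: howdy      # "howdy" version of the service
          weight: 50
```

The subsets themselves would be defined in a companion DestinationRule keyed on pod labels, which is how Istio distinguishes the two deployed versions.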
AWS Lambda: Serverless Microservices
AWS Lambda enables serverless deployments, eliminating server management. The presenters repurposed the WildFly Swarm application for Lambda, using the Serverless Application Model (SAM). Each microservice became a Lambda function, fronted by API Gateway endpoints (e.g., /greeting). SAM templates defined functions, APIs, and DynamoDB tables, with sam local start-api testing endpoints locally via Dockerized Lambda runtimes. Responses like “howdy Sheldon” were verified with curl localhost:3000. SAM’s package and deploy commands uploaded functions to S3, while canary deployments shifted traffic (e.g., 10% to new versions) with CloudWatch alarms. Lambda’s fine-grained billing (per 100 ms at the time) and then-current 300-second execution limit suit event-driven workloads. The session showcased SAM’s integration with AWS services and the Serverless Application Repository.
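A SAM template for one such function might look roughly like the following minimal sketch. The handler class and logical resource names are hypothetical; only the overall shape (a Serverless::Function with an API event) reflects what the session describes.

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31   # enables the SAM resource types
Resources:
  GreetingFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: com.example.GreetingHandler::handleRequest  # hypothetical handler class
      Runtime: java8
      Timeout: 300                       # the execution ceiling at the time
      Events:
        GreetingApi:
          Type: Api                      # implicit API Gateway endpoint
          Properties:
            Path: /greeting
            Method: get
```

Running sam local start-api against a template like this serves the endpoint on localhost:3000, matching the curl-based local verification shown in the talk.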
Deployment Pipelines: Automating with AWS CodePipeline
The presenters built a deployment pipeline using AWS CodePipeline, a managed service inspired by Amazon’s internal tooling. A GitHub push triggered the pipeline, which used CodeBuild to build Docker images, pushed them to Amazon Elastic Container Registry (ECR), and deployed to an ECS cluster. For Lambda, SAM templates were packaged and deployed. CloudFormation templates automated resource creation, including VPCs, subnets, and ALBs. The pipeline ensured immutable deployments with commit-based image tags, maintaining production stability. The GitHub repo provides CloudFormation scripts, enabling reproducible environments. This approach minimizes manual intervention, supporting rapid iteration.
Monitoring and Logging: AWS X-Ray and CloudWatch
Monitoring was a key focus, with AWS X-Ray providing end-to-end tracing. In ECS and EKS, X-Ray daemons collected traces, generating service maps showing web app, greeting, and name interactions. For Lambda, X-Ray was enabled natively via SAM templates. CloudWatch offered metrics (e.g., CPU usage) and logs, with alarms for thresholds. In EKS, Kubernetes tools like Prometheus and Grafana were mentioned, but X-Ray’s integration with AWS services was emphasized. The presenters demonstrated debugging Lambda functions locally using SAM CLI and IntelliJ, enhancing developer agility. These tools ensure observability, critical for distributed microservices.
Choosing the Right Compute Option
The session concluded by comparing compute options. EC2 offers maximum control but requires managing scaling and updates. ECS balances automation and flexibility, ideal for containerized workloads. Fargate eliminates server management, suiting simple deployments. EKS caters to Kubernetes users, with Istio enhancing observability. Lambda, best for event-driven microservices, minimizes operational overhead but has execution limits. Factors like team expertise, application requirements, and cost influence the choice. The presenters encouraged feedback via GitHub issues to shape AWS’s roadmap. Visit aws.amazon.com/containers for more.
Links
Hashtags: #AWS #Microservices #ECS #Fargate #EKS #Lambda #DevoxxFR2018 #ArunGupta #TiffanyJernigan #CloudComputing
[KotlinConf2017] Bootiful Kotlin
Lecturer
Josh Long is the Spring Developer Advocate at Pivotal, a leading figure in the Java ecosystem, and a Java Champion. Author of five books, including Cloud Native Java, and three best-selling video trainings, Josh is a prolific open-source contributor to projects like Spring Boot, Spring Integration, and Spring Cloud. A passionate advocate for Kotlin, he collaborates with the Spring and Kotlin teams to enhance their integration, promoting productive, modern development practices for JVM-based applications.
Abstract
Spring Boot’s convention-over-configuration approach revolutionizes JVM application development, and its integration with Kotlin enhances developer productivity. This article analyzes Josh Long’s presentation at KotlinConf 2017, which explores the synergy between Spring Boot and Kotlin for building robust, production-ready applications. It examines the context of Spring’s evolution, the methodology of leveraging Kotlin’s features with Spring Boot, key integrations like DSLs and reactive programming, and the implications for rapid, safe development. Josh’s insights highlight how Kotlin elevates Spring Boot’s elegance, streamlining modern application development.
Context of Spring Boot and Kotlin Integration
At KotlinConf 2017, Josh Long presented the integration of Spring Boot and Kotlin as a transformative approach to JVM development. Spring Boot, developed by Pivotal, layers sensible defaults over Spring’s flexibility, addressing functional and non-functional requirements for production-ready applications. Kotlin’s rise as a concise, type-safe language, endorsed by Google for Android in 2017, aligned perfectly with Spring Boot’s goals of reducing boilerplate and enhancing developer experience. Josh, a Spring advocate and Kotlin enthusiast, showcased how their collaboration creates a seamless, elegant development process.
The context of Josh’s talk reflects the growing demand for efficient, scalable frameworks in enterprise and cloud-native applications. Spring Boot’s ability to handle microservices, REST APIs, and reactive systems made it a popular choice, but its Java-centric syntax could be verbose. Kotlin’s concise syntax and modern features, such as null safety and extension functions, complement Spring Boot, reducing complexity and enhancing readability. Josh’s presentation aimed to demonstrate this synergy, appealing to developers seeking to accelerate development while maintaining robustness.
Methodology of Spring Boot with Kotlin
Josh’s methodology focused on integrating Kotlin’s features with Spring Boot to streamline application development. He demonstrated using Kotlin’s concise syntax to define Spring components, such as REST controllers and beans, reducing boilerplate compared to Java. For example, Kotlin’s data classes simplify entity definitions, automatically providing getters, setters, and toString methods, which align with Spring Boot’s convention-driven approach. Josh showcased live examples of building REST APIs, where Kotlin’s null safety ensures robust handling of optional parameters.
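The points above about data classes and null safety can be made concrete with a short, self-contained sketch. The class and function names are illustrative, not taken from Josh's demo; the Spring annotations are omitted so the snippet stands alone.

```kotlin
// A data class gives equals/hashCode/toString (and copy) for free --
// the boilerplate a Java entity would spell out by hand.
data class Greeting(val id: Long, val name: String?)

// Null safety in action: the nullable type forces the caller to handle
// a missing name, here via the Elvis operator supplying a default.
fun greet(g: Greeting): String = "Hello, ${g.name ?: "world"}!"

fun main() {
    println(greet(Greeting(1, "Josh")))   // Hello, Josh!
    println(greet(Greeting(2, null)))     // Hello, world!
}
```

In a Spring Boot controller, the same nullable parameter pattern handles optional request parameters without any risk of a NullPointerException slipping through.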
A key innovation was the use of Kotlin’s DSLs for Spring Boot configurations, such as routing for REST endpoints. These DSLs provide a declarative syntax, allowing developers to define routes and handlers in a single, readable block, with IDE auto-completion enhancing productivity. Josh also highlighted Kotlin’s support for reactive programming with Spring WebFlux, enabling non-blocking, scalable applications. This methodology leverages Kotlin’s interoperability with Java, ensuring seamless integration with Spring’s ecosystem while enhancing developer experience.
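The declarative routing style described above rests on Kotlin's type-safe builders. The following self-contained sketch imitates the shape of such a DSL without depending on Spring; all names (Router, router, handle) are hypothetical stand-ins, not Spring's actual API.

```kotlin
// A tiny routing DSL in the spirit of Spring's router builder.
// Routes are registered declaratively inside a single block.
class Router {
    private val routes = mutableMapOf<String, (String) -> String>()
    fun GET(path: String, handler: (String) -> String) {
        routes["GET $path"] = handler
    }
    fun handle(method: String, path: String, body: String = ""): String =
        routes["$method $path"]?.invoke(body) ?: "404"
}

// The builder function: the lambda-with-receiver is what gives the DSL
// its block syntax and IDE auto-completion inside the braces.
fun router(init: Router.() -> Unit): Router = Router().apply(init)

fun main() {
    val app = router {
        GET("/hello") { "Hello, Kotlin!" }
        GET("/greeting") { "howdy" }
    }
    println(app.handle("GET", "/hello"))     // Hello, Kotlin!
    println(app.handle("GET", "/missing"))   // 404
}
```

The lambda-with-receiver (`Router.() -> Unit`) is the core trick: inside the block, `this` is the Router, so route definitions read as declarations rather than method calls.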
Key Integrations and Features
Josh emphasized several key integrations that make Spring Boot and Kotlin a powerful combination. Kotlin’s DSLs for Spring Integration and Spring Cloud Gateway simplify the configuration of message-driven and API gateway systems, respectively. These DSLs consolidate routing logic into concise, expressive code, reducing errors and improving maintainability. For example, Josh demonstrated a gateway configuration where routes and handlers were defined in a single Kotlin DSL, leveraging the IDE’s auto-completion to ensure correctness.
Reactive programming was another focal point, with Kotlin’s coroutines integrating seamlessly with Spring WebFlux to handle asynchronous, high-throughput workloads. Josh showcased how coroutines simplify reactive code, making it more readable than Java’s callback-based alternatives. Additionally, Kotlin’s extension functions enhance Spring’s APIs, allowing developers to add custom behavior without modifying core classes. These integrations highlight Kotlin’s ability to elevate Spring Boot’s functionality, making it ideal for modern, cloud-native applications.
Implications for Application Development
The integration of Spring Boot and Kotlin, as presented by Josh, has profound implications for JVM development. By combining Spring Boot’s rapid development capabilities with Kotlin’s concise, safe syntax, developers can build production-ready applications faster and with fewer errors. The use of DSLs and reactive programming supports scalable, cloud-native architectures, critical for microservices and high-traffic systems. This synergy is particularly valuable for enterprises adopting Spring for backend services, where Kotlin’s features reduce development time and maintenance costs.
For the broader ecosystem, Josh’s presentation underscores the collaborative efforts between the Spring and Kotlin teams, ensuring a first-class experience for developers. The emphasis on community engagement, through Q&A and references to related talks, fosters a collaborative environment for refining these integrations. As Kotlin gains traction in server-side development, its partnership with Spring Boot positions it as a leading choice for building robust, modern applications, challenging Java’s dominance while leveraging its ecosystem.
Conclusion
Josh Long’s presentation at KotlinConf 2017 highlighted the transformative synergy between Spring Boot and Kotlin, combining rapid development with elegant, type-safe code. The methodology’s focus on DSLs, reactive programming, and seamless integration showcases Kotlin’s ability to enhance Spring Boot’s productivity and scalability. By addressing modern development needs, from REST APIs to cloud-native systems, this integration empowers developers to build robust applications efficiently. As Spring and Kotlin continue to evolve, their partnership promises to shape the future of JVM development, fostering innovation and developer satisfaction.
Links
[KotlinConf2017] My Transition from Swift to Kotlin
Lecturer
Hector Matos is a senior iOS developer at Twitter, with extensive experience in Swift and a growing expertise in Kotlin for Android development. Raised in Texas, Hector maintains a technical blog at KrakenDev.io, attracting nearly 10,000 weekly views, and has spoken internationally on iOS and Swift across three continents. His passion for mobile UI/UX drives his work on high-quality applications, and his transition from Swift to Kotlin reflects his commitment to exploring cross-platform development solutions.
Abstract
The similarities between Swift and Kotlin offer a unique opportunity to unify mobile development communities. This article analyzes Hector Matos’s presentation at KotlinConf 2017, which details his transition from Swift to Kotlin and compares their features. It explores the context of cross-platform mobile development, the methodology of comparing language constructs, key differences in exception handling and extensions, and the implications for fostering collaboration between iOS and Android developers. Hector’s insights highlight Kotlin’s potential to bridge divides, enhancing productivity across mobile ecosystems.
Context of Cross-Platform Mobile Development
At KotlinConf 2017, Hector Matos shared his journey from being a dedicated Swift developer to embracing Kotlin, challenging his initial perception of Android as “the dark side.” As a senior iOS developer at Twitter, Hector’s expertise in Swift, a language designed for iOS, provided a strong foundation for evaluating Kotlin’s capabilities in Android development. The context of his talk reflects the growing need for cross-platform solutions in mobile development, where developers seek to leverage skills across iOS and Android to reduce fragmentation and improve efficiency.
Kotlin’s rise, particularly after Google’s 2017 endorsement for Android, positioned it as a counterpart to Swift, with both languages emphasizing type safety and modern syntax. Hector’s presentation aimed to bridge the divide between these communities, highlighting similarities that enable developers to transition seamlessly while addressing differences that impact development workflows. His personal narrative, rooted in a passion for UI/UX, underscored the potential for Kotlin and Swift to unify mobile development practices, fostering collaboration in a divided ecosystem.
Methodology of Language Comparison
Hector’s methodology involved a detailed comparison of Swift and Kotlin, focusing on their shared strengths and distinct features. Both languages offer type-safe, concise syntax, reducing boilerplate and enhancing readability. Hector demonstrated how Kotlin’s interfaces with default implementations mirror Swift’s protocol extensions, allowing developers to provide default behavior for functions. For example, Kotlin enables defining function bodies within interface declarations, similar to Swift’s ability to extend protocols, streamlining code reuse and modularity.
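The parallel between Kotlin interfaces and Swift protocol extensions can be shown in a few lines. This is an illustrative sketch, not code from Hector's slides.

```kotlin
// A Kotlin interface may carry default method bodies, much like a Swift
// protocol extension: implementers get describe() for free.
interface Describable {
    val name: String
    fun describe(): String = "I am $name"   // default implementation
}

// The class only supplies the abstract property; the behavior is inherited.
class Device(override val name: String) : Describable

fun main() {
    println(Device("Pixel").describe())   // I am Pixel
}
```

The Swift equivalent would declare a protocol and add the default body in an `extension Describable { }` block; the outcome (shared default behavior, overridable per type) is the same.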
He also explored structural similarities, such as both languages’ support for functional programming constructs like map and filter. Hector’s approach included live examples, showcasing how common tasks, such as data transformations, are implemented similarly in both languages. By comparing code snippets, he illustrated how developers familiar with Swift can quickly adapt to Kotlin, leveraging familiar paradigms to build Android applications with minimal learning overhead.
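A data transformation of the kind described reads nearly identically in both languages. The sketch below shows the Kotlin side, with the Swift counterpart in a comment for comparison; the function and names are illustrative.

```kotlin
// Swift:  names.filter { $0.count > 3 }.map { $0.uppercased() }
// Kotlin, same pipeline:
fun shout(names: List<String>): List<String> =
    names.filter { it.length > 3 }   // keep names longer than 3 characters
         .map { it.uppercase() }     // transform each survivor

fun main() {
    println(shout(listOf("Ann", "Hector", "Bo", "Kotlin")))  // [HECTOR, KOTLIN]
}
```

The only real differences are surface syntax (`$0` vs `it`, `count` vs `length`), which is precisely the point Hector makes about transfer of skills.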
Key Differences and Exception Handling
Despite their similarities, Hector highlighted critical differences, particularly in error handling. Swift treats error handling as a first-class language feature, using a do-try-catch construct that allows multiple try statements within a single block, enabling fine-grained error handling without nested blocks. Kotlin, inheriting Java’s approach, relies on traditional try-catch blocks, which Hector noted can feel less elegant due to potential nesting. This difference impacts developer experience, with Swift offering a more streamlined approach for handling errors in complex workflows.
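The granularity difference can be illustrated with a short Kotlin sketch (names hypothetical): two fallible calls must either share one catch block, losing the information of which call failed, or be nested, whereas Swift marks each fallible expression with its own `try` inside one `do` block.

```kotlin
// Kotlin inherits Java-style try/catch: both conversions share one handler.
fun parsePair(a: String, b: String): Pair<Int, Int>? =
    try {
        val x = a.toInt()   // either call may throw NumberFormatException
        val y = b.toInt()
        x to y
    } catch (e: NumberFormatException) {
        null                // can't tell here which of the two calls failed
    }

fun main() {
    println(parsePair("1", "2"))    // (1, 2)
    println(parsePair("1", "oops")) // null
}
```

Recovering per-call error context in Kotlin means either nesting try blocks or wrapping each call, which is the extra ceremony Hector contrasts with Swift's flat do-try-catch.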
Another distinction lies in Kotlin’s extensions, which are declared as standalone top-level functions rather than grouped inside a braced extension block as Swift requires. This syntactic difference enhances readability in Kotlin, allowing developers to organize extensions cleanly across files. Hector’s analysis emphasized that while both languages achieve similar outcomes, these differences influence code organization and error management strategies, requiring developers to adapt their mental models when transitioning between platforms.
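A Kotlin extension function is simply a top-level function with a receiver type, as in this illustrative sketch (the function name is hypothetical):

```kotlin
// No enclosing `extension String { }` block as Swift would require --
// the receiver type before the dot is the whole declaration.
fun String.initials(): String =
    split(" ").mapNotNull { it.firstOrNull()?.uppercase() }.joinToString("")

fun main() {
    println("hector matos".initials())   // HM
}
```

Because the declaration is free-standing, extensions can live in any file and be imported where needed, which is the organizational flexibility the paragraph above refers to.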
Implications for Mobile Development
Hector’s presentation has significant implications for mobile development, particularly in fostering collaboration between iOS and Android communities. By highlighting Swift and Kotlin’s similarities, he demonstrated that developers can leverage existing skills to work across platforms, reducing the learning curve and enabling cross-platform projects. This unification is critical for companies like Twitter, where consistent UI/UX across iOS and Android is paramount, and Kotlin’s interoperability with Java ensures seamless integration with existing Android ecosystems.
The broader implication is the potential for a unified mobile development culture. Hector’s call for community engagement, evidenced by his interactive Q&A, encourages developers to share knowledge and explore both languages. As Kotlin and Swift continue to evolve, their shared design philosophies could lead to standardized tools and practices, enhancing productivity and reducing fragmentation. For developers, this transition opens opportunities to work on diverse projects, while for the industry, it promotes innovation in mobile application development.
Conclusion
Hector Matos’s presentation at KotlinConf 2017 offered a compelling case for bridging the Swift and Kotlin communities through their shared strengths. By comparing their syntax, exception handling, and extension mechanisms, Hector illuminated Kotlin’s potential to attract Swift developers to Android. The methodology’s focus on practical examples and community engagement underscores the feasibility of cross-platform expertise. As mobile development demands increase, Hector’s insights pave the way for a unified approach, leveraging Kotlin’s and Swift’s modern features to create robust, user-focused applications.
Links
[KotlinConf2017] Building Languages Using Kotlin
Lecturer
Federico Tomassetti is an independent software architect specializing in language engineering, with expertise in designing languages, parsers, compilers, and editors. Holding a Ph.D., Federico has worked across Europe for companies like TripAdvisor and Groupon, and now collaborates remotely with global organizations. His focus on Domain Specific Languages (DSLs) and language tooling leverages Kotlin’s capabilities to streamline development, making him a leading figure in creating accessible, pragmatic programming languages.
Abstract
Domain Specific Languages (DSLs) enhance developer productivity by providing tailored syntax for specific problem domains. This article analyzes Federico Tomassetti’s presentation at KotlinConf 2017, which explores building DSLs using Kotlin’s concise syntax and metaprogramming capabilities. It examines the context of language engineering, the methodology for creating DSLs, the role of tools like ANTLR, and the implications for making language development economically viable. Federico’s pragmatic approach demonstrates how Kotlin reduces the complexity of building languages, enabling developers to create efficient, domain-focused tools with practical applications.
Context of Language Engineering
At KotlinConf 2017, held in San Francisco from November 1–3, 2017, Federico Tomassetti addressed the growing importance of language engineering in software development. Languages, as tools that shape productivity, require ecosystems of compilers, editors, and parsers, traditionally demanding significant effort to develop. Kotlin’s emergence as a concise, interoperable language for the JVM offered a new opportunity to streamline this process. Federico, a language engineer with experience at major companies, highlighted how Kotlin’s features make DSL development accessible, even for smaller projects where resource constraints previously limited such endeavors.
The context of Federico’s presentation reflects the shift toward specialized languages that address specific domains, such as financial modeling or configuration management. DSLs simplify complex tasks by providing intuitive syntax, but their development was historically costly. Kotlin’s metaprogramming and type-safe features reduce this barrier, enabling developers to create tailored languages efficiently. Federico’s talk aimed to demystify the process, offering a general framework for building DSLs and evaluating their effort, appealing to developers seeking to enhance productivity through custom tools.
Methodology for Building DSLs
Federico’s methodology for building DSLs with Kotlin centers on a structured process encompassing grammar definition, parsing, and editor integration. He advocated using ANTLR, a powerful parser generator, to define the grammar of a DSL declaratively. ANTLR’s ability to generate parsers for multiple languages, including JavaScript for browser-based applications, simplifies cross-platform development. Federico demonstrated how ANTLR handles operator precedence automatically, reducing the complexity of grammar rules and producing simpler, maintainable parsers compared to handwritten alternatives.
Kotlin’s role in this methodology is twofold: its concise syntax streamlines the implementation of parsers and compilers, while its metaprogramming capabilities, such as type-safe builders, facilitate the creation of intuitive DSL syntax. Federico showcased a custom framework, Canvas, to build editors, abstracting common functionality to reduce development time. Errors are collected during validation and displayed collectively in the editor, ensuring comprehensive feedback for syntax and semantic issues. This approach leverages Kotlin’s interoperability to integrate DSLs with existing systems, enhancing their practicality.
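The type-safe builder technique mentioned above can be sketched in a few self-contained lines. This is an illustrative miniature, not Federico's framework; the names (model, config, entry) are hypothetical.

```kotlin
// A tiny internal DSL that assembles a configuration-like AST,
// built from lambdas-with-receiver -- the standard Kotlin builder pattern.
class Config {
    val entries = mutableMapOf<String, Any>()
    fun entry(key: String, value: Any) { entries[key] = value }
}

class Model {
    val configs = mutableListOf<Config>()
    fun config(init: Config.() -> Unit) { configs += Config().apply(init) }
}

fun model(init: Model.() -> Unit): Model = Model().apply(init)

fun main() {
    val m = model {
        config {
            entry("timeout", 30)
            entry("host", "localhost")
        }
    }
    println(m.configs.first().entries)  // {timeout=30, host=localhost}
}
```

Because the nesting is enforced by receiver types, invalid structure (an `entry` outside a `config` block) is a compile error rather than a runtime parse failure, which is what makes this style attractive for embedded DSLs.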
Practical Applications and Tools
The practical applications of Federico’s approach lie in creating DSLs that address specific business needs, such as configuration languages or data processing scripts. By using Kotlin, developers can build lightweight, domain-focused languages that integrate seamlessly with JVM-based applications. Federico’s use of ANTLR for parsing supports auto-completion in editors, enhancing the developer experience. His Canvas framework, tailored for editor development, demonstrates how reusable components can accelerate the creation of language ecosystems, making DSLs viable for projects with limited resources.
The methodology’s emphasis on declarative grammar definition with ANTLR ensures portability across platforms, such as generating JavaScript parsers for web-based DSLs. Federico’s approach to error handling, collecting and displaying all errors simultaneously, improves usability by providing clear feedback. These tools and techniques make DSL development accessible, enabling developers to create specialized languages that enhance productivity in domains like finance, engineering, or automation, where tailored syntax can simplify complex tasks.
Implications for Software Development
Federico’s presentation underscores Kotlin’s transformative potential in language engineering. By reducing the effort required to build DSLs, Kotlin democratizes language development, making it feasible for smaller teams or projects. The use of ANTLR and custom frameworks like Canvas lowers the technical barrier, allowing developers to focus on domain-specific requirements rather than infrastructure. This has significant implications for industries where custom languages can streamline workflows, from data analysis to system configuration.
For the broader software ecosystem, Federico’s approach highlights Kotlin’s versatility beyond traditional application development. Its metaprogramming capabilities position it as a powerful tool for creating developer-friendly languages, challenging the dominance of general-purpose languages in specialized domains. The emphasis on community feedback, as evidenced by Federico’s engagement with audience questions, ensures that DSL development evolves with practical needs, fostering a collaborative ecosystem. As Kotlin’s adoption grows, its role in language engineering could redefine how developers approach domain-specific challenges.
Conclusion
Federico Tomassetti’s presentation at KotlinConf 2017 illuminated the potential of Kotlin for building Domain Specific Languages, leveraging its concise syntax and metaprogramming capabilities to streamline language engineering. The methodology, combining ANTLR for parsing and custom frameworks for editor development, offers a pragmatic approach to creating efficient, domain-focused languages. By reducing the cost and complexity of DSL development, Kotlin enables developers to craft tools that enhance productivity across diverse domains. Federico’s insights position Kotlin as a catalyst for innovation in language engineering, with lasting implications for software development.
Links
[KotlinConf2017] Kotlin Static Analysis with Android Lint
Lecturer
Tor Norbye is the technical lead for Android Studio at Google, where he has driven the development of numerous IDE features, including Android Lint, a static code analysis tool. With a deep background in software development and tooling, Tor is the primary author of Android Lint, which integrates with Android Studio, IntelliJ IDEA, and Gradle to enhance code quality. His expertise in static analysis and IDE development has made significant contributions to the Android ecosystem, supporting developers in building robust applications.
Abstract
Static code analysis is critical for ensuring the reliability and quality of Android applications. This article analyzes Tor Norbye’s presentation at KotlinConf 2017, which explores Android Lint’s support for Kotlin and its capabilities for custom lint checks. It examines the context of static analysis in Android development, the methodology of leveraging Lint’s Universal Abstract Syntax Tree (UAST) for Kotlin, the implementation of custom checks, and the implications for enhancing code quality. Tor’s insights highlight how Android Lint empowers developers to enforce best practices and maintain robust Kotlin-based applications.
Context of Static Analysis in Android
At KotlinConf 2017, Tor Norbye presented Android Lint as a cornerstone of code quality in Android development, particularly with the rise of Kotlin as a first-class language. Introduced in 2011, Android Lint is an open-source static analyzer integrated into Android Studio, IntelliJ IDEA, and Gradle, offering over 315 checks to identify bugs without executing code. As Kotlin gained traction in 2017, ensuring its compatibility with Lint became essential to support Android developers transitioning from Java. Tor’s presentation addressed this need, focusing on Lint’s ability to analyze Kotlin code and extend its functionality through custom checks.
The context of Tor’s talk reflects the challenges of maintaining code quality in dynamic, large-scale Android projects. Static analysis mitigates issues like null pointer exceptions, resource leaks, and API misuse, which are critical in mobile development where performance and reliability are paramount. By supporting Kotlin, Lint enables developers to leverage the language’s type-safe features while ensuring adherence to Android best practices, fostering a robust development ecosystem.
Methodology of Android Lint with Kotlin
Tor’s methodology centers on Android Lint’s use of the Universal Abstract Syntax Tree (UAST) to analyze Kotlin code. UAST provides a unified representation of code across Java and Kotlin, enabling Lint to apply checks consistently. Tor explained how Lint examines code statically, identifying potential bugs like incorrect API usage or performance issues without runtime execution. The tool’s philosophy prioritizes caution, surfacing potential issues even if they risk false positives, with suppression mechanisms to dismiss irrelevant warnings.
A key focus was custom lint checks, which allow developers to extend Lint’s functionality for library-specific rules. Tor demonstrated writing a custom check for Kotlin, leveraging UAST to inspect code structures and implement quickfixes that integrate with the IDE. For example, a check might enforce proper usage of a library’s API, offering automated corrections via code completion. This methodology ensures that developers can tailor Lint to project-specific needs, enhancing code quality and maintainability in Kotlin-based Android applications.
Implementing Custom Lint Checks
Implementing custom lint checks involves defining rules that analyze UAST nodes to detect issues and provide fixes. Tor showcased a practical example, creating a check to validate Kotlin code patterns, such as ensuring proper handling of nullable types. The process involves registering checks with Lint’s infrastructure, which loads them dynamically from libraries. These checks can inspect method calls, variable declarations, or other code constructs, flagging violations and suggesting corrections that appear in Android Studio’s UI.
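The nullable-handling pattern such a check targets can be shown directly. This sketch is illustrative and not from Tor's demo; it contrasts the construct a check might flag with the preferred alternative.

```kotlin
// `!!` converts a nullable into a potential runtime crash -- the kind of
// pattern a custom lint check could flag statically.
fun risky(s: String?): Int = s!!.length

// Safe-call plus Elvis handles the null explicitly -- the quickfix a
// check could offer in the IDE.
fun safe(s: String?): Int = s?.length ?: 0

fun main() {
    println(safe(null))      // 0
    println(safe("lint"))    // 4
}
```

A lint detector would locate the `!!` operator in the UAST representation of the first function and surface a warning with the second form as the suggested fix.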
Tor emphasized the importance of clean APIs for custom checks, noting plans to enhance Lint’s configurability with an options API. This would allow developers to customize check parameters (e.g., string patterns or ranges) directly from build.gradle or IDE interfaces, simplifying configuration. The methodology’s integration with Gradle and IntelliJ ensures seamless adoption, enabling developers to enforce project-specific standards without relying on external tools or complex setups.
Future Directions and Community Engagement
Tor outlined future enhancements for Android Lint, including improved support for Kotlin script files (.kts) in Gradle builds and advanced call graph analysis for whole-program insights. These improvements aim to address limitations in current checks, such as incomplete Gradle file support, and enhance Lint’s ability to perform comprehensive static analysis. Plans to transition from Java-centric APIs to UAST-focused ones promise a more stable, Kotlin-friendly interface, reducing compatibility issues and simplifying check development.
Community engagement is a cornerstone of Lint’s evolution. Tor encouraged developers to contribute checks to the open-source project, sharing benefits with the broader Android community. The emphasis on community-driven development ensures that Lint evolves to meet real-world needs, from small-scale apps to enterprise projects. By fostering collaboration, Tor’s vision positions Lint as a vital tool for maintaining code quality in Kotlin’s growing ecosystem.
Conclusion
Tor Norbye’s presentation at KotlinConf 2017 highlighted Android Lint’s pivotal role in ensuring code quality for Kotlin-based Android applications. By leveraging UAST for static analysis and supporting custom lint checks, Lint empowers developers to enforce best practices and adapt to project-specific requirements. The methodology’s integration with Android Studio and Gradle, coupled with plans for enhanced configurability and community contributions, strengthens Kotlin’s appeal in Android development. As Kotlin continues to shape the Android ecosystem, Lint’s innovations ensure robust, reliable applications, reinforcing its importance in modern software development.
Links
[KotlinConf2017] Understand Every Line of Your Codebase
Lecturer
Victoria Gonda is a software developer at Collective Idea, specializing in Android and web applications with a focus on improving user experiences through technology. With a background in Computer Science and Dance, Victoria combines technical expertise with creative problem-solving, contributing to projects that enhance accessibility and engagement. Boris Farber is a Senior Partner Engineer at Google, focusing on Android binary analysis and performance optimization. As the lead of ClassyShark, an open-source tool for browsing Android and Java executables, Boris brings deep expertise in understanding compiled code.
- Collective Idea company website
- Google company website
- Victoria Gonda on LinkedIn
- Boris Farber on LinkedIn
Abstract
Understanding the compiled output of Kotlin code is essential for optimizing performance and debugging complex applications. This article analyzes Victoria Gonda and Boris Farber’s presentation at KotlinConf 2017, which explores how Kotlin and Java code compiles to class files and introduces tools for inspecting compiled code. It examines the context of Kotlin’s compilation pipeline, the methodology of analyzing bytecode, the use of inspection tools like ClassyShark, and the implications for developers seeking deeper insights into their codebases. The analysis highlights how these tools empower developers to make informed optimization decisions.
Context of Kotlin’s Compilation Pipeline
At KotlinConf 2017, Victoria Gonda and Boris Farber addressed the challenge of understanding Kotlin’s compiled output, a critical aspect for developers transitioning from Java or optimizing performance-critical applications. Kotlin’s concise and expressive syntax, while enhancing productivity, raises questions about its compiled form, particularly when compared to Java. As Kotlin gained traction in Android and server-side development, developers needed tools to inspect how their code translates to bytecode, ensuring performance and compatibility with JVM-based systems.
Victoria and Boris’s presentation provided a timely exploration of Kotlin’s build pipeline, focusing on its similarities and differences with Java. By demystifying the compilation process, they aimed to equip developers with the knowledge to analyze and optimize their code. The context of their talk reflects Kotlin’s growing adoption and the need for transparency in how its features, such as lambdas and inline functions, impact compiled output, particularly in performance-sensitive scenarios like Android’s drawing loops or database operations.
Methodology of Bytecode Analysis
The methodology presented by Victoria and Boris centers on dissecting Kotlin’s compilation to class files, using tools like ClassyShark to inspect bytecode. They explained how Kotlin’s compiler transforms high-level constructs, such as lambdas and inline functions, into JVM-compatible bytecode. Inline functions, for instance, copy their code directly into the call site, reducing overhead but potentially increasing code size. The presenters demonstrated decompiling class files to reveal metadata used by the Kotlin runtime, such as type information for generics, providing insights into how Kotlin maintains type safety at runtime.
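The inlining trade-off can be seen in a minimal, runnable comparison; the function names here are illustrative, not from the talk.

```kotlin
// Inline: the compiler copies this body, and the body of the lambda
// passed to it, into every call site, so no Function0 object is created.
inline fun twice(block: () -> Int): Int = block() + block()

// Non-inline: the lambda compiles to a synthetic class implementing
// Function0<Integer>, adding a method and an allocation at each call site.
fun twiceNoInline(block: () -> Int): Int = block() + block()

fun main() {
    println(twice { 21 })          // prints 42; body inlined
    println(twiceNoInline { 21 })  // prints 42; goes through a Function0 object
}
```

Compiling this with `kotlinc` and browsing the output, with ClassyShark or with `javap -c` on the command line, shows the difference directly: the `twice` call site contains the lambda's arithmetic inline, while `twiceNoInline` pulls in a separate synthetic lambda class.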
ClassyShark, led by Boris, serves as a key tool for this analysis, allowing developers to browse Android and Java executables and understand their structure. The methodology involves writing Kotlin code, compiling it, and inspecting the resulting class files to identify performance implications, such as method count increases from lambdas. Victoria and Boris emphasized a pragmatic approach: analyze code before optimizing, ensuring that performance tweaks target actual bottlenecks rather than speculative issues, particularly in mission-critical contexts like Android rendering.
Practical Applications and Optimization
The practical applications of bytecode analysis lie in optimizing performance and debugging complex issues. Victoria and Boris showcased how tools like ClassyShark reveal the impact of Kotlin’s features, such as inline functions adding methods to class files. For Android developers, this is critical, as method count limits can affect app size and performance. By inspecting decompiled classes, developers can identify unnecessary object allocations or inefficient constructs, optimizing code for scenarios like drawing loops or database operations where performance is paramount.
The presenters also addressed the trade-offs of inline functions, noting that while they reduce call overhead, excessive use can inflate code size. Their methodology encourages developers to test performance impacts before applying optimizations, using tools to measure method counts and object allocations. This approach ensures that optimizations are data-driven, avoiding premature changes that may not yield significant benefits. The open-source nature of ClassyShark further enables developers to customize their analysis, tailoring inspections to specific project needs.
Implications for Developers
The insights from Victoria and Boris’s presentation have significant implications for Kotlin developers. Understanding the compiled output of Kotlin code empowers developers to make informed decisions about performance and compatibility, particularly in Android development where resource constraints are critical. Tools like ClassyShark democratize bytecode analysis, enabling developers to debug issues that arise from complex features like generics or lambdas. This transparency fosters confidence in adopting Kotlin for performance-sensitive applications, bridging the gap between its high-level syntax and low-level execution.
For the broader Kotlin ecosystem, the presentation underscores the importance of tooling in supporting the language’s growth. By providing accessible methods to inspect and optimize code, Victoria and Boris contribute to a culture of informed development, encouraging developers to explore Kotlin’s internals without fear of hidden costs. Their emphasis on community engagement, through questions and open-source tools, ensures that these insights evolve with developer feedback, strengthening Kotlin’s position as a reliable, performance-oriented language.
Conclusion
Victoria Gonda and Boris Farber’s presentation at KotlinConf 2017 provided a comprehensive guide to understanding Kotlin’s compiled output, leveraging tools like ClassyShark to demystify the build pipeline. By analyzing bytecode and addressing optimization trade-offs, they empowered developers to make data-driven decisions for performance-critical applications. The methodology’s focus on practical analysis and accessible tooling enhances Kotlin’s appeal, particularly for Android developers navigating resource constraints. As Kotlin’s adoption grows, such insights ensure that developers can harness its expressive power while maintaining control over performance and compatibility.
Links
[DotSecurity2017] Names and Security
As the internet expands and identities multiply online, naming becomes more than a label: it is central to both legitimacy and liability. Paul Mockapetris, the architect of the Domain Name System (DNS), explored this connection at dotSecurity 2017, arguing that names, rather than addresses, are becoming the primary anchors of network identity. His career, from USC's Information Sciences Institute, where he created DNS in 1983, to his role as chief scientist at ThreatSTOP, gives his perspective rare authority, turning arcane protocol detail into practical weapons against cybercrime.
Paul opened with first principles: a network's value lies in connectivity, and generative systems thrive by repurposing existing building blocks. DNS exemplifies this, providing a hierarchical namespace with delegated, federated administration: the root zone delegates to top-level domains, which delegate further, so authorities remain autonomous yet interoperable. That same prominence makes names a target for phishing, malware distribution, and denial of service. Paul walked through the main perils: DNS amplification (open resolvers unwittingly turn small queries into much larger responses aimed at a victim), cache poisoning (forged records that persist for the lifetime of their TTL), and BGP hijacking (traffic rerouted at the routing layer).
Countermeasures exist but are unevenly deployed. DNSSEC signs records with RRSIG and anchors delegations with DS records, yet adoption stood at roughly 1% in 2017. Paul therefore emphasized name-based defenses: reputation systems such as SPF (Sender Policy Framework) and DMARC (Domain-based Message Authentication, Reporting and Conformance) for email, and DNS filtering through blacklists and whitelists. ThreatSTOP applies this idea by using DNS itself as a policy enforcement point, with policies chosen by the user rather than imposed by the ISP, a distinction Paul noted matters to organizations such as the EFF that are wary of censorship. Filtering at the DNS layer can break the malware kill chain, blocking both the initial download and later command-and-control callbacks.
Paul closed with a forecast: addresses will matter less and names more, content may be chunked and cryptographically named, and bounties will reward those who detect tampered data. In his view, this renaissance of naming is the scaffold on which the internet's security and integrity will be rebuilt.
Naming Principles and the Threat Landscape
Paul grounded the talk in principles: networks derive their value from connectivity, and DNS's delegated, hierarchical namespace shows how a generative design scales. He then catalogued the threats: amplification attacks, cache poisoning, and BGP hijacking.
DNSSEC and Name-Based Defenses
DNSSEC provides cryptographic assurance, but its adoption lags; reputation systems such as SPF and DMARC fill part of the gap. ThreatSTOP's approach personalizes DNS filtering policies and cuts the malware kill chain short.
Bounties and the Road Ahead
Paul predicted the eclipse of addresses by names, cryptographically named content chunks, and bounties for detecting tampered data: naming as the foundation of future security.
Links
[KotlinConf2017] A View State Machine for Network Calls on Android
Lecturer
Amanda Hill is an experienced Android developer currently working as a consultant at thoughtbot, a firm specializing in mobile and web application development. A graduate of Cornell University, Amanda previously served as the lead Android developer at Venmo, where she honed her expertise in building robust mobile applications. Based in San Francisco, she brings a practical perspective to Android development, with a passion for tackling challenges posed by evolving design specifications and enhancing user interfaces through innovative solutions.
Abstract
Managing network calls in Android applications requires robust solutions to handle dynamic UI changes. This article analyzes Amanda Hill’s presentation at KotlinConf 2017, which introduces a view state machine using Kotlin’s sealed classes to streamline network request handling. It explores the context of Android development challenges, the methodology of implementing a state machine, its practical applications, and the implications for creating adaptable, maintainable UI code. Amanda’s approach leverages Kotlin’s type-safe features to address the complexities of ever-changing design specifications, offering a reusable framework for Android developers.
Context of Android Network Challenges
At KotlinConf 2017, Amanda Hill addressed a common pain point in Android development: managing network calls amidst frequently changing UI requirements. As an Android developer at thoughtbot, Amanda drew on her experience at Venmo to highlight the frustrations caused by evolving design specs, which often disrupt UI logic tied to network operations. Traditional approaches to network calls, such as direct API integrations or ad-hoc state management, often lead to fragile code that struggles to adapt to UI changes, resulting in maintenance overhead and potential bugs.
Kotlin’s adoption in Android development, particularly after Google’s 2017 endorsement, provided an opportunity to leverage its type-safe features to address these challenges. Amanda’s presentation focused on creating a view state machine using Kotlin’s sealed classes, a feature that restricts class hierarchies to a defined set of states. This approach aimed to encapsulate UI states related to network calls, making Android applications more resilient to design changes and improving code clarity for developers working on dynamic, data-driven interfaces.
Methodology of the View State Machine
Amanda’s methodology centered on using Kotlin’s sealed classes to define a finite set of UI states for network calls, such as Loading, Success, and Error. Sealed classes ensure type safety by restricting possible states, allowing the compiler to enforce exhaustive handling of all scenarios. Amanda proposed a view model interface to standardize state interactions, with methods like getTitle and getPicture to format data for display. This interface serves as a contract, enabling different view models (e.g., for ice-cream cones) to implement specific formatting logic while adhering to a common structure.
In her live demo, Amanda illustrated building an Android app that uses the view state machine to manage network requests. The state machine processes API responses, mapping raw data (e.g., a calorie count of 120) into formatted outputs (e.g., “120 Calories”). By isolating formatting logic in the view model, independent of Android’s activity or fragment lifecycles, the approach ensures testability and reusability. Amanda emphasized flexibility, encouraging developers to customize the state machine for specific use cases, balancing genericity with adaptability to meet diverse application needs.
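A minimal, self-contained sketch of such a state machine follows; the names (`ViewState`, `ConeViewModel`, the `Cone` data class) are illustrative stand-ins for Amanda's demo code, not her actual API.

```kotlin
// Raw data as it might arrive from the network.
data class Cone(val flavor: String, val calories: Int)

// The view model formats raw data for display, independent of any
// activity or fragment, which keeps it unit-testable.
class ConeViewModel(private val cone: Cone) {
    fun getTitle(): String = cone.flavor
    fun getCalories(): String = "${cone.calories} Calories"
}

// Sealed class: the compiler knows these are the only possible states,
// so a `when` over them must handle every case exhaustively.
sealed class ViewState {
    object Loading : ViewState()
    data class Success(val viewModel: ConeViewModel) : ViewState()
    data class Error(val message: String) : ViewState()
}

// Maps each state to what the UI should show; in a real app this
// would update views instead of returning a string.
fun render(state: ViewState): String = when (state) {
    is ViewState.Loading -> "Showing spinner"
    is ViewState.Success ->
        "${state.viewModel.getTitle()}: ${state.viewModel.getCalories()}"
    is ViewState.Error -> "Error: ${state.message}"
}

fun main() {
    println(render(ViewState.Loading))
    println(render(ViewState.Success(ConeViewModel(Cone("Vanilla", 120)))))
    println(render(ViewState.Error("Network unavailable")))
}
```

Because `ConeViewModel` has no Android dependencies, a plain assertion such as `check(ConeViewModel(Cone("Vanilla", 120)).getCalories() == "120 Calories")` serves as a unit test without any emulator or lifecycle scaffolding.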
Practical Applications and Testability
The view state machine’s practical applications lie in its ability to simplify UI updates in response to network call outcomes. Amanda demonstrated how the state machine handles transitions between states, ensuring that UI components reflect the current state (e.g., displaying a loading spinner or an error message). By decoupling state logic from Android’s lifecycle methods, the approach reduces dependencies on activities or fragments, making the code more modular and easier to maintain. This modularity is particularly valuable in dynamic applications where UI requirements evolve frequently.
Testability is a key strength of Amanda’s approach. The view model’s independence from lifecycle components allows unit tests to verify formatting logic without involving Android’s runtime environment. For example, tests can assert that a view model correctly formats a calorie count, ensuring reliability across UI changes. Amanda’s focus on simplicity ensures that developers can implement the state machine without extensive refactoring, making it accessible for teams adopting Kotlin in Android projects.
Implications for Android Development
Amanda’s view state machine has significant implications for Android development, particularly in enhancing code maintainability and adaptability. By leveraging Kotlin’s sealed classes, developers can create robust, type-safe state management systems that reduce errors caused by unhandled states. The approach aligns with Kotlin’s emphasis on conciseness and safety, enabling developers to handle complex network interactions with minimal boilerplate. This is particularly valuable in fast-paced development environments where UI requirements change frequently, such as in fintech or e-commerce apps.
For the broader Android ecosystem, the state machine promotes best practices in separating concerns, encouraging developers to isolate business logic from UI rendering. Its testability supports agile development workflows, where rapid iterations and reliable testing are critical. Amanda’s encouragement to customize the state machine fosters a flexible approach, empowering developers to tailor solutions to specific project needs while leveraging Kotlin’s strengths. As Kotlin continues to dominate Android development, such innovations enhance its appeal for building scalable, user-friendly applications.
Conclusion
Amanda Hill’s presentation at KotlinConf 2017 introduced a powerful approach to managing network calls in Android using Kotlin’s sealed classes. The view state machine simplifies state management, enhances testability, and adapts to evolving UI requirements, addressing key challenges in Android development. By leveraging Kotlin’s type-safe features, Amanda’s methodology offers a reusable, maintainable framework that aligns with modern development practices. As Android developers increasingly adopt Kotlin, this approach underscores the language’s potential to streamline complex workflows, fostering robust and adaptable applications.
Links
[DotSecurity2017] DevOps and Security
In modern development, where release velocity keeps accelerating, security must be built into speed rather than set against it. Zane Lackey, CTO of Signal Sciences and formerly head of security at Etsy, made this case at dotSecurity 2017, recounting Etsy's evolution from waterfall releases to DevOps, up to a hundred deploys a day, and the security self-sufficiency that grew alongside it. His central message: security teams must shift from gatekeepers who block releases to enablers who give developers visibility and fast feedback.
Zane described the transformations DevOps brings: release cycles shrinking from eighteen months to minutes, infrastructure becoming ephemeral with cloud instances and containers, and developers taking ownership of their own deploys. Security has to change accordingly, from an outsourced obstruction to an integrated source of feedback that helps teams fix issues quickly. Etsy's culture supported this with blameless postmortems and ChatOps: vulnerabilities were surfaced in Slack, and fixes were celebrated publicly.
Visibility led the way: security dashboards and real-time signals, the problem Signal Sciences now addresses as a product, let teams detect anomalous surges as they happen. Feedback was woven into the pipeline, with security checks in CI and on pull requests, and findings expressed in language developers understand. Zane illustrated the payoff with an anecdote: when a researcher reported an exploit, the team's ability to ship an immediate fix turned the exchange into a positive, collaborative relationship.
The dividend of DevOps, Zane concluded, is that safety surges in speed's slipstream: ordinary engineers are empowered, and mishaps are mitigated before they escalate.
Transformations and the Security Shift
Zane focused on the core changes: faster releases, ephemeral cloud infrastructure, and developer ownership of deploys, which together force security to move from gatekeeper to enabler.
Visibility and Feedback
Dashboards and ChatOps surface issues; CI and pull-request checks deliver feedback where developers already work. Zane's anecdote of a cooperative researcher showed how rapid fixes build goodwill.
Links
[ScalaDaysNewYork2016] Large-Scale Graph Analysis with Scala and Akka
Ben Fonarov, a Big Data specialist at Capital One, presented a compelling case study at Scala Days New York 2016 on building a large-scale graph analysis engine using Scala, Akka, and HBase. Ben detailed the architecture and implementation of Athena, a distributed time-series graph system designed to deliver integrated, real-time data to enterprise users, addressing the challenges of data overload in a banking environment.
Addressing Enterprise Data Needs
Ben Fonarov opened by outlining the motivation behind Athena: the need to provide integrated, real-time data to users at Capital One. Unlike traditional table-based thinking, Athena represents data as a graph, modeling entities like accounts and transactions to align with business concepts. Ben highlighted the challenges of data overload, with multiple data warehouses and ETL processes generating vast datasets. Athena’s visual interface allows users to define graph schemas, ensuring data is accessible in a format that matches their mental models.
Architectural Considerations
Ben described two architectural approaches to building Athena. The naive implementation used a single actor to process queries, which was insufficient for production-scale loads. The robust solution leveraged an Akka cluster, distributing query processing across nodes for scalability. A query parser translated user requests into graph traversals, while actors managed tasks and streamed results to users. This design ensured low latency and scalability, handling up to 200 billion nodes efficiently.
Streaming and Optimization
A key feature of Athena, Ben explained, was its ability to stream results in real time, avoiding the batch processing limitations of frameworks like TinkerPop’s Gremlin. By using Akka’s actor-based concurrency, Athena processes queries incrementally, delivering results as they are computed. Ben discussed optimizations, such as limiting the number of nodes per actor to prevent bottlenecks, and plans to integrate graph algorithms like PageRank to enhance analytical capabilities.
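Athena's streaming is implemented with Akka actors in Scala; as a language-neutral illustration of the batch-versus-incremental distinction Ben drew, here is a small Kotlin sketch using lazy sequences (not Athena's actual code — `traverse` is a hypothetical stand-in for a graph traversal).

```kotlin
// Stand-in for a graph traversal that yields node ids lazily.
fun traverse(limit: Int): Sequence<Int> = sequence {
    for (node in 1..limit) {
        // Each node is emitted as soon as it is computed, so a
        // consumer sees results before the traversal finishes.
        yield(node)
    }
}

fun main() {
    // Batch: the whole result set is materialized before delivery,
    // the limitation Ben attributed to Gremlin-style execution.
    val batch: List<Int> = traverse(5).toList()

    // Streaming: take the first results without computing the rest,
    // analogous to Athena delivering partial results to the user.
    val firstTwo: List<Int> = traverse(1_000_000).take(2).toList()

    println(batch)     // [1, 2, 3, 4, 5]
    println(firstTwo)  // [1, 2]
}
```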
Future Directions and Community Engagement
Ben concluded by sharing future plans for Athena, including adopting a Gremlin-like DSL for graph traversals and integrating with tools like Spark and H2O. He emphasized the importance of community feedback, inviting developers to join Capital One’s data team to contribute to Athena’s evolution. Running on AWS EC2, Athena represents a scalable solution for enterprise graph analysis, poised to transform how banks handle complex data relationships.