[Devoxx FR 2021] IoT Open Source at Home
At Devoxx France 2021, François Mockers, an IoT enthusiast, delivered a 32-minute talk titled IoT open source à la maison (YouTube). This session shared his decade-long journey managing over 300 open-source IoT devices at home, likening home automation to production IT challenges. From connected light bulbs to zoned heating and sunlight-responsive shutters, Mockers explored protocols (ZigBee, Z-Wave, 433MHz, Wi-Fi) and tools (Home Assistant, ESPHome, Node-RED, Ansible, InfluxDB, Grafana). Aligned with Devoxx’s IoT and cloud themes, the talk offered practical insights for developers building cost-effective, secure home automation systems.
IoT: A Growing Home Ecosystem
Mockers began by highlighting the ubiquity of IoT devices, asking the audience how many owned connected devices (00:00:30–00:00:45). Most had over five, some over 50, and Mockers himself managed ~300, from Philips Hue bulbs to custom-built sensors (00:00:45–00:01:00). He started with commercial devices a decade ago but shifted to DIY solutions five years ago for cost savings and flexibility (00:00:15–00:00:30). His setup mirrors production environments, with “unhappy users” (family), legacy systems, and protocol sprawl, making it a relatable challenge for developers.
IoT Protocols: A Diverse Landscape
Mockers provided a technical overview of IoT protocols, each with unique strengths and challenges (00:01:00–00:08:15):
- ZigBee: Used by Philips Hue and IKEA, ZigBee supports lights, switches, plugs, motion sensors, and shutters in a mesh network for extended range. Devices like battery-powered switches consume minimal power, while plugged-in bulbs act as repeaters. Security issues, like a past Philips Hue hack allowing remote on/off control, highlight risks (00:01:15–00:02:15).
- Z-Wave: Similar to ZigBee but less common, used by Fibaro and Aeotec. It supports up to 232 devices (vs. ZigBee’s 65,000) with similar mesh functionality (00:02:15–00:02:45).
- 433.92 MHz: A frequency band hosting protocols like Oregon Scientific (sensors), Somfy (shutters), and Chacon/DIO (switches). These are cheap (~€10 vs. €50 for ZigBee/Z-Wave) but insecure, allowing neighbors’ devices to be controlled with a powerful transceiver. Car keys and security boxes also use this band, complicating urban use (00:02:45–00:04:00).
- Wi-Fi: Popular for startups like Netatmo (weather, security), LIFX (bulbs), and Tuya (garden devices). Wi-Fi devices are plug-and-play but power-hungry and reliant on external cloud APIs, posing risks if internet or vendor services fail. Security is a concern, as hacked Wi-Fi devices fueled major botnets (00:04:15–00:06:00).
- Bluetooth: Used for lights, speakers, and beacons, Bluetooth offers localization but requires phone proximity, limiting automation (00:06:00–00:06:30).
- Powerline (CPL), Fil Pilote, and Infrared: Protocols like X10 and fil pilote (for electric radiators) travel over a home’s electrical wiring, so their reliability depends on wiring quality. Infrared signals control AV equipment and air conditioners but require line-of-sight and provide no status feedback (00:06:45–00:08:00).
- LoRaWAN/Sigfox: Long-range protocols for smart cities, not home use (00:08:00–00:08:15).
Open-Source Tools for Home Automation
Mockers detailed his open-source toolchain, emphasizing flexibility and integration (00:08:15–00:20:45):
Home Assistant
Home Assistant, with 1,853 integrations, is Mockers’ central hub, supporting Alexa, Google Assistant, and Siri. It offers mobile apps, automation, and dashboards but becomes unwieldy with many devices. Mockers disabled its database and UI, using it solely for device discovery (00:08:30–00:09:45). Comparable open-source hubs include OpenHAB (2,526 integrations) and Domoticz (500 integrations).
ESPHome
ESPHome turns ESP8266/ESP32 chips into custom sensors, connected via Wi-Fi or Bluetooth. Mockers builds temperature, humidity, and light sensors for ~€10 (vs. €50 for commercial equivalents). Devices are configured in YAML and integrate directly into Home Assistant (00:10:00–00:11:45). Example:
esphome:
  name: sensor_t1_mini
  platform: ESP8266

api:
  services:
    - service: update
      then:
        - logger.log: "Updating firmware"

output:
  - platform: gpio
    pin: GPIO4
    id: led

sensor:
  - platform: bme280
    temperature:
      name: "Temperature"
    pressure:
      name: "Pressure"
    humidity:
      name: "Humidity"
Node-RED
Node-RED, with 3,485 integrations, handles automation via low-code event-driven flows. Mockers routes all Home Assistant events to Node-RED, creating rules like bridging 433MHz remotes to ZigBee bulbs. Its responsive dashboard outperforms Home Assistant’s (00:12:00–00:14:00).
InfluxDB and Grafana
InfluxDB stores time-series data from devices, replacing Home Assistant’s PostgreSQL. Mockers experimented with machine learning for anomaly detection and room occupancy prediction, though the latter was unpopular with his family (00:14:15–00:15:15). Grafana visualizes historical data, like weekly temperature trends, with polished dashboards (00:15:15–00:15:45).
Telegraf
Telegraf runs scripts for devices lacking Home Assistant integration, sending the data to InfluxDB. It also monitors network and CPU usage.
Ansible and Pi-hole
Ansible automates Docker container deployment on Raspberry Pis, with a role for each service and a web page listing all services. Pi-hole, a DNS-based ad blocker, caches queries and logs IoT device DNS requests, exposing suspicious activity.
Security and Deployment
Security is critical with IoT’s attack surface. Mockers recommends:
- A separate Wi-Fi network for IoT devices to isolate them from PCs.
- Limiting internet access for devices that support a local-only mode.
- A VPN for remote access instead of opening ports.
- Factory-resetting devices before disposal to erase Wi-Fi credentials.
Deployment uses Docker containers on Raspberry Pis, managed by Ansible. Mockers avoids Kubernetes due to Raspberry Pi constraints, opting for custom scripts. Hardware includes Raspberry Pis, 433MHz transceivers, and Wemos ESP8266 boards with shields for sensors (00:19:45–00:20:45).
Audience Interaction and Lessons
Mockers engaged the audience with questions (00:00:30) and a Q&A session, addressing:
- Usability for family (transparent for his wife, usable by his six-year-old)
- Home Assistant backups via Ansible and hourly NAS snapshots
- Insecure 433MHz devices (cheap but risky)
- Air conditioning control via infrared and fil pilote for radiators
- A universal remote consolidating five protocols, reducing complexity
- A humorous “divorce threat” from a beeping device, emphasizing user experience
Conclusion
Mockers’ talk showcased IoT as an accessible, developer-friendly domain using open-source tools. His setup, blending ZigBee, Wi-Fi, and DIY sensors with Home Assistant, Node-RED, and Grafana, offers a scalable, cost-effective model. Security and automation align with Devoxx’s cloud and IoT focus, inspiring developers to experiment safely. The key takeaway: quality data and user experience are critical for home automation success.
[DevoxxFR 2021] Maximizing Productivity with Programmable Ergonomic Keyboards: Insights from Alexandre Navarro
In an enlightening session at Devoxx France 2021, Alexandre Navarro, a seasoned Java backend developer, captivated the audience with a deep dive into the world of programmable ergonomic keyboards. His presentation, titled “Maximizing Your Productivity with a Programmable Ergonomic Keyboard,” unveiled the historical evolution of keyboards, the principles of ergonomic design, and practical strategies for customizing keyboards to enhance coding efficiency. Alexandre’s expertise, honed over eleven years of typing in the Bépo layout and eight years on a TextBlade, offers developers a compelling case for rethinking their primary input device. This post explores the key themes of his talk, providing actionable insights for programmers seeking to optimize their workflow.
A Journey Through Keyboard History
Alexandre begins by tracing the lineage of keyboards, a journey that illuminates why our modern layouts exist. In the 1870s, early typewriters resembled pianos with alphabetical key arrangements, mere prototypes of today’s devices. By 1874, the Sholes and Glidden typewriter introduced a layout resembling QWERTY, a design often misunderstood as a deliberate attempt to slow typists to prevent jamming. Alexandre debunks this myth, explaining that QWERTY was shaped by practical needs, such as placing frequent English digraphs like “TH” and “ER” for efficient typing. The addition of a number row and user feedback further refined the layout, with quirks like the absence of dedicated “0” and “1” keys—substituted by “O” and “I”—reflecting telegraphy influences.
This historical context sets the stage for understanding why QWERTY persists despite its limitations. Alexandre notes that modern keyboards, like the iconic IBM model, retain QWERTY’s staggered rows and non-aligned letters, a legacy of mechanical constraints irrelevant to today’s technology. His narrative underscores a critical point: many developers use keyboards designed for a bygone era, prompting a reevaluation of tools that dominate their daily work.
Defining Ergonomic Keyboards
Transitioning to ergonomics, Alexandre outlines the hallmarks of a keyboard designed for comfort and speed. He categorizes ergonomic features into three domains: physical key arrangement, letter layout, and key customization. Physically, an ergonomic keyboard should be orthogonal (straight rows, unlike QWERTY’s stagger), symmetrical to match human hand anatomy, flat to reduce tendon strain, and accessible to minimize finger travel. These principles challenge conventional designs, where number pads skew symmetry and elevated keys stress wrists.
Alexandre highlights two exemplary models: the Keyboardio Model 01 and the ErgoDox. The Keyboardio, which he uses, boasts orthogonal, symmetrical keys and accessible layouts, while the ErgoDox offers customizable switches and curvature. These keyboards prioritize user comfort, aligning with the natural positioning of hands to reduce fatigue during long coding sessions. By contrasting these with traditional keyboards, Alexandre emphasizes that ergonomic design is not a luxury but a necessity for developers who spend hours typing.
Optimizing with Programmable Keyboards
The heart of Alexandre’s talk lies in programming keyboards to unlock productivity. Programmable keyboards, like the ErgoDox and Keyboardio, emerged around 2011, powered by microcontrollers that developers can flash with custom firmware, often using Arduino-based C code or graphical tools. This flexibility allows users to redefine key functions, creating layouts tailored to their workflows.
Alexandre introduces key programming concepts, such as layers (up to 32, akin to switching between QWERTY and number pad modes), macros (single keys triggering complex shortcuts like “Ctrl+F”), and tap/hold behaviors (e.g., a key typing “A” when tapped but acting as “Ctrl” when held). These features enable developers to streamline repetitive tasks, such as navigating code or executing IDE shortcuts, directly from their home row. Alexandre’s personal setup, using the Bépo layout optimized for French, exemplifies how customization can enhance efficiency, even for English-heavy programming tasks.
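The tap/hold behaviour Alexandre describes can be modeled conceptually. The sketch below is not real keyboard firmware (which is typically Arduino-style C); it is a minimal Kotlin illustration of the decision a programmable keyboard makes, with the 200 ms threshold chosen arbitrarily for the example:

```kotlin
// Conceptual model of a tap/hold key: the same physical key emits a letter on
// a quick tap but acts as a modifier when held past a threshold.
data class KeyEvent(val pressedMs: Long, val releasedMs: Long)

// Illustrative threshold; real firmware makes this configurable.
const val HOLD_THRESHOLD_MS = 200L

fun resolveTapHold(event: KeyEvent, tap: String, hold: String): String =
    if (event.releasedMs - event.pressedMs >= HOLD_THRESHOLD_MS) hold else tap

fun main() {
    println(resolveTapHold(KeyEvent(0, 50), "a", "Ctrl"))   // quick tap
    println(resolveTapHold(KeyEvent(0, 400), "a", "Ctrl"))  // long hold
}
```

The same idea generalizes to layers: the firmware resolves each physical key through the currently active layer before emitting a keycode.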
Why Embrace Ergonomic Keyboards?
Alexandre concludes by addressing the “why” behind adopting ergonomic keyboards. Beyond speed, these devices offer comfort, reducing the risk of repetitive strain injuries—a concern for developers typing extensively. He shares his experience with the Bépo layout, which, while not optimized for English, outperforms QWERTY and AZERTY due to shared frequent letters and better hand alternation. For those hesitant to switch, Alexandre suggests starting with a blank keyboard to learn touch typing, ensuring all fingers engage without glancing at keys.
His call to action resonates with developers: mastering your keyboard is as essential as mastering your IDE. By investing in an ergonomic, programmable keyboard, programmers can transform a mundane tool into a productivity powerhouse. Alexandre’s insights, grounded in years of experimentation, inspire a shift toward tools that align with modern coding demands.
Links:
- Watch the full presentation: https://www.youtube.com/watch?v=zCMra9RgCzw
- Follow Devoxx France on LinkedIn: https://www.linkedin.com/in/devoxxfrance/
- Follow Devoxx France on Twitter: https://twitter.com/DevoxxFR
- Visit Devoxx France: https://www.devoxx.fr/
[NodeCongress2021] Introduction to the AWS CDK: Infrastructure as Node – Colin Ihrig
In the evolving landscape of cloud computing, developers increasingly seek tools that bridge the gap between application logic and underlying infrastructure. Colin Ihrig’s exploration of the AWS Cloud Development Kit (CDK) offers a compelling entry point into this domain, emphasizing how Node.js enthusiasts can harness familiar programming paradigms to orchestrate cloud resources seamlessly. By transforming abstract infrastructure concepts into executable code, the CDK empowers teams to move beyond cumbersome templates, fostering agility in deployment pipelines.
The CDK stands out as an AWS-centric framework for infrastructure as code, akin to established solutions like Terraform but tailored for those versed in high-level languages. Supporting JavaScript, TypeScript, Python, Java, and C#, it abstracts the intricacies of CloudFormation—the AWS service for defining and provisioning resources via JSON or YAML—into intuitive, object-oriented constructs. This abstraction not only simplifies the creation of scalable stacks but also preserves CloudFormation’s core advantages, such as consistent deployments and drift detection, where configurations are automatically reconciled with actual states.
Streamlining Cloud Architecture with Node.js Constructs
At its core, the CDK operates through a hierarchy of reusable building blocks called constructs, which encapsulate AWS services like S3 buckets, Lambda functions, or EC2 instances. Colin illustrates this with a straightforward Node.js example: instantiating a basic S3 bucket involves minimal lines of code, contrasting sharply with the verbose CloudFormation equivalents that often span pages. This approach leverages Node.js’s event-driven nature, allowing developers to define dependencies declaratively while integrating seamlessly with existing application codebases.
One of the CDK’s strengths lies in its synthesis process, where high-level definitions compile into CloudFormation templates during the “synth” phase. This generated assembly includes not just templates but also ancillary artifacts, such as bundled Docker images for Lambda deployments. For Node.js practitioners, this means unit testing infrastructure alongside application logic—employing Jest for snapshot validation of synthesized outputs—without ever leaving the familiar ecosystem. Colin’s demonstration underscores how such integration reduces context-switching, enabling rapid iteration on cloud-native designs like serverless APIs or data pipelines.
Moreover, the CDK’s asset management handles local files and images destined for S3 or ECR, necessitating a one-time bootstrapping per environment. This setup deploys a dedicated toolkit stack, complete with storage buckets and IAM roles, ensuring secure asset uploads. While incurring nominal AWS charges, it streamlines workflows, as evidenced by Colin’s walkthrough of provisioning a static website: a few constructs deploy a public-read bucket, sync local assets, and expose the site via a custom domain—potentially augmented with Route 53 for DNS or CloudFront for edge caching.
Navigating Deployment Cycles and Best Practices
Deployment via the CDK CLI mirrors npm workflows, with commands like “cdk deploy” orchestrating updates intelligently, applying only deltas to minimize disruption. Colin highlights the CLI’s versatility—listing stacks with “cdk ls,” diffing changes via “cdk diff,” or injecting runtime context for dynamic configurations—positioning it as an extension of Node.js tooling. For cleanup, “cdk destroy” reverses provisions, though manual verification in the AWS console is advisable, given occasional bootstrap remnants.
Colin wraps up by addressing adoption barriers, noting the CDK’s maturity since its 2019 general availability and tackling the vendor lock-in concern, which he considers minor given AWS’s ubiquity among cloud-native developers. Drawing from a Cloud Native Computing Foundation survey, he points to JavaScript’s dominance in server-side environments and AWS’s 62% market share, arguing that the CDK aligns perfectly with Node.js’s ethos of unified tooling across frontend, backend, and operations.
Through these insights, Colin not only demystifies infrastructure provisioning but also inspires Node.js developers to embrace declarative coding for resilient, observable systems. Whether scaling monoliths to microservices or experimenting with ephemeral environments, the CDK emerges as a pivotal ally in modern cloud engineering.
[KotlinConf2019] Exploring the Power of Kotlin/JS
Sebastian Aigner, a developer advocate at JetBrains, captivated KotlinConf2019 with his deep dive into Kotlin/JS, the JavaScript target for Kotlin. With a passion for web development, Sebastian showcased how recent advancements make Kotlin/JS a compelling choice for building web applications. From streamlined tooling to seamless JavaScript interoperability, he outlined the current state and future potential of Kotlin/JS, inspiring both newcomers and seasoned developers to leverage Kotlin’s paradigms in the browser.
Simplifying Development with the New Gradle Plugin
Kotlin/JS has evolved significantly, with the new Kotlin/JS Gradle plugin emerging as the cornerstone for browser and Node.js development. Sebastian explained that this plugin unifies previously fragmented approaches, replacing deprecated plugins like kotlin2js and kotlin-frontend. Its uniform Gradle DSL simplifies project setup, offering sensible defaults for Webpack bundling without requiring extensive configuration. For developers transitioning to multi-platform projects, the plugin’s compatibility with the Kotlin multi-platform DSL minimizes changes, enabling seamless integration of additional targets. By automating JavaScript environment setup, including yarn and package.json, the plugin empowers Kotlin developers to focus on coding rather than managing complex JavaScript tooling.
Mastering Dependency Management with npm
The JavaScript ecosystem, with over a million npm packages, offers unparalleled flexibility, and Kotlin/JS integrates effortlessly with this vast library. Sebastian highlighted how the Gradle plugin manages npm dependencies directly, automatically updating package.json when dependencies like React or styled-components are added. This eliminates the need for separate JavaScript environment setup, saving time, especially on non-standard platforms like Windows. Developers can import Kotlin libraries (e.g., coroutines, serialization) alongside JavaScript packages, with Gradle handling the JavaScript-specific versions. This unified approach bridges the gap between Kotlin’s structured ecosystem and JavaScript’s dynamic world, making dependency management intuitive even for those new to JavaScript.
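A build script for such a project can be sketched with the plugin’s Gradle Kotlin DSL. This is a minimal, illustrative configuration only: the plugin version and npm package versions are assumptions, and the exact DSL shape shifted across Kotlin 1.3.x releases:

```kotlin
// build.gradle.kts — illustrative Kotlin/JS project using the unified plugin.
plugins {
    kotlin("js") version "1.3.61"  // version is an example for the era of the talk
}

repositories {
    mavenCentral()
}

kotlin {
    target {
        browser()  // Webpack bundling with sensible defaults
    }
    sourceSets["main"].dependencies {
        implementation(kotlin("stdlib-js"))
        // npm dependencies declared here; Gradle keeps package.json in sync
        implementation(npm("react", "16.12.0"))
        implementation(npm("styled-components", "4.4.1"))
    }
}
```

With this in place, `./gradlew browserRun` serves the app and `./gradlew build` produces the bundled output, with no hand-maintained package.json.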
Bridging Kotlin and TypeScript with Dukat
Interoperating with JavaScript’s dynamic typing can be challenging, but Sebastian introduced Dukat, an experimental tool that converts TypeScript declaration files into Kotlin external declarations. By leveraging TypeScript’s de facto standard for type definitions, Dukat enables type-safe access to npm packages, such as left-pad or react-minimal-pie-chart. While manual external declarations require tedious annotation, Dukat automates this process, generating headers for packages with TypeScript support or community-contributed definitions. Sebastian encouraged early adoption to provide feedback, noting that Dukat already powers browser and Node.js API wrappers. This tool promises to simplify integration with JavaScript libraries, reducing the friction of crossing the static-dynamic typing divide.
Enhancing Testing and Debugging with Source Maps
Testing and debugging are critical for robust applications, and Kotlin/JS delivers with integrated tools. Sebastian demonstrated how the Gradle plugin supports platform-specific test runners like Karma, allowing tests to run across browsers (e.g., Firefox, headless Chrome). Source maps, automatically generated since Kotlin 1.3.60, provide detailed stack traces for Node.js and interactive debugging in browser DevTools. Developers can set breakpoints in Kotlin code, inspect variables, and trace errors directly in Chrome’s console, as shown in Sebastian’s pong game demo. Gradle test reports further enhance diagnostics, offering HTML-based insights into test failures, making Kotlin/JS development as robust as its JVM counterpart.
Optimizing with the IR Backend
The upcoming Intermediate Representation (IR) backend marks a significant leap for Kotlin/JS. Sebastian outlined its benefits, including aggressive code size optimizations through dead code elimination. Unlike the current backend, which may ship the entire standard library, the IR backend, combined with Google Closure Compiler, reduces zipped file sizes dramatically—down to 30 KB from 3.9 MB in some cases. Faster compilation speeds, especially for incremental builds, enhance developer productivity, particularly in continuous build scenarios with Webpack’s dev server. The IR backend also supports platform-agnostic compiler plugins, simplifying multi-platform development. Sebastian noted that pre-alpha IR support in Kotlin 1.3.70 requires manual exports due to its closed-world assumption, urging developers to explore early releases.
Looking Ahead: WebAssembly and Framework Support
Sebastian concluded with a glimpse into Kotlin/JS’s future, highlighting potential support for ECMAScript 6 modules and frameworks like Angular and Vue.js. While JetBrains provides React wrappers, extending first-class support to other frameworks requires addressing their unique tooling and compilers. The IR backend also opens doors to WebAssembly, enabling Kotlin to target browsers more efficiently. Though no timelines were promised, these explorations reflect JetBrains’ commitment to aligning Kotlin/JS with modern web trends. Sebastian’s call to action—trying the Code Quiz app at the Kotlin booth and contributing to Dukat—emphasized community involvement in shaping Kotlin/JS’s evolution.
[KotlinConf2019] Simplifying Async APIs with Kotlin Coroutines
Tom Hanley, a senior software engineer at Toast, enthralled KotlinConf2019 with a case study on using Kotlin coroutines to tame a complex asynchronous API for an Android card reader. Drawing from his work integrating a third-party USB-connected card reader, Tom shared how coroutines transformed callback-heavy code into clean, sequential logic. His practical insights on error handling, debugging, and testing offered a roadmap for developers grappling with legacy async APIs.
Escaping Callback Hell
Asynchronous APIs often lead to callback hell, where nested callbacks make code unreadable and error-prone. Tom described the challenge of working with a third-party Android SDK for a card reader, which relied on void methods and listener interfaces for data retrieval. A naive implementation to fetch device info involved mutable variables and blocking loops, risking infinite loops and thread-safety issues. Such approaches, common with legacy APIs, complicate maintenance and scalability. Tom emphasized that coroutines offer a lifeline, allowing developers to wrap messy APIs in a clean, non-blocking interface that reads like sequential code, preserving the benefits of asynchrony.
Wrapping the Card Reader API with Coroutines
To streamline the card reader API, Tom developed a Kotlin extension that replaced callback-based interactions with suspend functions. The original API required a controller to send commands and a listener to receive asynchronous responses, such as device info or errors. By introducing a suspend getDeviceInfo function, Tom enabled callers to await results directly. This extension ensured referential transparency, where functions clearly return their results, and allowed callers to control asynchrony—waiting for completion or running tasks concurrently. The approach also enforced sequential execution for dependent operations, critical for the card reader’s connection and transaction workflows.
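The wrapping technique can be sketched with the standard library’s `suspendCoroutine`. The controller and listener names below are hypothetical stand-ins for the third-party SDK (whose real API is not public in the talk summary), and the `runSync` driver is only there so the sketch runs without kotlinx.coroutines; in production the suspend function would be called from a coroutine scope:

```kotlin
import kotlin.coroutines.Continuation
import kotlin.coroutines.EmptyCoroutineContext
import kotlin.coroutines.resume
import kotlin.coroutines.resumeWithException
import kotlin.coroutines.startCoroutine
import kotlin.coroutines.suspendCoroutine

// Hypothetical callback-style SDK surface, modeled on the talk's description.
interface DeviceInfoListener {
    fun onDeviceInfo(info: String)
    fun onError(e: Exception)
}

class CardReaderController {
    private var listener: DeviceInfoListener? = null
    fun setListener(l: DeviceInfoListener) { listener = l }
    fun requestDeviceInfo() {
        // A real SDK would respond asynchronously over USB; here we answer inline.
        listener?.onDeviceInfo("reader-fw-1.2.3")
    }
}

// Suspend wrapper: callers simply await the result instead of wiring a listener.
suspend fun CardReaderController.getDeviceInfo(): String =
    suspendCoroutine { cont ->
        setListener(object : DeviceInfoListener {
            override fun onDeviceInfo(info: String) = cont.resume(info)
            override fun onError(e: Exception) = cont.resumeWithException(e)
        })
        requestDeviceInfo()
    }

// Minimal driver for suspend blocks that complete synchronously (demo only).
fun <T> runSync(block: suspend () -> T): T {
    var outcome: Result<T>? = null
    block.startCoroutine(Continuation(EmptyCoroutineContext) { outcome = it })
    return outcome!!.getOrThrow()
}

fun main() {
    println(runSync { CardReaderController().getDeviceInfo() })
}
```

The caller now reads sequentially (`val info = controller.getDeviceInfo()`), while the asynchrony is hidden behind the suspension point.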
Communicating with Channels
Effective inter-thread communication was key to the extension’s success. Rather than relying on shared mutable variables, Tom used Kotlin channels to pass events and errors between coroutines. When the listener received device info, it sent the data to a public channel; errors were handled similarly. The controller extension used a select expression to await the first event from either the device info or error channel, throwing errors or returning results as needed. Channels, with their suspending send and receive operations, provided a thread-safe alternative to blocking queues. Despite their experimental status in Kotlin 1.3, Tom found them production-ready, supported by smooth IDE migration paths.
Mastering Exception Handling
Exception handling in coroutines requires careful design, as Tom learned through structured concurrency, introduced in Kotlin 1.3. This feature enforces a parent-child relationship, where cancelling a parent coroutine cancels its children. However, Tom discovered that a child’s failure propagates upward, potentially crashing the app in launch coroutines if uncaught. For async coroutines, exceptions are deferred until await is called, allowing try-catch blocks to handle them. To isolate failures, Tom used a SupervisorJob to prevent child cancellations from affecting siblings and coroutineScope blocks to group all-or-nothing operations, ensuring robust error recovery for the card reader’s unreliable USB connection.
Debugging and Testing Coroutines
Debugging coroutines posed initial challenges, but Tom leveraged powerful tools to simplify the process. Enabling debug mode via system properties assigns unique names to coroutines, appending them to thread names and enhancing stack traces with creation details. The debug agent, a JVM tool released post-project, tracks live coroutines and dumps their state, aiding deadlock diagnosis. For testing, Tom wrapped suspend functions in runBlocking blocks, enabling straightforward unit tests. He advised using launch and async only when concurrency is needed, marking functions as suspend to simplify testing by allowing callers to control execution context.
Moving Beyond Exceptions with Sealed Classes
Reflecting on exception handling’s complexity, Tom shifted to sealed classes for error handling. Initially, errors from the card reader were wrapped in exceptions, but frequent USB failures made catching them cumbersome. Exceptions also obscured control flow and hindered functional purity. Inspired by arguments likening exceptions to goto statements, Tom adopted domain-specific sealed classes (e.g., Success, Failure, Timeout) for each controller command’s result. This approach enforced explicit error handling via when statements, improved readability, and allowed result types to evolve independently, aligning with the card reader’s diverse error scenarios.
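The pattern can be sketched as follows. The type and variant names are illustrative, not Toast’s actual domain model, but the exhaustiveness guarantee is the point: a `when` over a sealed class forces every caller to handle every outcome, unlike an exception they might forget to catch:

```kotlin
// Illustrative result hierarchy for a single card-reader command.
sealed class ReadCardResult {
    data class Success(val cardNumber: String) : ReadCardResult()
    data class Failure(val reason: String) : ReadCardResult()
    object Timeout : ReadCardResult()
}

// The compiler rejects this `when` if a variant is left unhandled,
// making error handling explicit in the control flow.
fun describe(result: ReadCardResult): String = when (result) {
    is ReadCardResult.Success -> "card ${result.cardNumber} read"
    is ReadCardResult.Failure -> "read failed: ${result.reason}"
    ReadCardResult.Timeout -> "reader timed out"
}

fun main() {
    println(describe(ReadCardResult.Timeout))
}
```

Because each command gets its own sealed hierarchy, a new failure mode (say, a low-battery state) can be added to one result type without disturbing the others.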
[KotlinConf2019] Kotlin Coroutines: Mastering Cancellation and Exceptions with Florina Muntenescu & Manuel Vivo
Kotlin coroutines have revolutionized asynchronous programming on Android and other platforms, offering a way to write non-blocking code in a sequential style. However, as Florina Muntenescu and Manuel Vivo, both then on the Android developer relations team at Google, pointed out at KotlinConf 2019, the “happy path” is only part of the story. Their talk, “Coroutines! Gotta catch ’em all!”, delved into the critical aspects of coroutine cancellation and exception handling, providing developers with the knowledge to build robust and resilient asynchronous applications.
Florina and Manuel highlighted a common scenario: coroutines work perfectly until an error occurs, a timeout is reached, or a coroutine needs to be cancelled. Understanding how to manage these situations—where to handle errors, how different scopes affect error propagation, and the impact of launch vs. async—is crucial for a good user experience and stable application behavior.
Structured Concurrency and Scope Management
A fundamental concept in Kotlin coroutines is structured concurrency, which ensures that coroutines operate within a defined scope, tying their lifecycle to that scope. Florina Muntenescu and Manuel Vivo emphasized the importance of choosing the right CoroutineScope for different situations. The scope dictates how coroutines are managed, particularly concerning cancellation and how exceptions are propagated.
They discussed:
* CoroutineScope: The basic building block for managing coroutines.
* Job and SupervisorJob: A Job in a coroutine’s context is responsible for its lifecycle. A key distinction is how they handle failures of child coroutines. A standard Job will cancel all its children and itself if one child fails. In contrast, a SupervisorJob allows a child coroutine to fail without cancelling its siblings or the supervisor job itself. This is critical for UI components or services where one failed task shouldn’t bring down unrelated operations. The advice often given is to use SupervisorJob when you want to isolate failures among children.
* Scope Hierarchy: How scopes can be nested and how cancellation or failure in one part of the hierarchy affects others. Understanding this is key to preventing unintended cancellations or unhandled exceptions.
Cancellation: Graceful Termination of Coroutines
Effective cancellation is vital for resource management and preventing memory leaks, especially in UI applications where operations might become irrelevant if the user navigates away. Florina and Manuel explained how coroutines support cooperative cancellation. This means that suspending functions in the kotlinx.coroutines library are generally cancellable; they check for cancellation requests and throw a CancellationException when one is detected.
Key points regarding cancellation included:
* Calling job.cancel() initiates the cancellation of a coroutine and its children.
* Coroutines must cooperate with cancellation by periodically checking isActive or using cancellable suspending functions. CPU-bound work in a loop that doesn’t check for cancellation might not stop as expected.
* CancellationException is considered a normal way for a coroutine to complete due to cancellation and is typically not logged as an unhandled error by default exception handlers.
Exception Handling: Catching Them All
Handling exceptions correctly in asynchronous code can be tricky. Florina and Manuel’s talk aimed to clarify how exceptions propagate in coroutines and how they can be caught.
They covered:
* launch vs. async:
* With launch, exceptions are treated like uncaught exceptions in a thread—they propagate up the job hierarchy. If not handled, they can crash the application (depending on the root scope’s context and CoroutineExceptionHandler).
* With async, exceptions are deferred. They are stored within the Deferred result and are only thrown when await() is called on that Deferred. This means if await() is never called, the exception might go unnoticed unless explicitly handled.
* CoroutineExceptionHandler: This context element can be installed in a CoroutineScope to act as a global handler for uncaught exceptions within coroutines started by launch in that scope. It allows for centralized error logging or recovery logic. They showed examples of how and where to install this handler effectively, for example, in the root coroutine or as a direct child of a SupervisorJob to catch exceptions from its children.
* try-catch blocks: Standard try-catch blocks can be used within a coroutine to handle exceptions locally, just like in synchronous code. This is often the preferred way to handle expected exceptions related to specific operations.
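The contrast between the two builders can be shown in a short sketch (names and messages are illustrative; the SupervisorJob keeps one failure from cancelling everything else):

```kotlin
import kotlinx.coroutines.*

fun main() = runBlocking {
    // async: the exception is stored in the Deferred and rethrown only at await()
    val deferred = async(SupervisorJob()) { error("boom") }
    try {
        deferred.await()
    } catch (e: IllegalStateException) {
        println("caught from async: ${e.message}")   // prints: caught from async: boom
    }

    // launch: the exception propagates up the job hierarchy; a
    // CoroutineExceptionHandler installed on the scope receives it
    val handler = CoroutineExceptionHandler { _, e -> println("handled: ${e.message}") }
    val scope = CoroutineScope(SupervisorJob() + handler)
    scope.launch { error("bang") }.join()            // prints: handled: bang
}
```

Note that if `await()` were never called in the first half, the `boom` exception would silently sit inside the Deferred, which is exactly the pitfall the speakers warned about.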
The speakers stressed that uncaught exceptions will always propagate, so it’s crucial to “catch ’em all” to avoid unexpected behavior or crashes. Their presentation aimed to provide clear patterns and best practices to ensure that developers could confidently manage both cancellation and exceptions, leading to more robust and user-friendly Kotlin applications.
[KotlinConf2019] Kotless: A Kotlin-Native Approach to Serverless with Vladislav Tankov
Serverless computing has revolutionized how applications are deployed and scaled, but it often comes with its own set of complexities, including managing deployment DSLs like Terraform or CloudFormation. Vladislav Tankov, then a Software Developer at JetBrains, introduced Kotless at KotlinConf 2019 as a Kotlin Serverless Framework designed to simplify this landscape. Kotless aims to eliminate the need for external deployment DSLs by allowing developers to define serverless applications—including REST APIs and event handling—directly within their Kotlin code using familiar annotations. The project can be found on GitHub at github.com/JetBrains/kotless.
Vladislav’s presentation provided an overview of the Kotless Client API, demonstrated its use with a simple example, and delved into the architecture and design concepts behind its code-to-deployment pipeline. The core promise of Kotless is to make serverless computations easily understandable for anyone familiar with event-based architectures, particularly those comfortable with JAX-RS-like annotations.
Simplifying Serverless Deployment with Kotlin Annotations
The primary innovation of Kotless, as highlighted by Vladislav Tankov, is its ability to interpret Kotlin code and annotations to automatically generate the necessary deployment configurations for cloud providers like AWS (initially). Instead of writing separate configuration files in YAML or other DSLs, developers can define their serverless functions, API gateways, permissions, and scheduled events using Kotlin annotations directly on their functions and classes.
For example, creating a REST API endpoint could be as simple as annotating a Kotlin function with @Get("/mypath"). Kotless then parses these annotations during the build process and generates the required infrastructure definitions, deploys the lambdas, and configures the API Gateway. This approach significantly reduces boilerplate and the cognitive load associated with learning and maintaining separate infrastructure-as-code tools. Vladislav emphasized that a developer only needs familiarity with these annotations to create and deploy a serverless REST API application.
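The annotation-driven style can be illustrated with a self-contained sketch. To stay runnable without the framework, this defines its own @Get annotation and a toy runtime "processor"; the real Kotless annotations live in the io.kotless packages and are interpreted at build time, not via runtime reflection:

```kotlin
// Hypothetical stand-in for a Kotless-style routing annotation (illustration only)
@Retention(AnnotationRetention.RUNTIME)
@Target(AnnotationTarget.FUNCTION)
annotation class Get(val path: String)

object Routes {
    @Get("/hello")
    fun hello(): String = "Hello from a lambda-like handler"

    @Get("/status")
    fun status(): String = "ok"
}

// Toy "processor": scan annotated functions and build a route table,
// roughly the kind of mapping a build-time annotation processor derives
fun routeTable(): Map<String, () -> String> =
    Routes::class.java.declaredMethods
        .mapNotNull { m ->
            m.getAnnotation(Get::class.java)?.let { g ->
                g.path to { m.invoke(Routes) as String }
            }
        }
        .toMap()

fun main() {
    val routes = routeTable()
    println(routes.getValue("/status")())   // prints: ok
}
```

In Kotless itself, the equivalent route table is not built at runtime: the plugin reads the annotations during compilation and emits the corresponding API Gateway and Lambda definitions.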
Architecture and Code-to-Deployment Pipeline
Vladislav Tankov provided insights into the inner workings of Kotless, explaining its architecture and the pipeline that transforms Kotlin code into a deployed serverless application. This process generally involves:
1. Annotation Processing: During compilation, Kotless processes the special annotations in the Kotlin code to understand the desired serverless architecture (e.g., API routes, event triggers, scheduled tasks).
2. Terraform Generation (Initially): Kotless then generates the necessary infrastructure-as-code configurations (initially using Terraform as a backend for AWS) based on these processed annotations. This includes defining Lambda functions, API Gateway resources, IAM roles, and event source mappings.
3. Deployment: Kotless handles the deployment of these generated configurations and the application code to the target cloud provider.
He also touched upon optimizations built into Kotless, such as automatic warming ("autowarming") of lambdas to reduce cold starts and optimizing lambdas by size. This focus on performance and ease of use is central to Kotless’s philosophy. The framework aims to abstract away the underlying complexities of serverless platforms, allowing developers to concentrate on their application logic.
Future Directions and Multiplatform Aspirations
Looking ahead, Vladislav Tankov discussed the future roadmap for Kotless, including ambitious plans for supporting Kotlin Multiplatform Projects (MPP). This would allow developers to choose different runtimes for their lambdas—JVM, JavaScript, or even Kotlin/Native—depending on the task and performance requirements. Supporting JavaScript lambdas, for example, could open up compatibility with platforms like Google Cloud Platform more broadly, which at the time had better support for JavaScript runtimes than JVM for serverless functions.
Other planned enhancements included extended event handling for custom events on AWS and other platforms, and continued work on performance optimizations. The vision for Kotless was to provide a comprehensive and flexible serverless solution for Kotlin developers, empowering them to build efficient and scalable cloud-native applications with minimal friction. Vladislav encouraged attendees to try Kotless and contribute to its development, positioning it as a community-driven effort to improve the Kotlin serverless experience.
[KotlinConf2019] Desktop Development with TornadoFX: Kotlinizing JavaFX with Liz Keogh
JavaFX, the successor to Swing for creating rich client applications in Java, offers a modern approach to desktop UI development with a cleaner separation of function and style. However, working directly with JavaFX can sometimes involve verbosity and untyped stylesheets. Liz Keogh, a renowned Lean and Agile consultant and a core member of the BDD community, presented a compelling alternative at KotlinConf 2019: TornadoFX. Her talk explored how TornadoFX, a Kotlin wrapper around JavaFX, simplifies desktop development with type-safe builders, stylesheets, and the syntactic sugar Kotlin developers appreciate. Liz Keogh’s consultancy work can be found at lunivore.com.
TornadoFX aims to make JavaFX development more idiomatic and enjoyable for Kotlin developers. It leverages Kotlin’s powerful features to reduce boilerplate and introduce modern development patterns like dependency-injected MVC/MVP. The official website for the framework is tornadofx.io.
Simplifying JavaFX with Kotlin’s Elegance
Liz Keogh’s session highlighted how TornadoFX enhances the JavaFX experience. Key advantages include:
* Type-Safe Builders: Instead of manually instantiating and configuring UI components in JavaFX, TornadoFX provides type-safe builders. This allows for a more declarative and concise way to define UI layouts, reducing the chance of runtime errors and improving code readability.
* Type-Safe Stylesheets: JavaFX typically uses CSS for styling, which is not type-safe and can be cumbersome. TornadoFX introduces type-safe CSS, allowing styles to be defined directly in Kotlin code with autocompletion and compile-time checking. This makes styling more robust and easier to manage.
* Dependency Injection and Architectural Patterns: TornadoFX incorporates support for architectural patterns like Model-View-Controller (MVC) and Model-View-Presenter (MVP) with built-in dependency injection. This helps in structuring desktop applications in a clean, maintainable, and testable way.
* Kotlin’s Syntactic Sugar: The framework makes full use of Kotlin’s features, such as extension functions, lambdas, and DSL capabilities, to create a fluent and expressive API for building UIs.
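To give a feel for the type-safe builder pattern behind the first bullet, here is a self-contained sketch. It is not TornadoFX's actual API; real TornadoFX builders such as `vbox { button("...") { action { ... } } }` produce live JavaFX nodes, whereas this miniature DSL just renders a text tree:

```kotlin
// Minimal type-safe builder in the spirit of TornadoFX's UI DSL (illustration only)
class Node(val kind: String, val text: String = "") {
    val children = mutableListOf<Node>()
    fun render(indent: String = ""): String =
        indent + kind + (if (text.isNotEmpty()) "(\"$text\")" else "") +
            children.joinToString("") { "\n" + it.render("$indent  ") }
}

fun vbox(build: Node.() -> Unit): Node = Node("vbox").apply(build)
fun Node.label(text: String) { children += Node("label", text) }
fun Node.button(text: String, build: Node.() -> Unit = {}) {
    children += Node("button", text).apply(build)
}

fun main() {
    // The compiler checks which children are legal where; typos in component
    // names or attributes fail at compile time instead of at runtime
    val ui = vbox {
        label("Don't Get Mad!")
        button("Roll the dice")
    }
    println(ui.render())
}
```

The same shape is what makes TornadoFX layouts read declaratively while remaining fully checked by the Kotlin compiler.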
Liz demonstrated these features through practical examples, showing how quickly developers can create sophisticated desktop applications with significantly less code compared to plain JavaFX.
Practical Application: Building a Desktop Game
To illustrate the power and ease of use of TornadoFX, Liz Keogh built a desktop version of the game “Don’t Get Mad!” (a variant of Ludo/Pachisi) live during her presentation. This hands-on approach allowed attendees to see TornadoFX in action, from setting up the project to building the UI, implementing game logic, and handling user interactions.
She showcased how to:
* Define views and components using TornadoFX’s builders.
* Apply styles using type-safe CSS.
* Manage application state and events.
* Integrate game logic written in Kotlin.
While focusing on TornadoFX, Liz also touched upon broader software development principles, such as the importance of automated testing. She candidly mentioned her preference for unit tests and the need for more in her demo project due to deadline constraints, reminding attendees about the test pyramid. This practical demonstration of building a game provided a tangible example of what’s possible with TornadoFX and how it can accelerate desktop development.
TornadoFX and the Kotlin Ecosystem
Liz Keogh’s presentation positioned TornadoFX as a valuable addition to the Kotlin ecosystem, particularly for developers looking to build desktop applications. By providing a more Kotlin-idiomatic layer over JavaFX, TornadoFX lowers the barrier to entry for desktop development and makes it a more attractive option for the Kotlin community.
She also mentioned another personal project, “K Golf” (Kotlin Game of Life), a JavaFX application she uses for teaching Kotlin, hinting at her passion for both Kotlin and creating engaging learning experiences. Her talk inspired many Kotlin developers to explore TornadoFX for their desktop application needs, showcasing it as a productive and enjoyable way to leverage their Kotlin skills in a new domain. The session underscored the theme of Kotlin’s versatility, extending its reach effectively into desktop development.
[SpringIO2019] Cloud Native Spring Boot Admin by Johannes Edmeier
At Spring I/O 2019 in Barcelona, Johannes Edmeier, a seasoned developer from Germany, captivated attendees with his deep dive into managing Spring Boot applications in Kubernetes environments using Spring Boot Admin. As the maintainer of this open-source project, Johannes shared practical insights into integrating Spring Boot Admin with Kubernetes via the Spring Cloud Kubernetes project. His session illuminated how developers can gain operational visibility and control without altering application code, making it a must-know tool for cloud-native ecosystems. This post explores Johannes’ approach, highlighting its relevance for modern DevOps.
Understanding Spring Boot Admin
Spring Boot Admin, a four-and-a-half-year-old project boasting over 17,000 GitHub stars, is an Apache-licensed tool designed to monitor and manage Spring Boot applications. Johannes, employed by ConSol, a German consultancy, dedicates 20% of his work time—and significant personal hours—to its development. The tool provides a user-friendly interface to visualize metrics, logs, and runtime configurations, addressing the limitations of basic monitoring solutions like plain metrics or logs. For Kubernetes-deployed applications, it leverages Spring Boot Actuator endpoints to deliver comprehensive insights without requiring code changes or new container images.
The challenge in cloud-native environments lies in achieving visibility into distributed systems. Johannes emphasized that Kubernetes, a common denominator across cloud vendors, demands robust monitoring tools. Spring Boot Admin meets this need by integrating with Spring Cloud Kubernetes, enabling service discovery and dynamic updates as services scale or fail. This synergy ensures developers can manage applications seamlessly, even in complex, dynamic clusters.
Setting Up Spring Boot Admin on Kubernetes
Configuring Spring Boot Admin for Kubernetes is straightforward, as Johannes demonstrated. Developers start by including the Spring Boot Admin starter server dependency, which bundles the UI and REST endpoints, and the Spring Cloud Kubernetes starter for service discovery. These dependencies, managed via Spring Cloud BOM, simplify setup. Johannes highlighted the importance of enabling the admin server, discovery client, and scheduling annotations in the application class to ensure health checks and service updates function correctly. A common pitfall, recently addressed in the documentation, is forgetting to enable scheduling, which prevents dynamic service updates.
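The application class Johannes described can be sketched as follows. This is a configuration sketch rather than a runnable sample: it assumes the spring-boot-admin-starter-server and Spring Cloud Kubernetes discovery dependencies are on the classpath.

```kotlin
// Sketch only: requires spring-boot-admin-starter-server and
// spring-cloud-starter-kubernetes dependencies (managed via the Spring Cloud BOM)
import de.codecentric.boot.admin.server.config.EnableAdminServer
import org.springframework.boot.autoconfigure.SpringBootApplication
import org.springframework.boot.runApplication
import org.springframework.cloud.client.discovery.EnableDiscoveryClient
import org.springframework.scheduling.annotation.EnableScheduling

@SpringBootApplication
@EnableAdminServer        // serves the Spring Boot Admin UI and REST endpoints
@EnableDiscoveryClient    // discovers services via Spring Cloud Kubernetes
@EnableScheduling         // needed for periodic health checks and dynamic service updates
class AdminApplication

fun main(args: Array<String>) {
    runApplication<AdminApplication>(*args)
}
```

Forgetting @EnableScheduling is the pitfall Johannes called out: the admin server starts, but the service list never refreshes as pods come and go.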
For Kubernetes deployment, Johannes pre-built a Docker image and configured a service account with role-based access control (RBAC) to read pod, service, and endpoint data. This minimal RBAC setup avoids unnecessary permissions, enhancing security. An ingress and service complete the deployment, allowing access to the Spring Boot Admin UI. Johannes showcased a wallboard view, ideal for team dashboards, and demonstrated real-time monitoring by simulating a service failure, which triggered a yellow “restricted” status and subsequent recovery as Kubernetes rescheduled the pod.
Enhancing Monitoring with Actuator Endpoints
Spring Boot Admin’s power lies in its integration with Spring Boot Actuator, which exposes endpoints like health, info, metrics, and more. By default, only health and info endpoints are exposed, but Johannes showed how to expose all endpoints using a Kubernetes environment variable (management.endpoints.web.exposure.include=*). This unlocks detailed views for metrics, environment properties, beans, and scheduled tasks. For instance, the health endpoint provides granular details when set to “always” show details, revealing custom health indicators like database connectivity.
Johannes also highlighted advanced features, such as rendering Swagger UI links via the info endpoint’s properties, simplifying access to API documentation. For security, he recommended isolating Actuator endpoints on a separate management port (e.g., 9080) to prevent public exposure via the main ingress. Spring Cloud Kubernetes facilitates this by allowing developers to specify the management port for discovery, ensuring Spring Boot Admin accesses Actuator endpoints securely while keeping them hidden from external traffic.
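The settings mentioned above can be combined in a configuration fragment along these lines (the property names are standard Spring Boot Actuator properties; the port value is illustrative):

```properties
# Expose all Actuator endpoints (by default only health and info are exposed)
management.endpoints.web.exposure.include=*
# Always include component details (e.g. database connectivity) in the health endpoint
management.endpoint.health.show-details=always
# Serve Actuator on a separate management port, kept off the public ingress
management.server.port=9080
```

In Kubernetes, the first property can equally be injected as the environment variable `MANAGEMENT_ENDPOINTS_WEB_EXPOSURE_INCLUDE`, which is the approach Johannes demonstrated.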
Customization and Security Considerations
Spring Boot Admin excels in customization, catering to specific monitoring needs. Johannes demonstrated how to add top-level links to external tools like Grafana or Kibana, or embed them as iframes, reducing the need to memorize URLs. For advanced use cases, developers can create custom views using Vue.js, as Johannes did to toggle application status (e.g., setting a service to “out of service”). This flexibility extends to notifications, supporting Slack, Microsoft Teams, and email via simple configurations, with a test SMTP server like MailHog for demos.
Security is a critical concern, as Spring Boot Admin proxies requests to Actuator endpoints. Johannes cautioned against exposing the admin server publicly, citing an unsecured instance found via Google. He outlined three security approaches: no authentication (not recommended), session-based authentication with cookies, or OAuth2 with token forwarding, where the target application validates access. A service account handles background health checks, ensuring minimal permissions. For Keycloak integration, Johannes referenced a blog post by his colleague Tomas, showcasing Spring Boot Admin’s compatibility with modern security frameworks.
Runtime Management and Future Enhancements
Spring Boot Admin empowers runtime management, a standout feature Johannes showcased. The loggers endpoint allows dynamic adjustment of logging levels, with a forthcoming feature to set levels across all instances simultaneously. Other endpoints, like Jolokia for JMX interaction, enable runtime reconfiguration but require caution due to their power. Heap and thread dump endpoints aid debugging but risk exposing sensitive data or overwhelming resources. Johannes also previewed upcoming features, like minimum instance checks, enhancing Spring Boot Admin’s robustness in production.
For Johannes, Spring Boot Admin is more than a monitoring tool—it’s a platform for operational excellence. By integrating seamlessly with Kubernetes and Spring Boot Actuator, it addresses the complexities of cloud-native applications, empowering developers to focus on delivering value. His session at Spring I/O 2019 underscores its indispensable role in modern software ecosystems.
Kotlin Native Concurrency Explained by Kevin Galligan
Navigating Kotlin/Native’s Concurrency Model
At KotlinConf 2019 in Copenhagen, Kevin Galligan, a partner at Touchlab with over 20 years of software development experience, delivered a 39-minute talk on Kotlin/Native’s concurrency model. Kevin Galligan explored the restrictive yet logical rules governing state and concurrency in Kotlin/Native, addressing their controversy among JVM and mobile developers. He explained the model’s mechanics, its rationale, and best practices for multiplatform development. This post covers four key themes: the core rules of Kotlin/Native concurrency, the role of workers, the impact of freezing state, and the introduction of multi-threaded coroutines.
Core Rules of Kotlin/Native Concurrency
Kevin Galligan began by outlining Kotlin/Native’s two fundamental concurrency rules: mutable state is confined to a single thread, and immutable state can be shared across multiple threads. These rules, known as thread confinement, mirror mobile development practices where UI updates are restricted to the main thread. In Kotlin/Native, the runtime enforces these constraints, preventing mutable state changes from background threads to avoid race conditions. Kevin emphasized that while these rules feel restrictive compared to the JVM’s shared-memory model, they align with modern platforms like Go and Rust, which also limit unrestricted shared state.
The rationale behind this model, as Kevin explained, is to reduce concurrency errors by design. Unlike the JVM, which trusts developers to manage synchronization, Kotlin/Native’s runtime verifies state access at runtime, crashing if rules are violated. This strictness, though initially frustrating, encourages intentional state management. Kevin noted that after a year of working with Kotlin/Native, he found the model simple and effective, provided developers embrace its constraints rather than fight them.
Workers as Concurrency Primitives
A central concept in Kevin’s talk was the Worker, a Kotlin/Native concurrency queue similar to Java’s ExecutorService or Android’s Handler and Looper. Workers manage a job queue processed by a private thread, ensuring thread confinement. Kevin illustrated how a Worker executes tasks via the execute function, which takes a producer function to verify state transfer between threads. The execute function supports safe and unsafe transfer modes, with Kevin strongly advising against the unsafe mode due to its bypassing of state checks.
Using a code example, Kevin demonstrated passing a data class to a Worker. The runtime freezes the data—making it immutable—to comply with concurrency rules, preventing illegal state transfers. He highlighted that while Worker is a core primitive, developers rarely use it directly, as higher-level abstractions like coroutines are preferred. However, understanding Worker is crucial for grasping Kotlin/Native’s concurrency mechanics, especially when debugging state-related errors like IllegalStateTransfer.
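A Worker round-trip along the lines Kevin showed looks roughly like this. Note this is Kotlin/Native-only code (it does not compile for the JVM target), and the payload values are illustrative:

```kotlin
import kotlin.native.concurrent.TransferMode
import kotlin.native.concurrent.Worker

data class Payload(val value: Int)

fun main() {
    val worker = Worker.start()
    // The producer lambda creates the value to transfer; with TransferMode.SAFE
    // the runtime verifies the object graph can legally cross the thread boundary
    val future = worker.execute(TransferMode.SAFE, { Payload(21) }) { payload ->
        payload.value * 2                 // runs on the worker's private thread
    }
    println(future.result)                // blocks until the job completes; prints 42
    worker.requestTermination().result    // shut the worker down cleanly
}
```

Because Payload is transferred safely, the runtime freezes it on the way across, which is exactly the behavior Kevin used to motivate the freezing discussion.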
Freezing State and Its Implications
Kevin Galligan delved into the concept of freezing, a runtime mechanism that designates objects as immutable for safe sharing across threads. Freezing is a one-way operation, recursively applying to an object and its references, with no unfreeze option. This ensures thread safety but introduces challenges, as frozen objects cannot be mutated, leading to InvalidMutabilityException errors if attempted.
In a practical example, Kevin showed how capturing mutable state in a background task can inadvertently freeze an entire object graph, causing runtime failures. He introduced tools like ensureNeverFrozen to debug unintended freezing and stressed intentional mutability—keeping mutable state local to one thread and transforming data into frozen copies for sharing. Kevin also discussed Atomic types, which allow limited mutation of frozen state, but cautioned against overusing them due to performance and memory issues. His experience at Touchlab revealed early missteps with global state and Atomics, leading to a shift toward confined state models.
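The freezing behavior and the ensureNeverFrozen debugging aid can be sketched as follows (again Kotlin/Native-only; the class and values are illustrative):

```kotlin
import kotlin.native.concurrent.ensureNeverFrozen
import kotlin.native.concurrent.freeze
import kotlin.native.concurrent.isFrozen

class Model { var counter = 0 }

fun main() {
    val shared = Model().freeze()     // one-way: recursively freezes the object graph
    println(shared.isFrozen)          // prints: true
    try {
        shared.counter = 1            // mutating frozen state is a runtime error
    } catch (e: Throwable) {
        println(e::class.simpleName)  // InvalidMutabilityException
    }

    val local = Model()
    local.ensureNeverFrozen()         // fail fast if this object is ever frozen,
                                      // e.g. by being captured in a frozen lambda
}
```

Calling ensureNeverFrozen on intentionally thread-local state turns a mysterious InvalidMutabilityException somewhere downstream into an immediate failure at the point of accidental capture.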
Multi-Threaded Coroutines and Future Directions
A significant update in Kevin’s talk was the introduction of multi-threaded coroutines, enabled by a draft pull request in 2019. Previously, Kotlin/Native coroutines were single-threaded, limiting concurrency and stunting library development. The new model allows coroutines to switch threads using dispatchers, with data passed between threads frozen to maintain strict mode. Kevin demonstrated replacing a custom background function with a coroutine-based approach, simplifying concurrency while adhering to state rules.
This development clarified the longevity of strict mode, countering speculation about a relaxed mode that would mimic JVM-style shared memory. Kevin noted that multi-threaded coroutines unblocked library development, citing projects like AtomicFu and SQLDelight. He also highlighted Touchlab’s Droidcon app, which adopted multi-threaded coroutines for production, showcasing their practical viability. Looking forward, Kevin anticipated increased community adoption and library growth in 2020, urging developers to explore the model despite its learning curve.
Conclusion
Kevin Galligan’s KotlinConf 2019 talk demystifies Kotlin/Native’s concurrency model, offering a clear path for developers navigating its strict rules. By embracing thread confinement, leveraging workers, managing frozen state, and adopting multi-threaded coroutines, developers can build robust multiplatform applications. This talk is a must for Kotlin/Native enthusiasts seeking to master concurrency in modern mobile development.
Links
- Watch the full talk on YouTube
- Touchlab
- American Express
- KotlinConf
- JetBrains
- Kotlin Website
- Kotlin/Native Repository
Hashtags: #KevinGalligan #KotlinNative #Concurrency #Touchlab #JetBrains #Multiplatform