Posts Tagged ‘Google’

[DevoxxPL2022] Challenges Running Planet-Wide Computer: Efficiency • Jacek Bzdak, Beata Strack

Jacek Bzdak and Beata Strack, software engineers at Google Poland, delivered an engaging session at Devoxx Poland 2022, exploring the intricacies of optimizing Google’s planet-scale computing infrastructure. Their talk focused on achieving efficiency in a distributed system spanning global data centers, emphasizing resource utilization, auto-scaling, and operational strategies. By sharing insights from Google’s internal cloud and Autopilot system, Jacek and Beata provided a blueprint for enhancing service performance while navigating the complexities of large-scale computing.

Defining Efficiency in a Global Fleet

Beata opened by framing Google’s data centers as a singular “planet-wide computer,” where efficiency translates to minimizing operational costs—servers, CPU, memory, data centers, and electricity. Key metrics like fleet-wide utilization, CPU/RAM allocation, and growth rate serve as proxies for these costs, though they are imperfect, often masking quality issues like inflated memory usage. Beata stressed that efficiency begins at the service level, where individual jobs must optimize resource consumption, and extends to the fleet through an ecosystem that maximizes resource sharing. This dual approach ensures that savings at the micro level scale globally, a principle applicable even to smaller organizations.

Auto-Scaling: Balancing Utilization and Reliability

Jacek, a member of Google’s Autopilot team, delved into auto-scaling, a critical mechanism for achieving high utilization without compromising reliability. Autopilot’s vertical scaling adjusts resource limits (CPU/memory) for fixed replicas, while horizontal scaling modifies replica counts. Jacek presented data from an Autopilot paper, showing that auto-scaled services maintain memory slack below 20% for median cases, compared to over 60% for manually managed services. Crucially, automation reduces outage risks by dynamically adjusting limits, as demonstrated in a real-world case where Autopilot preempted a memory-induced crash. However, auto-scaling introduces complexity, particularly feedback loops, where overzealous caching or load shedding can destabilize resource allocation, requiring careful integration with application-specific metrics.
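To make the trade-off concrete, here is a minimal, hypothetical sketch of a percentile-plus-headroom rule for vertical scaling. It is not Autopilot's actual algorithm (the published paper describes sliding windows of weighted percentiles and ML-based recommenders); it only illustrates how tracking recent usage keeps slack low while a safety margin guards against spikes.

```kotlin
import kotlin.math.ceil

// Hypothetical percentile-plus-headroom rule for vertical scaling. This is NOT
// Autopilot's real algorithm; it only shows how following recent usage keeps
// slack low while a multiplicative margin guards against sudden spikes.
fun recommendMemoryLimitMiB(
    recentUsageMiB: List<Double>,   // sampled memory usage of one task
    percentile: Double = 0.98,      // how closely to track peak usage
    headroom: Double = 1.15,        // multiplicative safety margin
    floorMiB: Double = 128.0        // never recommend less than this
): Double {
    require(recentUsageMiB.isNotEmpty()) { "need at least one usage sample" }
    val sorted = recentUsageMiB.sorted()
    val index = ceil(percentile * sorted.size).toInt().coerceIn(1, sorted.size) - 1
    return maxOf(sorted[index] * headroom, floorMiB)
}

fun main() {
    val samples = listOf(410.0, 450.0, 430.0, 520.0, 470.0, 610.0, 480.0)
    println("recommended limit: %.0f MiB".format(recommendMemoryLimitMiB(samples)))
}
```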

Java-Specific Challenges in Auto-Scaling

The talk transitioned to language-specific hurdles, with Jacek highlighting Java’s unique challenges in auto-scaling environments. Just-in-Time (JIT) compilation during application startup spikes CPU usage, complicating horizontal scaling decisions. Memory management poses further issues, as Java’s heap size is static, and out-of-memory errors may be masked by garbage collection (GC) thrashing, where excessive CPU is devoted to GC rather than request handling. To address this, Google sets static heap sizes and auto-scales non-heap memory, though Jacek envisioned a future where Java aligns with other languages, eliminating heap-specific configurations. These insights underscore the need for language-aware auto-scaling strategies in heterogeneous environments.
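The split between a fixed heap and auto-scaled non-heap memory is a policy rather than a JVM API, so the sketch below only illustrates the quantities such a policy reasons about; the -Xmx/-Xms flags that pin the heap are standard HotSpot options.

```kotlin
// Illustration of the heap/non-heap split discussed above. Treating the heap
// as static while auto-scaling the rest of the container is a policy, not a
// JVM API, so this snippet only prints the quantities involved.
fun main() {
    val rt = Runtime.getRuntime()
    val mib = 1024L * 1024L
    val maxHeap = rt.maxMemory() / mib        // upper bound set by -Xmx
    val committed = rt.totalMemory() / mib    // heap currently committed by the JVM
    val used = (rt.totalMemory() - rt.freeMemory()) / mib
    println("heap: used=${used}MiB committed=${committed}MiB max=${maxHeap}MiB")
    // Metaspace, thread stacks, the JIT code cache and direct buffers live
    // outside this heap; that non-heap share is what an autoscaler can still
    // adjust once the heap size itself has been fixed.
}
```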

Operational Strategies for Resource Reclamation

Beata concluded by discussing operational techniques like overcommit and workload colocation to reclaim unused resources. Overcommit leverages the low probability of simultaneous resource spikes across unrelated services, allowing Google to pack more workloads onto machines. Colocating high-priority serving jobs with lower-priority batch workloads enables resource reclamation, with batch tasks evicted when serving jobs demand capacity. A 2015 experiment demonstrated significant machine savings through colocation, a concept influencing Kubernetes’ design. These strategies, combined with auto-scaling, create a robust framework for efficiency, though they demand rigorous isolation to prevent interference between workloads.
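The arithmetic behind overcommit can be shown with a toy sketch (all numbers invented for illustration): reserving by observed high-percentile usage instead of by declared limits exposes the capacity that batch colocation can then reclaim.

```kotlin
// Toy overcommit arithmetic with invented numbers: the gap between the two
// reservations is what colocation reclaims, at the price of evicting or
// throttling low-priority work on the rare occasions when peaks coincide.
data class JobFootprint(val limitMiB: Int, val p99UsageMiB: Int)

fun main() {
    val jobs = listOf(
        JobFootprint(limitMiB = 4096, p99UsageMiB = 2500),
        JobFootprint(limitMiB = 2048, p99UsageMiB = 900),
        JobFootprint(limitMiB = 4096, p99UsageMiB = 2200),
    )
    val byLimits = jobs.sumOf { it.limitMiB }
    val byObservedPeaks = jobs.sumOf { it.p99UsageMiB }
    println("reserve by limits: $byLimits MiB")
    println("reserve with overcommit: $byObservedPeaks MiB")
    println("reclaimable for batch work: ${byLimits - byObservedPeaks} MiB")
}
```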

[KotlinConf2019] Kotlin Coroutines: Mastering Cancellation and Exceptions with Florina Muntenescu & Manuel Vivo

Kotlin coroutines have revolutionized asynchronous programming on Android and other platforms, offering a way to write non-blocking code in a sequential style. However, as Florina Muntenescu and Manuel Vivo, both then working in Android developer relations at Google, pointed out at KotlinConf 2019, the “happy path” is only part of the story. Their talk, “Coroutines! Gotta catch ’em all!” delved into the critical aspects of coroutine cancellation and exception handling, providing developers with the knowledge to build robust and resilient asynchronous applications.

Florina and Manuel highlighted a common scenario: coroutines work perfectly until an error occurs, a timeout is reached, or a coroutine needs to be cancelled. Understanding how to manage these situations—where to handle errors, how different scopes affect error propagation, and the impact of launch vs. async—is crucial for a good user experience and stable application behavior.

Structured Concurrency and Scope Management

A fundamental concept in Kotlin coroutines is structured concurrency, which ensures that coroutines operate within a defined scope, tying their lifecycle to that scope. Florina Muntenescu and Manuel Vivo emphasized the importance of choosing the right CoroutineScope for different situations. The scope dictates how coroutines are managed, particularly concerning cancellation and how exceptions are propagated.

They discussed:
* CoroutineScope: The basic building block for managing coroutines.
* Job and SupervisorJob: A Job in a coroutine’s context is responsible for its lifecycle. A key distinction is how they handle failures of child coroutines. A standard Job will cancel all its children and itself if one child fails. In contrast, a SupervisorJob allows a child coroutine to fail without cancelling its siblings or the supervisor job itself. This is critical for UI components or services where one failed task shouldn’t bring down unrelated operations. The advice often given is to use SupervisorJob when you want to isolate failures among children; a short sketch after this list illustrates the difference.
* Scope Hierarchy: How scopes can be nested and how cancellation or failure in one part of the hierarchy affects others. Understanding this is key to preventing unintended cancellations or unhandled exceptions.
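A minimal sketch of the Job versus SupervisorJob distinction, assuming the kotlinx-coroutines-core dependency: inside supervisorScope the failing child is reported to the default exception handler but its sibling keeps running, whereas the same code inside coroutineScope would cancel the sibling and the scope.

```kotlin
import kotlinx.coroutines.*

// Sketch of the Job vs SupervisorJob distinction (kotlinx-coroutines-core
// assumed). The failing child does not take its sibling down with it.
fun main() = runBlocking {
    supervisorScope {
        val failing = launch {
            delay(100)
            error("child one failed")           // fails alone under a supervisor
        }
        val sibling = launch {
            delay(300)
            println("sibling still completed")  // would be cancelled under a plain Job
        }
        failing.join()
        sibling.join()
    }
    println("scope completed despite the failure")
}
```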

Cancellation: Graceful Termination of Coroutines

Effective cancellation is vital for resource management and preventing memory leaks, especially in UI applications where operations might become irrelevant if the user navigates away. Florina and Manuel explained that coroutine cancellation is cooperative: suspending functions in the kotlinx.coroutines library are generally cancellable; they check for cancellation requests and throw a CancellationException when one is detected.

Key points regarding cancellation included:
* Calling job.cancel() initiates the cancellation of a coroutine and its children.
* Coroutines must cooperate with cancellation by periodically checking isActive or using cancellable suspending functions. CPU-bound work in a loop that doesn’t check for cancellation might not stop as expected (see the sketch after this list).
* CancellationException is considered a normal way for a coroutine to complete due to cancellation and is typically not logged as an unhandled error by default exception handlers.
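A minimal sketch of cooperative cancellation (kotlinx-coroutines-core assumed): the CPU-bound loop stops only because it checks isActive on every iteration; without that check, job.cancelAndJoin() would never return.

```kotlin
import kotlinx.coroutines.*

// Cooperative cancellation: the loop never suspends, so the only way it can
// observe cancel() is by checking isActive itself.
fun main() = runBlocking {
    val job = launch(Dispatchers.Default) {
        var i = 0L
        while (isActive) {                       // cooperate with cancellation
            i++
        }
        println("loop observed cancellation after $i iterations")
    }
    delay(200)
    job.cancelAndJoin()                          // request cancellation, wait for exit
    println("done")
}
```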

Exception Handling: Catching Them All

Handling exceptions correctly in asynchronous code can be tricky. Florina and Manuel’s talk aimed to clarify how exceptions propagate in coroutines and how they can be caught.
They covered:
* launch vs. async:
* With launch, exceptions are treated like uncaught exceptions in a thread—they propagate up the job hierarchy. If not handled, they can crash the application (depending on the root scope’s context and CoroutineExceptionHandler).
* With async, exceptions are deferred. They are stored within the Deferred result and are only thrown when await() is called on that Deferred. This means if await() is never called, the exception might go unnoticed unless explicitly handled.
* CoroutineExceptionHandler: This context element can be installed in a CoroutineScope to act as a global handler for uncaught exceptions within coroutines started by launch in that scope. It allows for centralized error logging or recovery logic. They showed examples of how and where to install this handler effectively, for example, in the root coroutine or as a direct child of a SupervisorJob to catch exceptions from its children.
* try-catch blocks: Standard try-catch blocks can be used within a coroutine to handle exceptions locally, just like in synchronous code. This is often the preferred way to handle expected exceptions related to specific operations. A combined sketch follows this list.
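A combined sketch of these behaviors, assuming kotlinx-coroutines-core: under a SupervisorJob the launch failure reaches the CoroutineExceptionHandler, while the async failure stays inside the Deferred until await() and can be handled with an ordinary try-catch.

```kotlin
import kotlinx.coroutines.*

// launch vs async exception handling under a SupervisorJob: the handler sees
// the launch failure; the async failure is only thrown at await().
fun main() = runBlocking {
    val handler = CoroutineExceptionHandler { _, e ->
        println("handler caught: ${e.message}")
    }
    val scope = CoroutineScope(SupervisorJob() + handler)

    scope.launch { error("boom from launch") }        // delivered to the handler

    val deferred = scope.async { error("boom from async") }
    try {
        deferred.await()                              // exception is thrown only here
    } catch (e: IllegalStateException) {
        println("await caught: ${e.message}")
    }

    delay(100)                                        // give the launch-ed child time to be reported
}
```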

The speakers stressed that uncaught exceptions will always propagate, so it’s crucial to “catch ’em all” to avoid unexpected behavior or crashes. Their presentation aimed to provide clear patterns and best practices to ensure that developers could confidently manage both cancellation and exceptions, leading to more robust and user-friendly Kotlin applications.

[KotlinConf2017] Kotlin Static Analysis with Android Lint

Lecturer

Tor Norbye is the technical lead for Android Studio at Google, where he has driven the development of numerous IDE features, including Android Lint, a static code analysis tool. With a deep background in software development and tooling, Tor is the primary author of Android Lint, which integrates with Android Studio, IntelliJ IDEA, and Gradle to enhance code quality. His expertise in static analysis and IDE development has made significant contributions to the Android ecosystem, supporting developers in building robust applications.

Abstract

Static code analysis is critical for ensuring the reliability and quality of Android applications. This article analyzes Tor Norbye’s presentation at KotlinConf 2017, which explores Android Lint’s support for Kotlin and its capabilities for custom lint checks. It examines the context of static analysis in Android development, the methodology of leveraging Lint’s Universal Abstract Syntax Tree (UAST) for Kotlin, the implementation of custom checks, and the implications for enhancing code quality. Tor’s insights highlight how Android Lint empowers developers to enforce best practices and maintain robust Kotlin-based applications.

Context of Static Analysis in Android

At KotlinConf 2017, Tor Norbye presented Android Lint as a cornerstone of code quality in Android development, particularly with the rise of Kotlin as a first-class language. Introduced in 2011, Android Lint is an open-source static analyzer integrated into Android Studio, IntelliJ IDEA, and Gradle, offering over 315 checks to identify bugs without executing code. As Kotlin gained traction in 2017, ensuring its compatibility with Lint became essential to support Android developers transitioning from Java. Tor’s presentation addressed this need, focusing on Lint’s ability to analyze Kotlin code and extend its functionality through custom checks.

The context of Tor’s talk reflects the challenges of maintaining code quality in dynamic, large-scale Android projects. Static analysis mitigates issues like null pointer exceptions, resource leaks, and API misuse, which are critical in mobile development where performance and reliability are paramount. By supporting Kotlin, Lint enables developers to leverage the language’s type-safe features while ensuring adherence to Android best practices, fostering a robust development ecosystem.

Methodology of Android Lint with Kotlin

Tor’s methodology centers on Android Lint’s use of the Universal Abstract Syntax Tree (UAST) to analyze Kotlin code. UAST provides a unified representation of code across Java and Kotlin, enabling Lint to apply checks consistently. Tor explained how Lint examines code statically, identifying potential bugs like incorrect API usage or performance issues without runtime execution. The tool’s philosophy prioritizes caution, surfacing potential issues even if they risk false positives, with suppression mechanisms to dismiss irrelevant warnings.

A key focus was custom lint checks, which allow developers to extend Lint’s functionality for library-specific rules. Tor demonstrated writing a custom check for Kotlin, leveraging UAST to inspect code structures and implement quickfixes that integrate with the IDE. For example, a check might enforce proper usage of a library’s API, offering automated corrections via code completion. This methodology ensures that developers can tailor Lint to project-specific needs, enhancing code quality and maintainability in Kotlin-based Android applications.

Implementing Custom Lint Checks

Implementing custom lint checks involves defining rules that analyze UAST nodes to detect issues and provide fixes. Tor showcased a practical example, creating a check to validate Kotlin code patterns, such as ensuring proper handling of nullable types. The process involves registering checks with Lint’s infrastructure, which loads them dynamically from libraries. These checks can inspect method calls, variable declarations, or other code constructs, flagging violations and suggesting corrections that appear in Android Studio’s UI.
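As a rough illustration of the shape of such a check, the sketch below uses the public Lint API to flag println calls. The exact package names, the visitMethodCall signature, the named-argument form of Issue.create, and the registration step (an IssueRegistry plus service-loader metadata or lintPublish) vary between Lint versions, so treat this as an outline rather than a drop-in detector.

```kotlin
import com.android.tools.lint.detector.api.*
import com.intellij.psi.PsiMethod
import org.jetbrains.uast.UCallExpression

// Outline of a UAST-based Lint check (API details differ across Lint versions).
// It flags calls to println and reports them as warnings at the call site.
class LogCallDetector : Detector(), Detector.UastScanner {

    override fun getApplicableMethodNames(): List<String> = listOf("println")

    override fun visitMethodCall(context: JavaContext, node: UCallExpression, method: PsiMethod) {
        context.report(
            ISSUE, node, context.getLocation(node),
            "Use a logging framework instead of println"
        )
    }

    companion object {
        val ISSUE: Issue = Issue.create(
            id = "PrintlnUsage",
            briefDescription = "println used",
            explanation = "Direct println calls bypass the project's logging conventions.",
            category = Category.CORRECTNESS,
            priority = 5,
            severity = Severity.WARNING,
            implementation = Implementation(LogCallDetector::class.java, Scope.JAVA_FILE_SCOPE)
        )
    }
}
```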

Tor emphasized the importance of clean APIs for custom checks, noting plans to enhance Lint’s configurability with an options API. This would allow developers to customize check parameters (e.g., string patterns or ranges) directly from build.gradle or IDE interfaces, simplifying configuration. The methodology’s integration with Gradle and IntelliJ ensures seamless adoption, enabling developers to enforce project-specific standards without relying on external tools or complex setups.

Future Directions and Community Engagement

Tor outlined future enhancements for Android Lint, including improved support for Kotlin script files (.kts) in Gradle builds and advanced call graph analysis for whole-program insights. These improvements aim to address limitations in current checks, such as incomplete Gradle file support, and enhance Lint’s ability to perform comprehensive static analysis. Plans to transition from Java-centric APIs to UAST-focused ones promise a more stable, Kotlin-friendly interface, reducing compatibility issues and simplifying check development.

Community engagement is a cornerstone of Lint’s evolution. Tor encouraged developers to contribute checks to the open-source project, sharing benefits with the broader Android community. The emphasis on community-driven development ensures that Lint evolves to meet real-world needs, from small-scale apps to enterprise projects. By fostering collaboration, Tor’s vision positions Lint as a vital tool for maintaining code quality in Kotlin’s growing ecosystem.

Conclusion

Tor Norbye’s presentation at KotlinConf 2017 highlighted Android Lint’s pivotal role in ensuring code quality for Kotlin-based Android applications. By leveraging UAST for static analysis and supporting custom lint checks, Lint empowers developers to enforce best practices and adapt to project-specific requirements. The methodology’s integration with Android Studio and Gradle, coupled with plans for enhanced configurability and community contributions, strengthens Kotlin’s appeal in Android development. As Kotlin continues to shape the Android ecosystem, Lint’s innovations ensure robust, reliable applications, reinforcing its importance in modern software development.

[KotlinConf2017] Understand Every Line of Your Codebase

Lecturer

Victoria Gonda is a software developer at Collective Idea, specializing in Android and web applications with a focus on improving user experiences through technology. With a background in Computer Science and Dance, Victoria combines technical expertise with creative problem-solving, contributing to projects that enhance accessibility and engagement. Boris Farber is a Senior Partner Engineer at Google, focusing on Android binary analysis and performance optimization. As the lead of ClassyShark, an open-source tool for browsing Android and Java executables, Boris brings deep expertise in understanding compiled code.

Abstract

Understanding the compiled output of Kotlin code is essential for optimizing performance and debugging complex applications. This article analyzes Victoria Gonda and Boris Farber’s presentation at KotlinConf 2017, which explores how Kotlin and Java code compiles to class files and introduces tools for inspecting compiled code. It examines the context of Kotlin’s compilation pipeline, the methodology of analyzing bytecode, the use of inspection tools like ClassyShark, and the implications for developers seeking deeper insights into their codebases. The analysis highlights how these tools empower developers to make informed optimization decisions.

Context of Kotlin’s Compilation Pipeline

At KotlinConf 2017, Victoria Gonda and Boris Farber addressed the challenge of understanding Kotlin’s compiled output, a critical aspect for developers transitioning from Java or optimizing performance-critical applications. Kotlin’s concise and expressive syntax, while enhancing productivity, raises questions about its compiled form, particularly when compared to Java. As Kotlin gained traction in Android and server-side development, developers needed tools to inspect how their code translates to bytecode, ensuring performance and compatibility with JVM-based systems.

Victoria and Boris’s presentation provided a timely exploration of Kotlin’s build pipeline, focusing on its similarities and differences with Java. By demystifying the compilation process, they aimed to equip developers with the knowledge to analyze and optimize their code. The context of their talk reflects Kotlin’s growing adoption and the need for transparency in how its features, such as lambdas and inline functions, impact compiled output, particularly in performance-sensitive scenarios like Android’s drawing loops or database operations.

Methodology of Bytecode Analysis

The methodology presented by Victoria and Boris centers on dissecting Kotlin’s compilation to class files, using tools like ClassyShark to inspect bytecode. They explained how Kotlin’s compiler transforms high-level constructs, such as lambdas and inline functions, into JVM-compatible bytecode. Inline functions, for instance, copy their code directly into the call site, reducing overhead but potentially increasing code size. The presenters demonstrated decompiling class files to reveal metadata used by the Kotlin runtime, such as type information for generics, providing insights into how Kotlin maintains type safety at runtime.

ClassyShark, led by Boris, serves as a key tool for this analysis, allowing developers to browse Android and Java executables and understand their structure. The methodology involves writing Kotlin code, compiling it, and inspecting the resulting class files to identify performance implications, such as method count increases from lambdas. Victoria and Boris emphasized a pragmatic approach: analyze code before optimizing, ensuring that performance tweaks target actual bottlenecks rather than speculative issues, particularly in mission-critical contexts like Android rendering.
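A small experiment along these lines: compile the snippet below and compare the generated classes (with javap -c, IntelliJ’s Kotlin bytecode viewer, or a browser such as ClassyShark). The lambda passed to the non-inline function is compiled to a separate function object, while the inline version’s body is copied into the caller.

```kotlin
// Compile and inspect the output classes to see the difference: the lambda
// given to measureRegular becomes a Function0 object invoked indirectly,
// whereas measureInline's body (and its lambda) is copied into main().
inline fun measureInline(block: () -> Unit): Long {
    val start = System.nanoTime()
    block()                       // inlined into the caller, no extra allocation
    return System.nanoTime() - start
}

fun measureRegular(block: () -> Unit): Long {
    val start = System.nanoTime()
    block()                       // invoked through a function object
    return System.nanoTime() - start
}

fun main() {
    println("inline:  ${measureInline { Thread.sleep(10) }} ns")
    println("regular: ${measureRegular { Thread.sleep(10) }} ns")
}
```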

Practical Applications and Optimization

The practical applications of bytecode analysis lie in optimizing performance and debugging complex issues. Victoria and Boris showcased how tools like ClassyShark reveal the impact of Kotlin’s features, such as non-inlined lambdas generating extra classes and methods, while inline functions grow the bytecode at every call site. For Android developers, this is critical, as method count limits can affect app size and performance. By inspecting decompiled classes, developers can identify unnecessary object allocations or inefficient constructs, optimizing code for scenarios like drawing loops or database operations where performance is paramount.

The presenters also addressed the trade-offs of inline functions, noting that while they reduce call overhead, excessive use can inflate code size. Their methodology encourages developers to test performance impacts before applying optimizations, using tools to measure method counts and object allocations. This approach ensures that optimizations are data-driven, avoiding premature changes that may not yield significant benefits. The open-source nature of ClassyShark further enables developers to customize their analysis, tailoring inspections to specific project needs.

Implications for Developers

The insights from Victoria and Boris’s presentation have significant implications for Kotlin developers. Understanding the compiled output of Kotlin code empowers developers to make informed decisions about performance and compatibility, particularly in Android development where resource constraints are critical. Tools like ClassyShark democratize bytecode analysis, enabling developers to debug issues that arise from complex features like generics or lambdas. This transparency fosters confidence in adopting Kotlin for performance-sensitive applications, bridging the gap between its high-level syntax and low-level execution.

For the broader Kotlin ecosystem, the presentation underscores the importance of tooling in supporting the language’s growth. By providing accessible methods to inspect and optimize code, Victoria and Boris contribute to a culture of informed development, encouraging developers to explore Kotlin’s internals without fear of hidden costs. Their emphasis on community engagement, through questions and open-source tools, ensures that these insights evolve with developer feedback, strengthening Kotlin’s position as a reliable, performance-oriented language.

Conclusion

Victoria Gonda and Boris Farber’s presentation at KotlinConf 2017 provided a comprehensive guide to understanding Kotlin’s compiled output, leveraging tools like ClassyShark to demystify the build pipeline. By analyzing bytecode and addressing optimization trade-offs, they empowered developers to make data-driven decisions for performance-critical applications. The methodology’s focus on practical analysis and accessible tooling enhances Kotlin’s appeal, particularly for Android developers navigating resource constraints. As Kotlin’s adoption grows, such insights ensure that developers can harness its expressive power while maintaining control over performance and compatibility.

[DevoxxUS2017] Creating a Connected Home by Kevin and Andy Nilson

At DevoxxUS2017, Kevin Nilson, a Java Champion and lead of the Chromecast technical solutions engineering team at Google, joined forces with his 12-year-old son, Andy Nilson, to present a captivating live coding demo on building a connected home. Their session showcased how voice and mobile controls can interact with smart devices, leveraging platforms like Google Home. Kevin and Andy’s collaborative approach highlighted the accessibility of IoT development, blending technical expertise with educational outreach. This post examines the key themes of their presentation, emphasizing the fusion of innovation and learning.

Building a Smart Home Ecosystem

Kevin Nilson and Andy Nilson began by demonstrating a connected home setup, where lights, fans, and music systems respond to voice commands via Google Home. Kevin explained the architecture, integrating devices like Philips Hue and Nest thermostats through APIs. Andy, showcasing his coding skills, contributed to the demo by writing scripts to control devices, illustrating how accessible IoT programming can be, even for young developers. Their work reflected Google’s commitment to seamless smart home integration.

Voice Control and Device Integration

The duo delved into voice-activated controls, showing how Google Home processes commands like “turn on the lights.” Kevin highlighted the use of OAuth for secure device linking, ensuring commands are tied to user accounts. Andy demonstrated triggering actions, such as activating a fan, by coding simple integrations. Their live demo, despite network challenges, showcased practical IoT applications, emphasizing ease of use and real-time interaction with smart devices.

Inspiring the Next Generation

Kevin and Andy emphasized the educational potential of their project, drawing from their involvement in Devoxx4Kids and JavaOne Kids Day. Andy’s participation, rooted in his experience coding since childhood, inspired attendees to engage young learners in technology. Kevin shared resources for learning IoT, recommending starting with specific problems and exploring community solutions, such as hackathon projects like the Febreze air freshener integration, to spark creativity.

Fostering Community and Collaboration

Concluding, Kevin encouraged developers to explore IoT through open-source communities and hackathons, sharing his experience as a Silicon Valley JUG leader. Andy’s enthusiasm for coding underscored the session’s goal of making technology accessible. Their call to action invited attendees to contribute to smart home projects, leveraging platforms like Google Home to build innovative, user-friendly solutions for connected living.

[DevoxxBE2013] Riddle Me This, Android Puzzlers

Stephan Linzner and Richard Hyndman, Google Android Developer Advocates, unravel enigmatic Android behaviors through interactive puzzles. Stephan, an automation aficionado and runner, teams with Richard, a 12-year mobile veteran from startups to operators, to probe component lifecycles, UI quirks, and KitKat novelties. Their session, blending polls and demos, spotlights content providers’ primacy, ViewStub pitfalls, and screen recording tools, arming developers with debugging savvy.

Android’s intricacies, they reveal, demand vigilance: from process spawning to WebView debugging. Live polls engage the audience, transforming head-scratchers into teachable moments.

Component Creation Order and Lifecycle

Stephan kicks off with a poll: which component initializes first post-process spawn? Hands favor activities, but content providers lead—crucial for data bootstrapping.

Richard demos service lifecycles, warning against onCreate leaks; broadcasts’ unregistered crashes underscore registration discipline.

UI Rendering Quirks and Optimizations

ViewStub inflation puzzles Stephan: pre-inflate for speed, but beware null children post-inflation. Richard explores ListView recycling, ensuring adapters populate recycled views correctly to avoid visual glitches.

These gotchas, they stress, demand profiler scrutiny for fluid UIs.

KitKat Innovations and Debugging Aids

KitKat’s screen recording, Richard unveils, captures high-res videos sans root—ideal for demos or Play Store assets. Stephan spotlights WebView debugging: Chrome DevTools inspect remote views, editing CSS live.

Monkey tool’s seeded crashes aid reproducible testing, simulating user chaos.

Interactive Polls and Community Insights

Polls gauge familiarity with overscan modes and transition animations, fostering engagement. The duo fields queries on SurfaceView security and WebView copies, clarifying limitations.

This collaborative format, they conclude, equips developers to conquer Android’s riddles.

[DevoxxBE2012] Re-imagining the Browser with AngularJS

Misko Hevery and Igor Minar, Google engineers and AngularJS co-leads, re-envisioned client-side development. Misko, an Agile coach with open-source contributions, partnered with Igor, focused on developer tools, to showcase AngularJS’s approach to simplifying web apps.

They posited extending the browser with declarative HTML and JavaScript, reducing code while enhancing readability. AngularJS bridges to future standards like web components and model-driven views.

Misko demonstrated data binding, where models sync with views automatically, eliminating manual DOM manipulation. Directives extend HTML, creating custom elements for reusability.

Igor highlighted dependency injection for modularity, and services for shared logic. Routing enables single-page apps, with controllers managing scopes.

They emphasized testability, with built-in mocking and end-to-end testing.

Declarative UI and Data Binding

Misko illustrated two-way binding: changes in models update views, and vice versa, without boilerplate. This declarative paradigm contrasts imperative jQuery-style coding.

Directives like ng-repeat generate lists dynamically, while filters format data.

Modularity and Dependency Management

Igor explained modules encapsulating functionality, injected via DI. This promotes clean, testable code.

Services, factories, and providers offer flexible creation patterns.

Routing and Application Structure

NgRoute handles navigation, loading templates and controllers. Scopes isolate data, with inheritance for hierarchy.

Testing and Future Alignment

Angular’s design facilitates unit and e2e tests, using Karma and Protractor.

They previewed alignment with web components, where directives become custom tags.

In Q&A, they compared to Knockout.js, noting Angular’s framework scope versus library focus.

Misko and Igor’s presentation framed AngularJS as transformative, anticipating browser evolutions while delivering immediate productivity.

[DevoxxBE2012] The Chrome Dev Tools Can Do THAT

Ilya Grigorik, a Google web performance engineer and developer advocate, unveiled advanced capabilities of Chrome Developer Tools. Focused on accelerating the web, Ilya packed the session with tips, dividing it into inspection/debugging and performance analysis.

He encouraged hands-on exploration via online slides, emphasizing tools’ instrumentation for pinpointing bottlenecks.

Starting with basics, Ilya showed inspecting elements, modifying DOM/CSS live, and using console for JavaScript evaluation.

Advanced features included remote debugging for mobile, connecting devices to desktops for inspection.

Inspection and Debugging Essentials

Ilya demonstrated breakpoints on DOM changes, XHR requests, and events, pausing execution for analysis.

Color pickers, shadow DOM inspection, and computed styles aid UI debugging.

Console utilities like $0 for selected elements, querySelector, and table formatting enhance interactivity.

JavaScript Profiling and Optimization

CPU profilers capture call stacks, revealing hot spots. Ilya profiled loops, identifying inefficiencies.

Heap snapshots detect memory leaks by comparing allocations.

Source maps map minified code to originals, with pretty-printing for readability.

Network and Resource Analysis

Network panel details requests, with filters and timelines. Ilya explained columns like status, size, showing compression benefits.

WebSocket and SPDY inspectors provide low-level insights.

HAR exports enable sharing traces.

Timeline and Rendering Insights

Timeline records events, offering frame-by-frame analysis of layouts, paints.

Ilya used it to optimize animations, enabling GPU acceleration.

The CSS selector profiler identifies slow rules.

Auditing and Best Practices

Audits suggest optimizations like minification, unused CSS removal.

Extensions customize tools further.

Low-Level Tracing and Customization

Chrome Tracing visualizes browser internals, instrumentable with console.time for custom metrics.

Ilya’s session equipped developers with powerful diagnostics for performant, debuggable applications.

[DevoxxFR2013] Regular or Decaffeinated? Java’s Future in the Cloud

Lecturer

Alexis Moussine-Pouchkine, a veteran of Sun Microsystems, currently serves as a Developer Relations lead at Google in Paris, assisting developers in achieving success. With over a decade at Sun and nearly two years at Oracle, he brings extensive experience in Java ecosystems and cloud technologies.

Abstract

Alexis Moussine-Pouchkine’s presentation examines Java’s evolution and its potential trajectory in cloud computing. Reflecting on historical shifts in technology, he critiques current limitations and advocates for advancements like multi-tenancy and resource management to ensure Java’s relevance. Through industry examples and forward-looking analysis, the talk underscores the need for adaptation to maintain Java’s position amid resource rationalization and emerging paradigms.

Java’s Maturation and the Cloud Imperative

Moussine-Pouchkine opens by recounting his transition from Sun Microsystems to Oracle and then Google, highlighting how each company has shaped computing history. At Sun, innovation abounded but market fit was inconsistent; Oracle emphasized acquisitions over novelty, straining community ties; Google prioritizes engineers, fostering rapid development.

He likens Java’s current state to emerging from adolescence, facing challenges in cloud environments where resource optimization is paramount. Drawing from his engineering school days with strict quotas on compilation and connection time, Alexis contrasts this with Java’s initial promise of freedom and flexibility. Early experiences with Linux provided boundless experimentation, mirroring Java’s liberating potential in 1997.

The speaker invokes historical predictions: IBM’s CEO allegedly foresaw a market for only five computers in 1943, possibly prescient regarding cloud providers. Bill Gates’ 640K memory quip and Greg Papadopoulos’ 2003 vision of five to seven massive global computers underscore consolidation trends. Papadopoulos envisioned entities like Google, eBay, Salesforce, Microsoft, Amazon, and a Chinese cloud, a perspective less radical today given web evolution.

Java’s centrality in tomorrow’s cloud is questioned. While present in many offerings, most implementations remain prototypes, circumventing Java’s constraints. The cloud demands shared resources and concentration of expertise, yet Java’s future here is uncertain, risking obsolescence like COBOL.

Challenges and Necessary Evolutions for Java in Multi-Tenant Environments

A core issue is Java’s adaptation to multi-tenancy, where multiple applications share a JVM without interference. Current JVMs lack robust isolation, leading to inefficiencies in cloud settings. Moussine-Pouchkine notes Java’s success in Android and Chrome, where processes are segregated, but enterprise demands shared instances for cost savings.

He critiques the stalled JSR-284 for resource management, essential for quotas and usage-based billing. Without these, Java lags in cloud viability. Examples like Google’s App Engine illustrate Java’s limitations: no threads, file system restrictions, and 30-second request limits, forcing workarounds.

Commercial solutions emerge: Waratek’s hypervisor on HotSpot, IBM’s J9 VM, and SAP’s container enable multi-tenancy. Yet, quotas remain crucial for responsible computing, akin to not overindulging at a buffet to ensure sustainability.

Java 9 priorities include modularity (Jigsaw), potentially aiding resource management. Cloud Foundry’s varying memory allocations by language highlight Java’s inefficiencies. Moussine-Pouchkine urges a “slider” for JVM scaling, from minimal to robust, without API fractures.

The community, pioneers in agile practices, continuous integration, and dependency management, must embrace modularity and quotas. Java 7 introduced invokedynamic to better support dynamic languages; Java 8 tackles multicore with lambdas. Recent Oracle slides affirm multi-tenancy and resource management in Java 9 and beyond.

Implications for Sustainable and Credible Informatics

Moussine-Pouchkine advocates responsible informatics: quotas foster predictability, countering perceptions of IT as imprecise and costly. Developers, like artisans, must steward tools and design thoughtfully. Over-reliance on libraries (90% bloat) signals accumulated technical debt.

Quotas enhance credibility, enabling commitments and superior delivery. Java’s adaptive history positions it well, provided the community envisions it “caffeinated” – vibrant and adult – rather than “decaffeinated” and stagnant.

In essence, Java must address multi-tenancy and resources to thrive in consolidated clouds, avoiding the fate of outdated technologies.

Windows Mobile 6.1: which browser?

Here is a short comparison of the web browsers available for Windows Mobile 6.1. I tested them on an Acer X960 on the French Virgin Mobile network.

Internet Explorer 5
  • Pros: already installed on devices
  • Cons: slow; no tabs; no Flash; GMail doesn’t work; Micro$oft!

Mozilla Fennec 1.0a1
  • Pros: open source; tabs
  • Cons: very slow; very heavy in memory; no Flash; GMail doesn’t work
  • Website: http://www.mozilla.org/projects/fennec/1.0a1/releasenotes/

Opera Mobile 10
  • Pros: tabs; fluidity; speed
  • Cons: no Flash; heavy in memory; GMail doesn’t work; not open-source
  • Website: http://www.opera.com/

SkyFire 1.5
  • Pros: GMail works; Flash supported; speed
  • Cons: no tabs; confidentiality; not open-source
  • Website: http://get.skyfire.com

In conclusion, what do I do?

  • In most cases, I use Opera, for its speed and tabs.
  • When I need to watch a video:
    • my Acer X960 displays YouTube videos in a dedicated player;
    • on other websites, I use SkyFire.
  • For Google applications (GMail, Reader, Docs, etc.), I also use SkyFire.