Archive for the ‘en-US’ Category
[DotSecurity2017] Verifiable Lotteries
In settings governed by chance, such as visa lotteries, audit selections, and tournament draws, an authority's assurance of fairness is often impossible to verify. Joseph Bonneau, a cryptographer and researcher in Stanford's Applied Crypto Group, tackled this problem at dotSecurity 2017, arguing that verifiable randomness is the anchor of algorithmic accountability. Formerly of Google and the Electronic Frontier Foundation, Joseph has worked on topics from Bitcoin's blockchain to online privacy, and his talk traced a path from physical randomness, such as dice and lottery balls, to cryptographic commitments whose outcomes anyone can check.
Joseph first examined the pitfalls of physical randomness: lottery balls can be weighted and dice loaded, so the theater of transparency still leaves room for tampering. Cryptography offers a counter: commitment schemes, in which a value is hidden behind a hash and revealed only after all participants have entered, so the randomness cannot be chosen to favor anyone. Verifiable delay functions (VDFs) add a further safeguard: they force a minimum amount of sequential computation before the output is known, yet produce a proof that is quick to check, preventing an adversary with faster hardware from biasing the result. His running example was a verifiable lottery: participants submit entries, the entries are concatenated and committed to, and the hash of the combined input determines the winner.
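The commit-then-reveal pattern behind such a lottery can be sketched in a few lines of Python. This is an illustrative sketch, not code from the talk: each participant publishes the hash of a secret, all secrets are revealed after the entry deadline, and the winner index is derived from the hash of the concatenated secrets.

```python
import hashlib

def commit(secret: bytes) -> str:
    """Publish only the hash; the secret stays hidden until reveal."""
    return hashlib.sha256(secret).hexdigest()

def verify_reveal(commitment: str, secret: bytes) -> bool:
    """Anyone can check that a revealed secret matches its commitment."""
    return commit(secret) == commitment

def draw_winner(secrets: list, num_participants: int) -> int:
    """Combine all revealed secrets; no single party controls the result."""
    combined = hashlib.sha256(b"".join(secrets)).digest()
    return int.from_bytes(combined, "big") % num_participants

# Example: three participants commit, then reveal after the deadline.
secrets = [b"alice-7f3a", b"bob-19c2", b"carol-52ee"]
commitments = [commit(s) for s in secrets]
assert all(verify_reveal(c, s) for c, s in zip(commitments, secrets))
winner = draw_winner(secrets, 3)
```

Because the combined hash depends on every revealed secret, the last participant cannot bias the draw without breaking their own commitment; a production scheme would additionally add a delay function, as Joseph described.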
The applications are broad: H-1B visa lotteries that applicants can verify, election audits that can be authenticated, and fair selection of passengers bumped from overbooked flights. Joseph also cited FIFA's World Cup draws, a perennial source of fan suspicion, as a case where verifiability would settle disputes. Blockchains help too: Bitcoin block hashes, or stock quotes as a surrogate, can serve as public randomness, with delay functions defending against manipulation by miners or traders who can act faster than everyone else.
Challenges remain: speeding up VDF verification and ruling out parallel shortcuts are open research areas. Joseph's rallying cry: demand documented, verifiable procedures from anyone running a draw, whether a lottery operator, FIFA, or a government agency; the cryptography is already capable.
Randomness and Its Physical Pitfalls
Joseph contrasted biased lottery balls and loaded dice with cryptographic commitments and delay functions, which make tampering detectable.
Constructions and Applications
Hash commitments and VDFs underpin verifiable lotteries and visa draws; blockchains and stock quotes can supply public randomness.
Links:
[DevoxxUS2017] Java EE 8: Adapting to Cloud and Microservices
At DevoxxUS2017, Linda De Michiel, a pivotal figure in the Java EE architecture team and Specification Lead for the Java EE Platform at Oracle, delivered a comprehensive overview of Java EE 8’s development. With her extensive experience since 1997, Linda highlighted the platform’s evolution to embrace cloud computing and microservices, aligning with modern industry trends. Her presentation detailed updates to existing Java Specification Requests (JSRs) and introduced new ones, while also previewing plans for Java EE 9. This post explores the key themes of Linda’s talk, emphasizing Java EE 8’s role in modern enterprise development.
Evolution from Java EE 7
Linda began by reflecting on Java EE 7, which focused on HTML5 support, modernized web-tier APIs, and simplified development through Contexts and Dependency Injection (CDI). Building on this foundation, Java EE 8 shifts toward cloud-native and microservices architectures. Linda noted that emerging trends, such as containerized deployments and distributed systems, influenced the platform’s direction. By enhancing CDI and introducing new APIs, Java EE 8 aims to streamline development for scalable, cloud-based applications, ensuring developers can build robust systems that meet contemporary demands.
Enhancements to Core JSRs
A significant portion of Linda’s talk focused on updates to existing JSRs, including CDI 2.0, JSON Binding (JSON-B), JSON Processing (JSON-P), and JAX-RS. She announced that CDI 2.0 had unanimously passed its public review ballot, a milestone for the expert group. JSON-B and JSON-P, crucial for data interchange in modern applications, have reached proposed final draft stages, while JAX-RS enhances RESTful services with reactive programming support. Linda highlighted the open-source nature of these implementations, such as GlassFish and Jersey, encouraging community contributions to refine these APIs for enterprise use.
New APIs for Modern Challenges
Java EE 8 introduces new JSRs to address cloud and microservices requirements, notably the Security API. Linda discussed its early draft review, which aims to standardize authentication and authorization across distributed systems. Servlet and JSF updates are also progressing, with JSF nearing final release. These APIs enable developers to build secure, scalable applications suited for microservices architectures. Linda emphasized the platform’s aggressive timeline for a summer release, underscoring the community’s commitment to delivering production-ready solutions that align with industry shifts toward cloud and container technologies.
Community Engagement and Future Directions
Linda stressed the importance of community feedback, directing developers to the Java EE specification project on java.net for JSR details and user groups. She highlighted the Adopt-a-JSR program, led by advocates like Heather VanCura, as a channel for aggregating feedback to expert groups. Looking ahead, Linda briefly outlined Java EE 9’s focus on further cloud integration and modularity. By inviting contributions through open-source platforms like GlassFish, Linda encouraged developers to shape the platform’s future, ensuring Java EE remains relevant in a rapidly evolving technological landscape.
Links:
[DevoxxUS2017] Eclipse OMR: A Modern, Open-Source Toolkit for Building Language Runtimes by Daryl Maier
At DevoxxUS2017, Daryl Maier, a Senior Software Developer at IBM, introduced Eclipse OMR, an open-source toolkit for building high-performance language runtimes. With two decades of experience in compiler development, Daryl shared how OMR repurposes components of IBM’s J9 Java Virtual Machine to support diverse dynamic languages without imposing Java semantics. His session highlighted OMR’s potential to democratize runtime technology, fostering innovation across language ecosystems. This post explores the core themes of Daryl’s presentation, emphasizing OMR’s role in advancing runtime development.
Unlocking JVM Technology with OMR
Daryl Maier opened by detailing the Eclipse OMR project, which extracts core components of the J9 JVM, such as its compiler and garbage collector, for broader use. Unlike building languages atop Java, OMR provides modular, high-performance tools for creating custom runtimes. Daryl’s examples showcased OMR’s flexibility in supporting languages beyond Java, drawing from his work at IBM’s Canada Lab to illustrate its potential for diverse applications.
Compiler and Runtime Innovations
Transitioning to technical specifics, Daryl explored OMR’s compiler technology, designed for just-in-time (JIT) compilation in dynamic environments. He contrasted OMR with LLVM, noting its lightweight footprint and optimization for runtime performance. Daryl highlighted OMR’s garbage collection and code generation capabilities, which enable efficient, scalable runtimes. His insights underscored OMR’s suitability for dynamic languages, offering developers robust tools without the overhead of traditional compilers.
Active Development and Use Cases
Daryl discussed active OMR projects, including integrations with existing runtimes to enhance debuggability and performance. He referenced a colleague’s upcoming demo on OMR’s tooling interfaces, illustrating practical applications. Drawing from IBM’s extensive runtime expertise, Daryl showcased how OMR supports innovative use cases, from scripting languages to domain-specific runtimes, encouraging developers to leverage its modular architecture.
Engaging the Developer Community
Concluding, Daryl invited developers to contribute to Eclipse OMR, emphasizing its open-source ethos. He highlighted collaboration opportunities, noting contact points with project co-leads Mark and Charlie. Daryl’s call to action, rooted in IBM’s commitment to open-source innovation, encouraged attendees to explore OMR’s GitHub repository and participate in shaping the future of language runtimes.
Links:
[KotlinConf2017] How to Build a React App in Kotlin
Lecturer
Dave Ford is an independent software developer and trainer with extensive experience in JVM-based languages and JavaScript. Having worked with both technologies since their inception, Dave brings a deep understanding of cross-platform development. His recent project of porting a React application to Kotlin showcases his expertise in leveraging Kotlin’s JavaScript interoperability and type-safe features. As a trainer, Dave is dedicated to sharing practical insights, helping developers navigate modern frameworks and tools to build robust web applications.
Abstract
The integration of Kotlin with React offers a powerful approach to web development, combining Kotlin’s type-safe, concise syntax with React’s component-based architecture. This article analyzes Dave Ford’s presentation at KotlinConf 2017, which explores building a React application using Kotlin/JS. It examines the context of Kotlin’s JavaScript interoperability, the methodology for creating type-safe React components, the use of Kotlin’s DSL capabilities, and the challenges encountered. The analysis highlights the implications of this approach for web developers, emphasizing productivity gains and the potential to streamline front-end development within the Kotlin ecosystem.
Context of Kotlin and React Integration
At KotlinConf 2017, Dave Ford addressed the growing interest in using Kotlin for web development, particularly through its JavaScript compilation capabilities. Kotlin/JS allows developers to write type-safe code that compiles to JavaScript, enabling integration with popular frameworks like React. Dave’s presentation, informed by his experience porting a React app to Kotlin, targeted an audience familiar with Kotlin but largely new to Kotlin/JS and React. The context of the presentation reflects the increasing demand for modern, type-safe alternatives to JavaScript, which often suffers from runtime errors and complex tooling.
React, a widely-used JavaScript library, excels in building dynamic, component-based web interfaces. However, its reliance on JavaScript’s dynamic typing can lead to errors that Kotlin’s static type system mitigates. Dave’s talk aimed to bridge these ecosystems, demonstrating how Kotlin’s interoperability with JavaScript and its IDE support, particularly through JetBrains’ tools, enhances developer productivity. The presentation’s live coding approach provided practical insights, making the integration accessible to developers seeking to leverage Kotlin’s strengths in front-end development.
Methodology for Type-Safe React Components
Dave’s methodology centered on using Kotlin’s JavaScript interop features to create type-safe React components. He demonstrated how Kotlin/JS interfaces with React’s APIs, allowing developers to define components with compile-time type checking. This approach reduces runtime errors common in JavaScript-based React development. By leveraging Kotlin’s type system, developers can ensure that props and state are correctly typed, improving code reliability and maintainability.
A key innovation was the use of Kotlin’s DSL capabilities to simplify React programming. Dave showcased how Kotlin’s type-safe builders create a declarative syntax for component hierarchies, making code more readable and concise compared to JavaScript’s verbose patterns. For example, he implemented a game application, passing event handlers (e.g., deal, hit, stay) through components to manage state changes. This approach, using lambda expressions and anonymous objects, allowed asynchronous state updates in a React-like manner, demonstrating Kotlin’s ability to streamline complex front-end logic.
Challenges and Lessons Learned
Porting a React app to Kotlin presented several challenges, which Dave candidly shared. One significant obstacle was managing state in a React application without direct access to game state from child components. To address this, Dave passed event handlers from parent to child components, a common React pattern, but implemented them using Kotlin’s type-safe constructs. This required defining interfaces for event handlers and overriding functions to update state asynchronously, highlighting the need for careful design to maintain React’s unidirectional data flow.
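The pattern described here is language-neutral: the parent owns the state, and children receive only typed handlers, preserving unidirectional data flow. A minimal Python sketch of the idea follows (this is not Dave's Kotlin code; the handler names deal, hit, and stay echo his game demo):

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class GameState:
    cards: list = field(default_factory=list)
    stayed: bool = False

class Parent:
    """Owns the state; children never touch it directly."""
    def __init__(self):
        self.state = GameState()

    # Handlers passed down to child components.
    def deal(self): self.state.cards.append("card")
    def hit(self): self.state.cards.append("card")
    def stay(self): self.state.stayed = True

@dataclass
class Child:
    """A child component: it can trigger state changes only via handlers."""
    on_hit: Callable[[], None]
    on_stay: Callable[[], None]
    def click_hit(self): self.on_hit()
    def click_stay(self): self.on_stay()

parent = Parent()
child = Child(on_hit=parent.hit, on_stay=parent.stay)
child.click_hit()
child.click_stay()
```

In Kotlin the handler types would be checked at compile time, which is precisely the safety gain Dave demonstrated over plain JavaScript callbacks.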
Another challenge was the learning curve for developers new to Kotlin/JS. Dave noted that while Kotlin’s IDE support simplifies development, familiarity with React’s ecosystem and JavaScript tooling (e.g., Create React App) is necessary. His live demo encountered minor issues, such as the need to refresh the application, underscoring the importance of robust tooling integration. These lessons emphasized the value of Kotlin’s type safety and IDE support in overcoming JavaScript’s limitations, while also highlighting areas for improvement in Kotlin/JS workflows.
Implications for Web Development
The integration of Kotlin and React, as demonstrated by Dave, has significant implications for web development. By combining Kotlin’s type safety with React’s component model, developers can create robust, maintainable web applications with fewer runtime errors. The use of DSLs enhances productivity, allowing developers to write concise, expressive code that aligns with React’s declarative paradigm. This approach is particularly valuable for teams transitioning from JVM-based Kotlin (e.g., Android or server-side) to web development, as it leverages familiar syntax and tools.
For the broader ecosystem, Kotlin/JS expands Kotlin’s reach beyond traditional JVM applications, challenging JavaScript’s dominance in front-end development. The ability to compile to JavaScript while maintaining type safety positions Kotlin as a compelling alternative for building modern web applications. Dave’s emphasis on community engagement, encouraging developers to explore Kotlin/JS, suggests a growing ecosystem that could influence web development practices, particularly for projects requiring high reliability and scalability.
Conclusion
Dave Ford’s presentation at KotlinConf 2017 illuminated the potential of Kotlin/JS to transform React-based web development. By leveraging type-safe components and DSL capabilities, Kotlin offers a productive, reliable alternative to JavaScript, addressing common pain points in front-end development. Despite challenges like state management and tooling integration, the approach demonstrates significant promise for developers seeking to unify their Kotlin expertise across platforms. As Kotlin/JS matures, its impact on web development is likely to grow, fostering a more robust and developer-friendly ecosystem.
Links
[DevoxxUS2017] Eclipse Che by Tyler Jewell
At DevoxxUS2017, Tyler Jewell, CEO of Codenvy and project lead for Eclipse Che, delivered a compelling session on the shift from localhost to cloud-based development. Highlighting Eclipse Che as a next-generation IDE and workspace server, Tyler discussed how it streamlines team collaboration and agile workflows. With contributions from industry leaders like Red Hat and Microsoft, Che has rapidly gained traction. This post explores the key themes of Tyler’s presentation, focusing on the future of cloud development.
The Rise of Cloud Development
Tyler Jewell began by outlining market forces driving the adoption of cloud development, such as the need for rapid onboarding and consistent environments. He contrasted traditional localhost setups with cloud-based workflows, emphasizing how Eclipse Che enables one-click environment creation. Tyler’s insights, drawn from his role at Codenvy, highlighted Che’s ability to reduce setup time, allowing teams to focus on coding rather than configuration.
Eclipse Che’s Workspace Innovation
Delving into technical details, Tyler showcased Che’s workspace server, which supports reproducible environments through containerized runtimes. Unlike Vagrant VMs, Che workspaces offer lightweight, scalable solutions, integrating seamlessly with Docker. He demonstrated how Che’s architecture supports distributed teams, enabling collaboration across geographies. Tyler’s live demo illustrated creating and managing workspaces, underscoring Che’s role in modernizing development pipelines.
Community Contributions and Roadmap
Tyler emphasized the vibrant Eclipse Che community, with nearly 100 contributors from companies like IBM and Samsung. He discussed ongoing efforts to enhance language server integration, citing the Language Server Protocol’s potential for dynamic tool installation. Tyler shared Che’s roadmap, focusing on distributed workspaces and team-centric features, inviting developers to contribute to its open-source ecosystem.
Balancing IT Control and Developer Freedom
Concluding, Tyler addressed the tension between IT control and developer autonomy, noting how Che balances root access with governance. He highlighted its integration with agile methodologies, enabling faster iterations and improved collaboration. Tyler’s vision for Che, rooted in his experience at Toba Capital, positioned it as a transformative platform for cloud-native development, encouraging attendees to explore its capabilities.
Links:
[DevoxxFR] How to be a Tech Lead in an XXL Pizza Team Without Drowning
The role of a Tech Lead is multifaceted, requiring a blend of technical expertise, mentorship, and facilitation skills. However, these responsibilities become significantly more challenging when leading a large team, humorously dubbed an “XXL pizza team,” potentially comprising fifteen or more individuals, with a substantial number of developers. Damien Beaufils shared his valuable one-year retrospective on navigating this complex role within such a large and diverse team, offering practical insights on how to effectively lead, continue contributing technically, and avoid being overwhelmed.
Damien’s experience was rooted in leading a team working on a public-facing website, notable for its heterogeneity. The team was mixed in terms of skill sets, gender, and composition (combining client and vendor personnel), adding layers of complexity to the leadership challenge.
Balancing Technical Contribution and Leadership
A key tension for many Tech Leads is balancing hands-on coding with leadership duties. Damien addressed this directly, emphasizing that while staying connected to the code is important for credibility and understanding, the primary focus must shift towards enabling the team. He detailed practices put in place to foster collective ownership of the codebase and enhance overall product quality. These included encouraging pair programming, implementing robust code review processes, and establishing clear coding standards.
The goal was to distribute technical knowledge and responsibility across the team rather than concentrating it solely with the Tech Lead. By empowering team members to take ownership and contribute to quality initiatives, Damien found that the team’s overall capability and autonomy increased, allowing him to focus more on strategic technical guidance and facilitation.
Fostering Learning, Progression, and Autonomy
Damien highlighted several successful strategies employed to promote learning, progression, and autonomy within the XXL team. These successes were not achieved by acting as a “super-hero” dictating solutions but through deliberate efforts to facilitate growth. Initiatives such as organizing internal workshops, encouraging knowledge sharing sessions, and providing opportunities for developers to explore new technologies contributed to a culture of continuous learning.
He stressed the importance of the Tech Lead acting as a coach, guiding individuals and the team towards self-improvement and problem-solving. By fostering an environment where team members felt empowered to make technical decisions and learn from both successes and failures, Damien helped build a more resilient and autonomous team. This shift from relying on a single point of technical expertise to distributing knowledge and capability was crucial for managing the scale and diversity of the team effectively.
Challenges and Lessons Learned
Damien was also candid about the problems encountered and the strategies that proved less effective. Leading a large, mixed team inevitably presents communication challenges, potential conflicts, and the difficulty of ensuring consistent application of standards. He discussed the importance of clear communication channels, active listening, and addressing issues proactively.
One crucial lesson learned was the need for clearly defined, measurable indicators to track progress in areas like code quality, team efficiency, and technical debt reduction. Without objective metrics, it’s challenging to assess the effectiveness of implemented practices and demonstrate improvement. Damien concluded that while there’s no magic formula for leading an XXL team, a pragmatic approach focused on empowering the team, fostering a culture of continuous improvement, and using data to inform decisions is essential for success without becoming overwhelmed.
Links:
- Damien Beaufils’s Twitter
- Damien Beaufils’s LinkedIn
- Video URL: https://www.youtube.com/watch?v=eEUfsjYj3rw
Hashtags: #TechLead #TeamManagement #SoftwareDevelopment #Leadership #DevOps #Agile #DamienBeaufils
[ScalaDaysNewYork2016] Nightmare Before Best Practices: Lessons from Failure
At Scala Days New York 2016, José Castro, a software engineer at Codacy, delivered a riveting presentation that diverged from the typical conference narrative. Instead of showcasing success stories, José shared cautionary tales of software development mishaps, emphasizing the critical importance of adhering to best practices to prevent costly errors. Through vivid anecdotes, he illustrated how neglecting simple procedures can lead to significant financial and operational setbacks, offering valuable lessons for developers.
The Costly Oversight in Payment Systems
José Castro began with a chilling account of a website launch that initially seemed successful but resulted in a €180,000 loss. The development team had integrated a shopping cart with a bank’s payment system, but for three weeks, no customer payments were processed. José recounted how a developer’s personal purchase revealed that the system was authorizing transactions without completing charges, a flaw unnoticed due to inadequate testing. The bank’s policy allowed only one week to finalize charges, rendering earlier transactions uncollectible. This oversight, José emphasized, could have been prevented with rigorous integration testing and automated checks to ensure payment flows were correctly implemented.
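The failure mode José described, authorizations that are never captured, is exactly what an automated integration check can catch. Here is a hedged sketch; the gateway interface is invented for illustration, not a real bank API:

```python
class FakeGateway:
    """Stand-in for a bank API: authorize reserves funds, capture collects them."""
    def __init__(self):
        self.authorized, self.captured = set(), set()

    def authorize(self, order_id: str) -> str:
        self.authorized.add(order_id)
        return order_id  # authorization reference

    def capture(self, ref: str) -> None:
        if ref not in self.authorized:
            raise ValueError("capture without authorization")
        self.captured.add(ref)

def checkout(gateway, order_id: str, complete_charge: bool = True) -> None:
    ref = gateway.authorize(order_id)
    if complete_charge:  # the step that was silently missing in the incident
        gateway.capture(ref)

def test_payment_is_captured():
    gw = FakeGateway()
    checkout(gw, "order-1")
    # The assertion that would have caught the bug on day one:
    assert gw.captured == gw.authorized

test_payment_is_captured()
```

A test like this, run on every build, would have flagged the authorize-without-capture flow within hours rather than three weeks.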
Deployment Disasters and Human Error
Another tale José shared involved a deployment error that brought down a critical system for 12 hours. A developer, tasked with updating a customer-facing application, accidentally deployed to the production environment instead of staging, overwriting essential configurations. The absence of proper deployment protocols and environment safeguards exacerbated the issue, leading to significant downtime. José highlighted the need for automated deployment pipelines and environment-specific configurations to prevent such human errors, ensuring that production systems remain insulated from untested changes.
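A simple guard in a deployment entry point prevents exactly this class of mistake. The sketch below is an invented interface, not the tooling from the story; it shows the principle of requiring explicit confirmation for protected environments:

```python
PROTECTED = {"production"}

def deploy(target: str, confirmed: bool = False) -> str:
    """Refuse to touch a protected environment without explicit confirmation."""
    if target in PROTECTED and not confirmed:
        raise RuntimeError(
            f"refusing to deploy to '{target}' without explicit confirmation")
    return f"deployed to {target}"

# Staging deploys proceed normally; production requires the extra step.
deploy("staging")
```

In practice the same idea lives in a pipeline: the production stage simply cannot be reached without a deliberate approval gate.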
The Perils of Inadequate Documentation
José also recounted a scenario where insufficient documentation led to a prolonged outage in a payment processing system. A critical configuration change was made without updating the documentation, leaving the team unable to troubleshoot when the system failed. This lack of clarity delayed recovery, costing the company valuable time and revenue. José advocated for documentation-driven development, where comprehensive records of system configurations and procedures are maintained, enabling quick resolution of issues and reducing dependency on individual knowledge.
Fostering a Healthy Code Review Culture
In addressing code review challenges, José discussed the emotional barriers developers face when receiving feedback. He shared an example of a team member who successfully separated personal ego from code quality, embracing constructive criticism. To mitigate conflicts, José recommended automated code review tools like Codacy, which provide objective feedback, reducing interpersonal tension. By automating routine checks, teams can focus on higher-level implementation discussions, fostering a collaborative environment and improving code quality without bruising egos.
Links:
[DevoxxUS2017] What Developers Should Know About Design by Erwin de Gier
At DevoxxUS2017, Erwin de Gier, a Software Architect at Sogeti, shared practical insights into design principles for developers, emphasizing their role in enhancing communication and product appeal. With a background in open-source technology and agile methodologies, Erwin highlighted how developers can make informed design decisions when designers are unavailable. His session, rich with actionable advice, focused on proportions, composition, and color, empowering developers to create visually appealing interfaces. This post explores the core themes of Erwin’s presentation, offering guidance for developers navigating design challenges.
Mastering Proportions and Composition
Erwin de Gier opened by addressing the importance of proportions in design, particularly when developers must create features like forms or buttons without a designer’s input. He advocated using fixed proportions, such as the golden ratio, to create balanced layouts. Erwin demonstrated how to structure interfaces using proportional boxes, ensuring visual harmony. His practical examples, drawn from his experience at Sogeti, illustrated how consistent proportions enhance user experience, making interfaces intuitive and aesthetically pleasing.
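The golden-ratio advice can be applied mechanically when sizing layout regions. A small sketch (my illustration of the principle, not code from the talk) splits a container width into a major and minor region in golden proportion:

```python
GOLDEN_RATIO = (1 + 5 ** 0.5) / 2  # ~1.618

def golden_split(width: float):
    """Split a width into a major and minor region in golden proportion."""
    major = width / GOLDEN_RATIO
    return major, width - major

# A 960px layout: main content column vs. sidebar.
content, sidebar = golden_split(960)
```

Since 1/phi = phi - 1, the major-to-minor ratio of the two regions is itself the golden ratio, which is what gives the layout its balanced feel.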
Strategic Use of Color and Typography
Transitioning to color and typography, Erwin emphasized consistency as a cornerstone of effective design. He recommended limiting color palettes to one or two primary colors, complemented by neutral tones like gray, white, or black, to maintain brand recognition. Using a brand color quiz, Erwin illustrated how colors like WhatsApp’s green shape user perception. For typography, he advised using proven font combinations, such as serif and sans-serif pairs, with a minimum size of 16 points for web readability. These principles, he noted, ensure designs remain accessible and professional.
Links:
[DevoxxUS2017] The Hardest Part of Microservices: Your Data by Christian Posta
At DevoxxUS2017, Christian Posta, a Principal Middleware Specialist at Red Hat, delivered an insightful presentation on the complexities of managing data in microservices architectures. Drawing from his extensive experience with distributed systems and open-source projects like Apache Kafka and Apache Camel, Christian explored how Domain-Driven Design (DDD) helps address data challenges. His talk, inspired by his blog post on ceposta Technology Blog, emphasized the importance of defining clear boundaries and leveraging event-driven technologies to achieve scalable, autonomous systems. This post delves into the key themes of Christian’s session, offering a comprehensive look at navigating data in microservices.
Understanding the Domain with DDD
Christian Posta began by addressing the critical need to understand the business domain when building microservices. He highlighted how DDD provides a framework for modeling complex domains by defining bounded contexts, entities, and aggregates. Using the example of a “book,” Christian illustrated how context shapes data definitions, such as distinguishing between a book as a single title versus multiple copies in a bookstore. This clarity, he argued, is essential for enterprises, where domains like insurance or finance are far more intricate than those of internet giants like Netflix. By grounding microservices in DDD, developers can create explicit boundaries that align with business needs, reducing ambiguity and fostering autonomy.
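Christian's "book" example can be made concrete: the same word maps to different models in different bounded contexts, sharing only an explicit identity. A minimal Python sketch (the class and field names are illustrative, not from the talk):

```python
from dataclasses import dataclass

# Catalog context: a "book" is a title that exists once.
@dataclass(frozen=True)
class CatalogBook:
    isbn: str
    title: str
    author: str

# Inventory context: a "book" is a physical copy with its own identity.
@dataclass
class InventoryCopy:
    isbn: str     # identity shared across contexts
    copy_id: int  # meaningful only inside the inventory context
    shelf: str

catalog = CatalogBook("978-0134685991", "Effective Java", "Joshua Bloch")
copies = [InventoryCopy(catalog.isbn, i, "A3") for i in range(3)]
```

Keeping the two models separate, linked only by the ISBN, is what lets each context evolve independently, which is the autonomy Christian argued microservices need.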
Defining Transactional Boundaries
Transitioning to transactional boundaries, Christian emphasized minimizing the scope of atomic operations to enhance scalability. He critiqued the traditional reliance on single, ACID-compliant databases, which often leads to brittle systems when applied to distributed architectures. Instead, he advocated for identifying the smallest units of business invariants, such as a single booking in a travel system, and managing them within bounded contexts. Christian’s insights, drawn from real-world projects, underscored the pitfalls of synchronous communication and the need for explicit boundaries to avoid coordination challenges like two-phase commits across services.
Event-Driven Communication with Apache Kafka
A core focus of Christian’s talk was the role of event-driven architectures in decoupling microservices. He introduced Apache Kafka as a backbone for streaming immutable events, enabling services to communicate without tight coupling. Christian explained how Kafka’s publish-subscribe model supports scalability and fault tolerance, allowing services to process events at their own pace. He highlighted practical applications, such as using Kafka to propagate changes across bounded contexts, ensuring eventual consistency while maintaining service autonomy. His demo showcased Kafka’s integration with microservices, illustrating its power in handling distributed data.
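Running Kafka requires a broker, but the decoupling Christian described, producers appending immutable events that consumer groups process at their own pace via independent offsets, can be sketched with an in-memory log. This is a simplified stand-in for the pattern, not the Kafka API:

```python
from collections import defaultdict

class EventLog:
    """An append-only topic log; consumer groups track their own offsets."""
    def __init__(self):
        self.topics = defaultdict(list)
        self.offsets = defaultdict(int)  # (topic, group) -> next offset

    def publish(self, topic: str, event: dict) -> None:
        self.topics[topic].append(event)  # events are immutable facts

    def poll(self, topic: str, group: str) -> list:
        """Each consumer group reads independently, at its own pace."""
        key = (topic, group)
        events = self.topics[topic][self.offsets[key]:]
        self.offsets[key] = len(self.topics[topic])
        return events

log = EventLog()
log.publish("orders", {"order_id": 1, "status": "created"})
log.publish("orders", {"order_id": 1, "status": "paid"})
billing = log.poll("orders", "billing")    # sees both events
shipping = log.poll("orders", "shipping")  # independent cursor, also both
```

The producer never knows who consumes; adding a new service is just a new group name, which is the loose coupling that makes event-driven microservices scale.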
Leveraging Debezium for Data Synchronization
Christian also explored Debezium, an open-source platform for change data capture, to address historical data synchronization. He described how Debezium’s MySQL connector captures consistent snapshots and streams binlog changes to Kafka, enabling services to access past and present data. This approach, he noted, supports use cases where services need to synchronize from a specific point, such as “data from Monday.” Christian’s practical example demonstrated Debezium’s role in maintaining data integrity across distributed systems, reinforcing its value in microservices architectures.
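The effect of change data capture can be illustrated by replaying a change stream into a local replica. The event shape below is a simplification for illustration, not Debezium's actual message format, which also carries schema and source metadata:

```python
def apply_change(replica: dict, change: dict) -> None:
    """Apply one change event (insert/update/delete) to a key-value replica."""
    op, key = change["op"], change["key"]
    if op in ("insert", "update"):
        replica[key] = change["row"]
    elif op == "delete":
        replica.pop(key, None)

def replay(changes: list) -> dict:
    """Rebuild local state by replaying the captured change stream in order."""
    replica = {}
    for change in changes:
        apply_change(replica, change)
    return replica

stream = [
    {"op": "insert", "key": 1, "row": {"name": "alice"}},
    {"op": "update", "key": 1, "row": {"name": "alicia"}},
    {"op": "delete", "key": 1},
]
assert replay(stream[:2]) == {1: {"name": "alicia"}}
assert replay(stream) == {}
```

In the real system, Debezium supplies an initial consistent snapshot plus the binlog tail, so a service joining later can reconstruct state from a chosen point, the "data from Monday" scenario Christian mentioned.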
Integrating Apache Camel for Robust Connectivity
Delving into connectivity, Christian showcased Apache Camel as a versatile integration framework for microservices. He explained how Camel facilitates communication between services by providing routing and transformation capabilities, complementing Kafka’s event streaming. Christian’s live demo illustrated Camel’s role in orchestrating data flows, ensuring seamless integration across heterogeneous systems. His experience as a committer on Camel underscored its reliability in building resilient microservices, particularly for enterprises transitioning from monolithic architectures.
Practical Implementation and Lessons Learned
Concluding, Christian presented a working example that tied together DDD, Kafka, Camel, and Debezium, demonstrating a cohesive microservices system. He emphasized the importance of explicit identity management, such as handling foreign keys across services, to maintain data integrity. Christian’s lessons, drawn from his work at Red Hat, highlighted the need for collaboration between developers and business stakeholders to refine domain models. His call to action encouraged attendees to explore these technologies and contribute to their open-source communities, fostering innovation in distributed systems.
Links:
[DotSecurity2017] Collective Authorities: Transparency & Decentralized Trust
In digital infrastructure, a single authority is a single point of failure: it can be compromised or coerced. Philipp Jovanovic, a cryptographer and postdoctoral researcher at EPFL's Decentralized and Distributed Systems Lab, made the case at dotSecurity 2017 for collective authorities, or "cothorities": cooperative clusters of independent servers that share authority, so that no single operator must be trusted. Drawing on his expertise in provable security and distributed systems, Philipp showed how such collectives can safeguard services from time synchronization to software distribution, providing proactive transparency that is more robust and accountable than centralized alternatives.
Philipp began with how ubiquitous authorities are: time servers calibrate clocks, DNS resolvers map names, and certificate authorities vouch for identities. Each is pivotal yet fragile, and a breach cascades: a compromised time source can break certificate validity checks, and a rogue DNS resolver can redirect domains to attackers. Traditional transparency, such as after-the-fact audits, is reactive and can itself be suppressed or subverted. Cothorities counter this by distributing authority across many collaborating nodes, each holding a share of the power, that use consensus protocols to certify their collective conduct.
At a cothority's core lies collective signing: a threshold scheme in which at least k of n nodes must agree before a statement is signed, so no single node can act unilaterally. Philipp discussed protocols such as ByzCoin, which blends proof-of-work with practical Byzantine fault tolerance so that blocks carry collective endorsements, resisting 51% attacks. Applications abound: bias-resistant randomness beacons built from shared secrets and verifiable delay functions, and decentralized software updates in which a release is co-signed only after independent nodes verify the build. EPFL's CoSi protocol aggregates signatures in a tree, scaling collective signing to large numbers of nodes without requiring tight synchrony.
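The k-of-n rule at the heart of collective signing can be simulated without real cryptography. In the toy sketch below, "signatures" are HMACs under per-node keys; this is illustrative only, as real cothorities use aggregate Schnorr-style signatures as in CoSi:

```python
import hashlib
import hmac

class Node:
    def __init__(self, name: str, key: bytes):
        self.name, self.key = name, key

    def sign(self, statement: bytes) -> bytes:
        return hmac.new(self.key, statement, hashlib.sha256).digest()

    def verify(self, statement: bytes, sig: bytes) -> bool:
        return hmac.compare_digest(self.sign(statement), sig)

def collective_accept(nodes, statement: bytes, sigs: dict, k: int) -> bool:
    """Accept only if at least k of the n nodes produced a valid signature."""
    valid = sum(1 for n in nodes
                if n.name in sigs and n.verify(statement, sigs[n.name]))
    return valid >= k

nodes = [Node(f"node{i}", bytes([i]) * 16) for i in range(5)]
stmt = b"release v1.2.3 sha256=abc123"
# Three of five nodes endorse the release; with k=3 that suffices.
sigs = {n.name: n.sign(stmt) for n in nodes[:3]}
assert collective_accept(nodes, stmt, sigs, k=3)
assert not collective_accept(nodes, stmt, sigs, k=4)
```

An attacker must compromise k independent nodes rather than one server, which is the whole point of distributing the authority.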
This framework strengthens software distribution: binaries carry blockchain-like collective attestations that users can check with a verifier tool. Philipp's prototype, Update Cothority, has developers submit release candidates, which nodes independently build and attest to, certifying that the published binary matches its source. The system scales well: latencies grow logarithmically with the number of nodes, settlement completes in under a minute, and throughput far exceeds Bitcoin's.
The cothority creed: decentralization pays a dividend in transparency. Authorities are strengthened, and trust is no longer concentrated in any single party.
The Fragility of Single Authorities
Philipp walked through the risks: tampering with a time source breaks TLS, and a compromised DNS resolver misdirects users. Audits alone are reactive and can be suppressed; cothorities answer with threshold agreement among independent nodes.
Protocols and Applications
ByzCoin combines proof-of-work with PBFT; CoSi aggregates signatures at scale; secret sharing yields bias-resistant randomness beacons; and software updates ship only once collectively co-signed.