[DevoxxPL2022] Challenges Running Planet-Wide Computer: Efficiency • Jacek Bzdak, Beata Strack
Jacek Bzdak and Beata Strack, software engineers at Google Poland, delivered an engaging session at Devoxx Poland 2022, exploring the intricacies of optimizing Google’s planet-scale computing infrastructure. Their talk focused on achieving efficiency in a distributed system spanning global data centers, emphasizing resource utilization, auto-scaling, and operational strategies. By sharing insights from Google’s internal cloud and Autopilot system, Jacek and Beata provided a blueprint for enhancing service performance while navigating the complexities of large-scale computing.
Defining Efficiency in a Global Fleet
Beata opened by framing Google’s data centers as a singular “planet-wide computer,” where efficiency translates to minimizing operational costs—servers, CPU, memory, data centers, and electricity. Key metrics like fleet-wide utilization, CPU/RAM allocation, and growth rate serve as proxies for these costs, though they are imperfect, often masking quality issues like inflated memory usage. Beata stressed that efficiency begins at the service level, where individual jobs must optimize resource consumption, and extends to the fleet through an ecosystem that maximizes resource sharing. This dual approach ensures that savings at the micro level scale globally, a principle applicable even to smaller organizations.
Auto-Scaling: Balancing Utilization and Reliability
Jacek, a member of Google’s Autopilot team, delved into auto-scaling, a critical mechanism for achieving high utilization without compromising reliability. Autopilot’s vertical scaling adjusts resource limits (CPU/memory) for fixed replicas, while horizontal scaling modifies replica counts. Jacek presented data from an Autopilot paper, showing that auto-scaled services maintain memory slack below 20% for median cases, compared to over 60% for manually managed services. Crucially, automation reduces outage risks by dynamically adjusting limits, as demonstrated in a real-world case where Autopilot preempted a memory-induced crash. However, auto-scaling introduces complexity, particularly feedback loops, where overzealous caching or load shedding can destabilize resource allocation, requiring careful integration with application-specific metrics.
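To make the vertical-scaling idea concrete, here is a minimal sketch in the spirit of (but not taken from) the Autopilot paper: the recommended limit is a high percentile of a sliding window of recent usage samples, multiplied by a safety margin. The class name, the sample window, the percentile, and the 1.15 margin are all illustrative assumptions, not Google's values.

```java
import java.util.Arrays;

public class LimitRecommender {

    // Recommends a resource limit from a window of usage samples (e.g. MiB):
    // take the requested percentile of the window and add a safety margin.
    static double recommendLimit(double[] usageSamples, double percentile, double safetyMargin) {
        double[] sorted = usageSamples.clone();
        Arrays.sort(sorted);
        // Index of the requested percentile in the sorted window.
        int idx = (int) Math.ceil(percentile / 100.0 * sorted.length) - 1;
        idx = Math.max(0, Math.min(idx, sorted.length - 1));
        return sorted[idx] * safetyMargin;
    }

    public static void main(String[] args) {
        double[] window = {512, 540, 530, 900, 520, 515, 525, 535, 545, 550};
        // 95th percentile of the window, with 15% headroom.
        System.out.printf("recommended limit: %.0f MiB%n", recommendLimit(window, 95, 1.15));
    }
}
```

Note how a single spike (900) pulls the 95th-percentile recommendation up; this conservatism is what trades a little slack for outage avoidance, while manual limits tend to carry far more slack permanently.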
Java-Specific Challenges in Auto-Scaling
The talk transitioned to language-specific hurdles, with Jacek highlighting Java’s unique challenges in auto-scaling environments. Just-in-Time (JIT) compilation during application startup spikes CPU usage, complicating horizontal scaling decisions. Memory management poses further issues, as Java’s heap size is static, and out-of-memory errors may be masked by garbage collection (GC) thrashing, where excessive CPU is devoted to GC rather than request handling. To address this, Google sets static heap sizes and auto-scales non-heap memory, though Jacek envisioned a future where Java aligns with other languages, eliminating heap-specific configurations. These insights underscore the need for language-aware auto-scaling strategies in heterogeneous environments.
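Outside Google, the static-heap approach can be approximated by pinning the heap and leaving the rest of the container budget for non-heap memory. The flag values below are illustrative assumptions, not Google's settings:

```
# Pin the heap so GC behavior is predictable; the autoscaler then only
# needs to size the non-heap remainder of the container.
java -Xms2g -Xmx2g \
     -XX:MaxMetaspaceSize=256m \
     -XX:+ExitOnOutOfMemoryError \
     -jar app.jar
```

`-XX:+ExitOnOutOfMemoryError` turns heap exhaustion into a visible crash rather than prolonged GC thrashing, which is exactly the masked failure mode Jacek described.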
Operational Strategies for Resource Reclamation
Beata concluded by discussing operational techniques like overcommit and workload colocation to reclaim unused resources. Overcommit leverages the low probability of simultaneous resource spikes across unrelated services, allowing Google to pack more workloads onto machines. Colocating high-priority serving jobs with lower-priority batch workloads enables resource reclamation, with batch tasks evicted when serving jobs demand capacity. A 2015 experiment demonstrated significant machine savings through colocation, a concept influencing Kubernetes’ design. These strategies, combined with auto-scaling, create a robust framework for efficiency, though they demand rigorous isolation to prevent interference between workloads.
Links:
[NodeCongress2021] From 1 to 101 Lambda Functions in Production: Evolving a Serverless Architecture – Slobodan Stojanovic
Scaling a serverless system is a story of constant evolution, and Slobodan Stojanovic’s account of Vacation Tracker—from a single Lambda function to more than a hundred in production—shows how to manage that growth deliberately. As co-founder and CTO at Cloud Horizon, Slobodan recounts bootstrapping a PTO-tracking tool for Slack and evolving it through GraphQL-based redesigns to serve millions of requests, all while keeping the total AWS bill under $2,000 since 2018.
The story begins in 2017 with a hackathon project and a landing page that attracted over 100 waitlist signups. The 2018 MVP was a single Lambda that parsed Slack commands and persisted data to DynamoDB; it grew via the Serverless Framework, then Claudia.js for API orchestration.
Navigating Architectural Metamorphoses
Hexagonal architecture provided the decoupling: ports and adapters insulate the core business logic, making unit tests easy to mock. The early monolith gave way to CQRS, with separate read and write Lambdas, improving scalability. GraphQL unified the API surface: Apollo resolvers dispatch requests to specialized handlers, while DynamoDB queries aggregate the results.
Migrations followed the same pattern: moving from MongoDB to DynamoDB was an interface swap, with data shuttled across offline. For integration tests, LocalStack emulates AWS services; CI spins up ephemeral tables and asserts state via before/after hooks.
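The ports-and-adapters idea above can be sketched as follows; all names are illustrative, and the in-memory adapter stands in for the DynamoDB (or MongoDB) adapter that production would plug in:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// The "port": the core depends only on this interface, so swapping
// MongoDB for DynamoDB, or a fake in tests, is an adapter change.
interface RequestRepository {
    void save(String requestId, String status);
    Optional<String> statusOf(String requestId);
}

// In-memory adapter: what a unit test plugs in instead of a real database.
class InMemoryRequestRepository implements RequestRepository {
    private final Map<String, String> store = new HashMap<>();
    public void save(String requestId, String status) { store.put(requestId, status); }
    public Optional<String> statusOf(String requestId) { return Optional.ofNullable(store.get(requestId)); }
}

// Core logic (inside the hexagon) knows nothing about AWS or any database.
class VacationService {
    private final RequestRepository repo;
    VacationService(RequestRepository repo) { this.repo = repo; }
    void requestVacation(String id) { repo.save(id, "PENDING"); }
    String status(String id) { return repo.statusOf(id).orElse("UNKNOWN"); }
}

public class HexagonalSketch {
    public static void main(String[] args) {
        VacationService service = new VacationService(new InMemoryRequestRepository());
        service.requestVacation("req-1");
        System.out.println(service.status("req-1")); // PENDING
    }
}
```

A database migration then touches only the adapter class; the core service and its tests are untouched.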
Monitoring, Costs, and Team Triumphs
Datadog dashboards query errs; alerts ping anomalies. Bugs bite—Dynamo scans balloon bills to $300/month, fixed via queries slashing RPS. Onboarding thrives: hexagonal clarity, workshops demystify.
Slobodan’s axioms: evolve with scale, hexagonal/CQRS affinity, integration rigor, vigilant oversight. Free webinars beckon, perpetuating serverless lore.
Links:
[SpringIO2022] Ahead Of Time and Native in Spring Boot 3.0
At Spring I/O 2022 in Barcelona, Brian Clozel and Stéphane Nicoll, both engineers at VMware, delivered a comprehensive session on Ahead Of Time (AOT) processing and native compilation in Spring Boot 3.0 and Spring Framework 6.0. Their talk explored the integration of GraalVM native capabilities, detailing the AOT engine’s design, its use by libraries, and practical steps for developers. Through a live demo, they showcased how to transform a Spring application into a native binary, highlighting performance gains and configuration challenges.
GraalVM Native Compilation: Core Concepts
Brian opened by introducing GraalVM, a versatile JVM supporting multiple languages and optimized Just-In-Time (JIT) compilation. The talk focused on its native compilation feature, which transforms Java applications into standalone binaries for specific CPU architectures. This process involves static analysis at build time, processing all classes on a fixed classpath, and determining reachable code. Benefits include memory efficiency (megabytes instead of gigabytes), millisecond startup times, and suitability for CLI tools, serverless functions, and high-density container deployments.
However, challenges exist. Static analysis may require additional reachability metadata for reflection or resources, as GraalVM cannot always infer runtime behavior. Brian demonstrated a case where reflection-based method invocation fails without metadata, as the native image excludes unreachable code. Debugging is less straightforward than with traditional JVMs, and Java agents, like OpenTelemetry, are unsupported. The speakers emphasized that AOT aims to bridge these gaps, making native compilation accessible for Spring applications.
Spring’s AOT Engine: Design and Integration
Stéphane detailed the AOT engine, a core component of Spring Framework 6.0-M4 and Spring Boot 3.0-M3, designed to preprocess application configurations at build time. Unlike annotation processors, it operates post-compilation, analyzing the bean factory and generating Java code to replace dynamic configuration parsing. This code, viewable in modern IDEs like IntelliJ, mimics hand-written configurations but is automatically generated, preserving package visibility and including Javadoc for clarity.
The engine supports two approaches: contributing reachability metadata for reflection or resources, or generating code to simplify static analysis. For example, a demo CLI application used Spring’s RuntimeHints API to register reflection for a SimpleHelloService class and include a classpath resource. The native build tools Gradle plugin, provided by the GraalVM team, integrates with Spring Boot’s plugin to trigger AOT processing and native compilation. Stéphane showed how the generated binary achieved rapid startup and low memory usage, with configuration classes handled automatically by the AOT engine.
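The hint registration described above can be sketched as follows. This is a non-runnable sketch assuming Spring Framework 6 on the classpath and the demo's SimpleHelloService class; the resource name is a hypothetical example, and details may differ slightly from the 6.0-M4 milestone API:

```java
import org.springframework.aot.hint.MemberCategory;
import org.springframework.aot.hint.RuntimeHints;
import org.springframework.aot.hint.RuntimeHintsRegistrar;

public class HelloRuntimeHints implements RuntimeHintsRegistrar {
    @Override
    public void registerHints(RuntimeHints hints, ClassLoader classLoader) {
        // Allow reflective construction and invocation of the demo service.
        hints.reflection().registerType(SimpleHelloService.class,
                MemberCategory.INVOKE_DECLARED_CONSTRUCTORS,
                MemberCategory.INVOKE_DECLARED_METHODS);
        // Bundle a classpath resource into the native image.
        hints.resources().registerPattern("hello.txt");
    }
}
```

The registrar is picked up by annotating a configuration class with @ImportRuntimeHints(HelloRuntimeHints.class).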
Developer Cookbook: Making Applications Native-Ready
The speakers introduced a developer cookbook to guide Spring users toward native compatibility. The first step is running the application in AOT mode on the JVM, validating the engine’s understanding of the configuration without native compilation. This mode pre-processes the bean factory, reducing startup time and exposing issues early. Next, developers should reuse existing test suites, adapting them for AOT using generated sources and JUnit support. This identifies missing metadata, such as reflection or resource hints.
For third-party libraries or custom code, developers can contribute hints via the RuntimeHints API or validate them using a forthcoming Java agent. The GraalVM team is developing a reachability metadata repository, where the Spring team is contributing hints for popular libraries, reducing manual configuration. For advanced cases, developers can hook into the AOT engine to generate custom code, supported by a test compiler API to verify outcomes. Brian emphasized balancing hints and code generation, favoring simplicity unless performance demands otherwise.
Future Directions and Community Collaboration
The talk concluded with a roadmap for Spring Boot 3.0 and Spring Framework 6.0, targeting general availability by late 2022. The current milestones provide robust AOT infrastructure, with future releases expanding support for Spring libraries. The speakers highlighted collaboration with the GraalVM team to simplify native adoption and plans to align with Project Leyden for JVM optimizations. They encouraged feedback via the Spring I/O app and invited developers to explore the demo repository, which includes Maven and Gradle configurations.
This session equipped developers with tools to leverage AOT and native compilation, unlocking new use cases like serverless and high-density deployments while maintaining Spring’s developer-friendly ethos.
Links:
[DevoxxPL2022] How We Migrate Customers and Internal Teams to Kubernetes • Piotr Bochyński
At Devoxx Poland 2022, Piotr Bochyński, a seasoned cloud native expert at SAP, shared a compelling narrative on transitioning customers and internal teams from a Cloud Foundry-based platform to Kubernetes. His presentation illuminated the strategic imperatives, technical challenges, and practical solutions that defined SAP’s journey toward a multi-cloud Kubernetes ecosystem. By leveraging open-source projects like Kyma and Gardener, Piotr’s team addressed the limitations of their legacy platform, fostering developer productivity and operational scalability. His insights offer valuable lessons for organizations contemplating a similar migration.
Understanding Platform as a Service
Piotr began by contextualizing Platform as a Service (PaaS), a model that abstracts infrastructure complexities, allowing developers to focus on application development. Unlike Infrastructure as a Service (IaaS), which provides raw virtual machines, PaaS delivers managed runtimes, middleware, and automation, accelerating time-to-market. However, this convenience comes with trade-offs, such as reduced control and potential vendor lock-in, often tied to opinionated frameworks like the 12-factor application methodology. Piotr highlighted SAP’s initial adoption of Cloud Foundry, an open-source PaaS, to avoid vendor dependency while meeting multi-cloud requirements driven by legal and business needs, particularly in sectors like banking. Yet, Cloud Foundry’s constraints, such as single HTTP port exposure and reliance on outdated technologies like BOSH, prompted SAP to explore Kubernetes as a more flexible alternative.
Kubernetes: A Platform for Platforms
Kubernetes, as Piotr elucidated, is not a traditional PaaS but a container orchestration framework that serves as a foundation for building custom platforms. Its declarative API and extensibility distinguish it from predecessors, enabling consistent management of diverse resources like deployments, namespaces, and custom objects. Piotr illustrated this with the thermostat analogy: developers declare a desired state (e.g., 22 degrees), and Kubernetes controllers reconcile the actual state to match it. This pattern, applied uniformly across resources, empowers developers to extend Kubernetes with custom controllers, such as a hypothetical thermostat resource. The Kyma project, an open-source initiative led by SAP, builds on this extensibility, providing opinionated building blocks like Istio-based API gateways, NATS eventing, and serverless functions to bridge the gap between raw Kubernetes and a developer-friendly PaaS.
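Piotr's thermostat analogy maps directly onto a custom resource: the user declares desired state in spec, and a controller reconciles toward it, reporting progress in status. The manifest below is a hypothetical illustration of such a resource, not a real Kyma or Kubernetes API:

```yaml
# Hypothetical custom resource following the Kubernetes declarative pattern.
apiVersion: example.dev/v1
kind: Thermostat
metadata:
  name: conference-room
spec:
  desiredTemperature: 22   # declared desired state, in degrees
status:
  observedTemperature: 19  # written back by the controller's reconcile loop
```

A custom controller then runs the same reconcile loop Piotr described for built-in resources: repeatedly comparing status against spec and acting on the difference.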
Overcoming Migration Challenges
The migration to Kubernetes presented multifaceted challenges, from technical complexity to cultural adoption. Piotr emphasized the steep learning curve associated with Kubernetes’ vast resource set, compounded by additional components like Prometheus and Istio. To mitigate this, SAP employed Kyma to abstract complexities, offering simplified resources like API rules that encapsulate Istio configurations for secure service exposure. Another hurdle was ensuring multi-cloud compatibility. SAP’s Gardener project, a managed Kubernetes solution, addressed this by providing a consistent, Kubernetes-compliant layer across providers like AWS, Azure, and Google Cloud. Piotr also discussed operational scalability, managing thousands of clusters for hundreds of teams. By applying the Kubernetes controller pattern, SAP automated cluster provisioning, upgrades, and security patching, reducing manual intervention and ensuring reliability.
Lessons from the Journey
Reflecting on the migration, Piotr candidly shared missteps that shaped SAP’s approach. Early attempts to shield users from Kubernetes’ complexity by mimicking Cloud Foundry’s API failed, as developers craved direct control over Kubernetes resources. Similarly, restricting cluster admin roles to prevent misconfigurations stifled innovation, leading SAP to grant greater flexibility. Some technology choices, like the Service Catalog project, proved inefficient, underscoring the importance of aligning with Kubernetes’ operator pattern. License changes in tools like Grafana also necessitated pivots, highlighting the need for vigilance in open-source dependencies. Piotr’s takeaways resonate broadly: Kubernetes is a long-term investment, requiring a balance of opinionated tooling and developer freedom, with automation as a cornerstone for scalability.
Links:
[SpringIO2022] Major Migrations Made Easy with OpenRewrite
Tim te Beek’s Spring I/O 2022 session introduced OpenRewrite, a powerful tool for automating large-scale Java migrations. As a Java consultant at JDriven, Tim shared his passion for updating outdated technology stacks, using OpenRewrite to streamline upgrades across frameworks, libraries, and languages. His talk, delivered on his birthday, combined a compelling narrative with a live demo, showcasing how OpenRewrite transforms tedious migrations into quick, safe operations.
The Migration Challenge: Keeping Up with Open Source
Tim opened with a decade-long perspective on Java and Spring evolution, from Spring Framework 2.5 in 2009 to Java 17 and Spring Boot 2 in 2022. Each release—Java 8’s lambdas, Spring Boot’s reduced boilerplate, JUnit 5, or Java 11’s JAX-B dependencies—introduced valuable features but required manual upgrades across multiple services. Vulnerabilities like Log4Shell further necessitate rapid migrations, often under pressure. For large organizations with thousands of services, manual updates are impractical, making automation essential.
OpenRewrite addresses this by leveraging an abstract syntax tree (AST) to perform precise, safe refactorings. Unlike simple search-and-replace, it understands code context, preserving formatting and ensuring functional integrity. Tim emphasized its ability to handle migrations like JUnit 4 to 5, Log4j to SLF4J, or Spring Boot 1 to 2, reducing technical debt in minutes.
How OpenRewrite Works: Recipes and AST Magic
OpenRewrite’s core strength lies in its recipe-based approach. Recipes are modular, reusable transformations—implemented as Java visitors—that modify the AST. Tim explained how recipes range from simple (changing imports) to complex (converting JUnit 4’s expected exceptions to JUnit 5’s assertThrows). These can be combined into modules for tasks like framework upgrades or style enforcement. The tool supports Java, Groovy, YAML, and XML, enabling changes to Maven/Gradle builds and Spring configurations.
A key differentiator is OpenRewrite’s type attribution and format preservation, ensuring changes blend seamlessly with existing code. Tim’s demo illustrated this by migrating a Spring Pet Clinic project from Spring Boot 1.5 (Java 8) to Spring Boot 2.5 (Java 17). Using Maven’s OpenRewrite plugin, he applied recipes to update dependencies, imports, annotations, and properties, completing the migration in under 15 seconds per step, with only two minor test failures requiring manual fixes.
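Activating a recipe from Maven looks roughly like the fragment below. The plugin coordinates and the JUnit4to5Migration recipe are real published artifacts, but versions are omitted here and the exact recipes Tim activated in the demo are not reproduced:

```xml
<!-- Sketch of activating an OpenRewrite recipe from Maven; versions omitted. -->
<plugin>
  <groupId>org.openrewrite.maven</groupId>
  <artifactId>rewrite-maven-plugin</artifactId>
  <configuration>
    <activeRecipes>
      <recipe>org.openrewrite.java.testing.junit5.JUnit4to5Migration</recipe>
    </activeRecipes>
  </configuration>
  <dependencies>
    <dependency>
      <groupId>org.openrewrite.recipe</groupId>
      <artifactId>rewrite-testing-frameworks</artifactId>
    </dependency>
  </dependencies>
</plugin>
```

Running mvn rewrite:run then applies the active recipes to the working tree, mirroring the per-step workflow of Tim's demo.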
Spring Boot Migrator: Enhancing OpenRewrite
Tim introduced Spring Boot Migrator, an experimental Spring project built on OpenRewrite, designed to simplify migrations to Spring Boot. Initiated by VMware Labs in 2020 and led by Fabian Krüger, it offers an opinionated API for Spring-specific migrations, such as Java EE to Spring or NetWeaver to Spring Integration. Unlike OpenRewrite’s fully automated recipes, Spring Boot Migrator provides an interactive workflow, generating HTML reports to guide developers through component identification and transformation steps.
Looking ahead, Spring Boot Migrator aims to support Spring Framework 6 and Spring Boot 3, expected in November 2022, and facilitate cloud migrations to GraalVM. Tim encouraged community contributions, noting its role in easing enterprise migrations for VMware customers.
Impact and Community: Scaling Automation
OpenRewrite’s open-source model, backed by Moderne, ensures all recipes are Apache-licensed, fostering community-driven development. Tim highlighted its use in fixing static analysis issues (e.g., Checkstyle, Sonar), enforcing code style, and contributing to open-source projects like WireMock and Apache Maven. He shared his experience migrating thousands of unit tests, urging attendees to explore OpenRewrite’s web interface (app.moderne.io) and contribute recipes.
Tim’s talk inspired developers to embrace automation, reducing migration pain and enabling focus on innovation. His enthusiasm for OpenRewrite’s potential to transform development workflows resonated strongly with the audience.
Links:
[DevoxxPL2022] Java 17 & 18: What’s New and Noteworthy • Piotr Przybył
Piotr Przybył, a seasoned software gardener at AtomicJar, captivated the audience at Devoxx Poland 2022 with a comprehensive deep dive into the new features and enhancements introduced in Java 17 and 18. His presentation, rich with technical insights and practical demonstrations, explored key updates that empower developers to write more robust, maintainable, and efficient code. Piotr’s engaging style, peppered with humor and real-world examples, provided a clear roadmap for leveraging these advancements in modern Java development.
Sealed Classes for Controlled Inheritance
One of the standout features of Java 17 is sealed classes, introduced as JEP 409. Piotr explained how sealed classes allow developers to restrict which classes or interfaces can extend or implement a given type, offering fine-grained control over inheritance. This is particularly useful for library maintainers who want to prevent unintended code reuse while allowing specific extensions. By using the sealed keyword and a permits clause, developers can define a closed set of subclasses, with options to mark them as final, sealed, or non-sealed. Piotr’s demo illustrated this with a library type hierarchy, showing how sealed classes enhance code maintainability and prevent misuse through inheritance.
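A minimal sealed hierarchy along the lines of Piotr's library example might look like this; the type names are illustrative:

```java
// A closed set of subtypes: only the permitted types may implement Shape.
sealed interface Shape permits Circle, Square, Rectangle {}

record Circle(double radius) implements Shape {}
record Square(double side) implements Shape {}
// non-sealed reopens this branch: anyone may implement Rectangle.
non-sealed interface Rectangle extends Shape {}

public class SealedDemo {
    static double area(Shape s) {
        // Because the hierarchy is closed, the compiler knows every case;
        // instanceof pattern matching (Java 16+) keeps the code terse.
        if (s instanceof Circle c) return Math.PI * c.radius() * c.radius();
        if (s instanceof Square sq) return sq.side() * sq.side();
        return 0.0; // Rectangle implementations handled elsewhere
    }

    public static void main(String[] args) {
        System.out.println(area(new Square(3))); // 9.0
    }
}
```

Attempting to declare an unpermitted implementation of Shape fails at compile time, which is exactly the misuse-through-inheritance that sealed classes prevent.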
Enhanced Encapsulation and UTF-8 by Default
Java 17’s JEP 403 strengthens encapsulation by removing illegal reflective access, a change Piotr humorously likened to “closing the gates to reflection demons.” Previously, developers could bypass encapsulation using setAccessible(true), but Java 17 enforces stricter access controls, requiring code fixes or the use of --add-opens flags for legacy systems. Additionally, Java 18’s JEP 400 sets UTF-8 as the default charset for I/O operations, resolving discrepancies across platforms. Piotr demonstrated how to handle encoding issues, advising developers to explicitly specify charsets to ensure compatibility, especially for Windows users.
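Piotr's advice on explicit charsets boils down to never relying on the platform default; passing the charset explicitly behaves identically on every platform and JDK version:

```java
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

public class CharsetDemo {
    public static void main(String[] args) {
        // "zażółć" (Polish letters outside ASCII), written as escapes here.
        String text = "za\u017c\u00f3\u0142\u0107";

        // Explicit UTF-8 round-trip: safe regardless of file.encoding.
        byte[] bytes = text.getBytes(StandardCharsets.UTF_8);
        String back = new String(bytes, StandardCharsets.UTF_8);

        System.out.println(back.equals(text));        // true
        System.out.println(Charset.defaultCharset()); // UTF-8 by default on Java 18+
    }
}
```

Before JEP 400, the same code without an explicit charset could decode differently on Windows (e.g. windows-1250) than on Linux, which is the cross-platform discrepancy the JEP removes.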
Deprecating Finalization and Introducing Simple Web Server
Java 18’s JEP 421 marks the deprecation of the finalize method for removal, signaling the end of a problematic mechanism for resource cleanup. Piotr’s demo highlighted the non-deterministic nature of finalization, advocating for try-with-resources as a modern alternative. He also showcased Java 18’s simple web server (JEP 408), a lightweight tool for serving static files during development or testing. Through a programmatic example, Piotr demonstrated how to start a server on port 9000 and dynamically modify CSS files, emphasizing its utility for quick prototyping.
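The try-with-resources alternative Piotr advocated guarantees cleanup at a known point, unlike finalize(), which may run late or never; the names below are illustrative:

```java
public class CleanupDemo {

    static class Resource implements AutoCloseable {
        private final StringBuilder log;
        Resource(StringBuilder log) { this.log = log; }
        void use() { log.append("use;"); }
        @Override public void close() { log.append("close;"); } // runs deterministically
    }

    // Returns the order of operations: close() fires as soon as the block
    // exits, even if use() had thrown.
    static String run() {
        StringBuilder log = new StringBuilder();
        try (Resource r = new Resource(log)) {
            r.use();
        }
        return log.toString();
    }

    public static void main(String[] args) {
        System.out.println(run()); // use;close;
    }
}
```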
Pattern Matching for Switch and Foreign Function API
Piotr explored Java 18’s pattern matching for switch (JEP 420), a preview feature that enhances switch statements and expressions. This feature supports null handling, guarded patterns, and type-based switching, eliminating the need for cumbersome if-else checks. His demo showed how to switch over objects, handle null cases, and use guards to refine conditions, making code more concise and readable. Additionally, Piotr introduced the Foreign Function and Memory API (JEP 419), an incubator module for safe, efficient interoperation with native code. He demonstrated allocating off-heap memory and calling C functions, highlighting the API’s thread-safety and scope-bound memory management.
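Type patterns and an explicit null case in switch look roughly as follows. Note the caveat: this was a preview in Java 18 (requiring --enable-preview), and the syntax shown here is the form that was later finalized, so details of the Java 18 preview differed slightly:

```java
public class SwitchPatterns {
    static String describe(Object o) {
        return switch (o) {
            case null      -> "nothing";  // handled explicitly, no NullPointerException
            case Integer i -> "int: " + i;
            case String s  -> "string of length " + s.length();
            default        -> "something else";
        };
    }

    public static void main(String[] args) {
        System.out.println(describe(42));   // int: 42
        System.out.println(describe(null)); // nothing
    }
}
```

The equivalent if-else chain would need a separate null check and repeated instanceof casts, which is exactly the boilerplate the feature removes.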
Random Generators and Deserialization Filters
Java 17’s JEP 356 introduces enhanced pseudo-random number generators, offering a unified interface for various random number implementations. Piotr’s demo showcased switching between generators like Random, SecureRandom, and ThreadLocalRandom, simplifying random number generation for diverse use cases. Java 17 also improves deserialization filters (JEP 415), allowing per-stream customization to enhance security against malicious data. These updates, combined with other enhancements like macOS Metal rendering and larger G1 heap regions, underscore Java’s commitment to performance and security.
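The unified interface from JEP 356 makes the algorithm a runtime choice; a brief sketch:

```java
import java.util.random.RandomGenerator;
import java.util.random.RandomGeneratorFactory;

public class RandomDemo {

    // Two generators with the same algorithm and seed produce the same stream.
    static boolean reproducible(long seed) {
        RandomGenerator a = RandomGeneratorFactory.of("L64X128MixRandom").create(seed);
        RandomGenerator b = RandomGeneratorFactory.of("L64X128MixRandom").create(seed);
        return a.nextLong() == b.nextLong();
    }

    public static void main(String[] args) {
        // Same interface regardless of the underlying algorithm.
        RandomGenerator legacy = RandomGenerator.of("Random");
        System.out.println(legacy.nextInt(100)); // a value in [0, 100)
        System.out.println(reproducible(42));    // true
    }
}
```

Because callers depend only on the RandomGenerator interface, switching from Random to a stronger or faster algorithm is a one-line change at the creation site.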
Links:
[DevoxxPL2022] Integrate Hibernate with Your Elasticsearch Database • Bartosz de Boulange
At Devoxx Poland 2022, Bartosz de Boulange, a Java developer at BGŻ BNP Paribas, Poland’s national development bank, delivered an insightful presentation on Hibernate Search, a powerful tool that seamlessly integrates traditional Object-Relational Mapping (ORM) with NoSQL databases like Elasticsearch. Bartosz’s talk focused on enabling full-text search capabilities within SQL-based applications, offering a practical solution for developers seeking to enhance search functionality without migrating entirely to a NoSQL ecosystem. Through a blend of theoretical insights and hands-on coding demonstrations, he illustrated how Hibernate Search can address complex search requirements in modern applications.
The Power of Full-Text Search
Bartosz began by addressing the challenge of implementing robust search functionality in applications backed by SQL databases. For instance, in a bookstore application, users might need to search for specific phrases within thousands of reviews. Traditional SQL queries, such as LIKE statements, are often inadequate for such tasks due to their limited ability to handle complex text analysis. Hibernate Search solves this by enabling full-text search, which includes character filtering, tokenization, and normalization. These features allow developers to remove irrelevant characters, break text into searchable tokens, and standardize data for efficient querying. Unlike native SQL full-text search capabilities, Hibernate Search offers a more streamlined and scalable approach, making it ideal for applications requiring sophisticated search features.
Integrating Hibernate with Elasticsearch
The core of Bartosz’s presentation was a step-by-step guide to integrating Hibernate Search with Elasticsearch. He outlined five key steps: creating JPA entities, adding Hibernate Search dependencies, annotating entities for indexing, configuring fields for NoSQL storage, and performing initial indexing. By annotating entities with @Indexed, developers can create indexes in Elasticsearch at application startup. Fields are annotated as @FullTextField for tokenization and search, @KeywordField for sorting, or @GenericField for basic querying. Bartosz emphasized the importance of the @FullTextField, which enables advanced search capabilities like fuzzy matching and phrase queries. His live coding demo showcased how to set up a Docker Compose file with MySQL and Elasticsearch, configure the application, and index a bookstore’s data, demonstrating the ease of integrating these technologies.
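The annotation setup described above might look like the sketch below, assuming Hibernate Search 6 with the Elasticsearch backend on the classpath; the Book entity and its fields are illustrative, not Bartosz's exact code:

```java
import jakarta.persistence.Entity;
import jakarta.persistence.Id;
import org.hibernate.search.engine.backend.types.Sortable;
import org.hibernate.search.mapper.pojo.mapping.definition.annotation.FullTextField;
import org.hibernate.search.mapper.pojo.mapping.definition.annotation.Indexed;
import org.hibernate.search.mapper.pojo.mapping.definition.annotation.KeywordField;

@Entity
@Indexed                  // an Elasticsearch index is created for Book at startup
public class Book {
    @Id
    private Long id;

    @FullTextField        // tokenized and analyzed: phrase and fuzzy search
    private String description;

    @KeywordField(sortable = Sortable.YES)
    private String title; // indexed as a single token: usable for sorting
}
```

Queries then go through the Search DSL, e.g. Search.session(entityManager).search(Book.class).where(f -> f.match().field("description").matching("hibernate")).fetchHits(20), returning entities ranked by relevance.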
Scalability and Synchronization Challenges
A significant advantage of using Elasticsearch with Hibernate Search is its scalability. Unlike Apache Lucene, which is limited to a single node and suited for smaller projects, Elasticsearch supports distributed data across multiple nodes, making it ideal for enterprise applications. However, Bartosz highlighted a key challenge: synchronization between SQL and NoSQL databases. Changes in the SQL database may not immediately reflect in Elasticsearch due to communication overhead. To address this, he introduced an experimental outbox polling coordination strategy, which uses additional SQL tables to maintain update order. While still in development, this feature promises to improve data consistency, a critical aspect for production environments.
Practical Applications and Benefits
Bartosz demonstrated practical applications of Hibernate Search through a bookstore example, where users could search for books by title, description, or reviews. His demo showed how to query Elasticsearch for terms like “Hibernate” or “programming,” retrieving relevant results ranked by relevance. Additionally, Hibernate Search supports advanced features like sorting by distance for geolocation-based queries and projections for retrieving partial documents, reducing reliance on the SQL database for certain operations. These capabilities make Hibernate Search a versatile tool for developers aiming to enhance search performance while maintaining their existing SQL infrastructure.
Links:
[DevoxxPL2022] No Nonsense Talk About the Cost of Running a Business
Bartek Gerlich, General Manager at 4SuITs Technology, delivered a candid talk at Devoxx Poland 2022 on the operational costs of running an IT company in Poland. Drawing from his experience building digital products for Caesars Entertainment and serving on the board of Plantwear, Bartek provided a detailed breakdown of costs at various company sizes, focusing on a B2B-focused limited liability company with a growth-oriented expense model.
Initial Setup Costs
Before establishing a company, key expenses include:
- Court Fees: Approximately 600 PLN for registration.
- Initial Capital: 5,000 PLN, which can be used for business expenses.
- Legal Fees: Around 1,000 PLN for a simple contract to ensure a smooth application process.
- Virtual Office: A few hundred PLN for a business address, avoiding personal address complications.
Costs with One Employee
Hiring the first employee introduces additional expenses:
- Legal Fees: Customized B2B contracts cost slightly more than boilerplate ones, but employment contracts significantly increase paperwork (e.g., work and safety regulations), tripling costs.
- Recruitment: For a junior developer, expect 10-20k PLN; mid-level 25k PLN; senior higher. These are ballpark figures for estimation.
- Accounting: Full accounting for a small company (10 documents/month) costs about 350 PLN.
- Equipment: Providing a laptop is advisable to protect intellectual property, costing around 3-5k PLN. Leasing reduces initial costs but increases long-term expenses.
Costs with Ten Employees
Scaling to a 10-person team, typically comprising five developers, two QAs, a project manager, a UX/UI designer, and a specialist (e.g., cloud engineer), incurs:
- Salaries: Developers and QAs average 21k PLN/month each; specialists around 30k PLN. A team manager or admin costs 7-8k PLN.
- Recruitment: External agencies charge 1-2 developer salaries per hire (e.g., 180k PLN for six hires). Recruitment process outsourcing (15k PLN/month) yields about two hires/month, while an internal recruiter (cheaper but slower) yields one hire/month.
- Office Options:
- Co-working: 2,500 PLN for occasional seats and conference rooms.
- Standalone Rental: 3,000 PLN, including utilities like coffee and electricity.
- Fully Managed Space: 4,000 PLN for four seats with shared amenities.
- Other Expenses:
- Legal Fees: 1,000 PLN for 8-10 hours/month of contract work.
- Accounting: 500 PLN for increased documentation.
- Equipment: 2,000 PLN for laptops, monitors, printers, etc., with 8-10% annual maintenance (e.g., 120k PLN total equipment yields 10-12k PLN/year maintenance).
- Utilities: Minimal, included in office costs.
- Total Monthly Cost: Approximately 250,000 PLN.
Costs with Fifty Employees
At 50 employees, the company resembles a scalable enterprise, with new roles like managers, enterprise sales reps, HR, and more senior admins:
- Salaries: Developers, QAs, PMs, UI/UX designers, and specialists continue as before, with managers and sales reps at ~30k PLN/month, senior admins at ~10k PLN, and HR specialists at ~15k PLN. Ideally, 80% of staff generate revenue, with 20% in support roles, though middle-management bloat can disrupt this ratio.
- Recruitment: Costs scale with hires, with similar models (success-based, outsourcing, or internal).
- Office Costs:
- Standalone Rental: 30k PLN, requiring admin or security.
- Fully Managed Space: 50k PLN for 40 seats.
- A1/A1+ Commercial Space: 60k PLN (e.g., 15-20 EUR/sq.m in Warsaw, including shared spaces like toilets, corridors).
- Other Expenses:
- Legal Fees: 8-10k PLN/month for complex contracts.
- Accounting/Payroll: 8k PLN/month, higher for B2B contracts than employment contracts.
- Employee Benefits: 15k PLN/month for multisport, better coffee, or outings.
- Utilities: ~5k PLN/month.
- Travel: ~10k PLN/month for 10 travel days at 300 EUR/day.
- Total Monthly Cost: ~1.4 million PLN.
Scaling Beyond
Beyond 50 employees, costs scale linearly for office space, equipment, and recruitment, but non-linearly for salaries (due to increased management needs) and legal fees (due to disputes or complex contracts). Benefits and expenses also rise faster for larger team events or branding efforts.
Cost-Saving Strategies
- Small Teams (<10): Handle operations personally to save on admin and legal fees, and opt for a fully remote setup to eliminate office costs, though admin logistics (e.g., contracts, equipment shipping) persist.
- Larger Teams:
- In-house Services: Internalize recruitment, admin, or legal services to reduce costs, though efficiency may suffer compared to third-party firms.
- Office Optimization: Use smaller, presentable spaces or hybrid models, but account for meeting/storage needs.
- Flat Hierarchy: Minimize middle management to maintain a lean structure.
- Junior Talent: Develop juniors in-house for cost savings, though it requires patience, with slower output initially.
- Software Tools: Use off-the-shelf solutions (e.g., Salesforce) with minimal customization to avoid expensive modifications.
Business Strategy Insights
Bartek addressed audience questions, noting:
– A healthy profit margin is ~20% to ensure cash flow and resilience against market shifts (e.g., recessions). Margins below ~7-10% are unsustainable.
– To avoid payment delays, secure credit lines or funding to maintain employee trust, as developers can easily find alternative employment.
– Bootstrapping allows fast failure, validating ideas organically, but limits scale. Venture capital accelerates growth but requires strong pitching skills, often a challenge in Poland due to cultural gaps.
– Small businesses can succeed with modest profits (e.g., 600k PLN/year for a 10-person team) without pursuing aggressive growth, unlike stock-market-driven firms that need constant expansion.
Conclusion
Running an IT business in Poland involves significant operational costs, dominated by salaries but with substantial non-revenue-generating expenses (roughly 20-40% of the total). Strategic planning, cost optimization, and a clear growth vision are essential for profitability and sustainability. Bartek’s insights provide a practical guide for aspiring entrepreneurs navigating the financial realities of the IT sector.
[DevoxxPL2022] Successful AI-NLP Project: What You Need to Know
At Devoxx Poland 2022, Robert Wcisło and Łukasz Matug, data scientists at UBS, shared insights on ensuring the success of AI and NLP projects, drawing from their experience implementing AI solutions in a large investment bank. Their presentation highlighted critical success factors for deploying machine learning (ML) models into production, addressing common pitfalls and offering practical guidance across the project lifecycle.
Understanding the Challenges
The speakers noted that enthusiasm for AI often outpaces practical outcomes, with 2018 data indicating only 10% of ML projects reached production. While this figure may have improved, many projects still fail due to misaligned expectations or inadequate preparation. To counter this, they outlined a simplified three-phase process—Prepare, Build, and Maintain—integrating Software Development Lifecycle (SDLC) and MLOps principles, with a focus on delivering business value and user experience.
Prepare Phase: Setting the Foundation
Łukasz emphasized the importance of the Prepare phase, where clarity on business needs is critical. Many stakeholders, inspired by AI hype, expect miraculous solutions without defining specific outcomes. Key considerations include:
- Defining the Output: Understand the business problem and desired results, such as labeling outcomes (e.g., fraud detection). Reduce ambiguity by explicitly defining what the application should achieve.
- Evaluating ML Necessity: ML excels in areas like recommendation systems, language understanding, anomaly detection, and personalization, but it’s not a universal solution. For one-off problems, simpler analytics may suffice.
- Red Flags: ML models rarely achieve 100% accuracy, requiring more data and testing for higher precision, which increases costs. Highly regulated industries may demand transparency, posing challenges for complex models. Data availability is also critical—without sufficient data, ML is infeasible, though workarounds like transfer learning or purchasing data exist.
- Universal Performance Metric: Establish a metric aligned with business goals (e.g., click-through rate, precision/recall) to measure success, unify stakeholder expectations, and guide development priorities for cost efficiency.
- Tooling and Infrastructure: Align software and data science teams with shared tools (e.g., Git, data access, experiment logs). Ensure compliance with data restrictions (e.g., GDPR, cross-border rules) and secure access to production-like data and infrastructure (e.g., GPUs).
- Automation Levels: Decide the role of AI—ranging from no AI (human baseline) to full automation. Partial automation, where models handle clear cases and humans review uncertain ones, is often practical. Consider ethical principles like fairness, compliance, and no-harm to avoid bias or regulatory issues.
- Model Utilization: Plan how the model will be served—binary distribution, API service, embedded application, or self-service platform. Each approach impacts user experience, scalability, and maintenance.
- Scalability and Reuse: Design for scalability and consider reusing datasets or models to enhance future projects and reduce costs.
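To make the universal-performance-metric idea above concrete, here is a minimal sketch (with hypothetical labels for a fraud-detection-style task; the function and data are illustrative, not from the talk) computing precision and recall, two of the metrics the speakers mention:

```python
def precision_recall(y_true, y_pred):
    """Compute precision and recall for binary labels (1 = positive class)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # true positives
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # false positives
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # false negatives
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Hypothetical ground truth and model predictions (1 = fraud, 0 = legitimate)
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
p, r = precision_recall(y_true, y_pred)  # p = 0.75, r = 0.75
```

Agreeing up front on a single number like this lets business stakeholders and data scientists argue about trade-offs (e.g., precision vs. recall) before any model is built.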
Build Phase: Crafting the Model
Robert focused on the Build phase, offering technical tips to streamline development:
- Data Management: Data evolves, requiring retraining to address drift. For NLP projects, cover diverse document templates, including slang or errors. Track data provenance and lineage to monitor sources and transformations, ensuring pipeline stability.
- Data Quality: Most ML projects involve smaller datasets (hundreds to thousands of points), where quality trumps quantity. Address imbalances by collaborating with clients for better data or using simpler models. Perform sanity checks to ensure representativeness, avoiding overly curated data that misaligns with production (e.g., professional photos vs. smartphone images).
- Metadata and Tagging: Use tags (e.g., source, date, document type) to simplify debugging and maintenance. For instance, identifying underperforming data (e.g., low-quality German PDFs) becomes easier with metadata.
- Labeling Strategy: Noisy or ambiguous labels (e.g., whether “bridges” means physical structures or Jeff Bridges, or whether a “bicycle” label covers drawings as well as physical bicycles) degrade model performance. Aim for human-level performance (HLP), measured either against ground truth (e.g., biopsy results) or against inter-human agreement. A consistent labeling strategy, documented with clear examples, reduces ambiguity and improves data quality. Tools like Amazon Mechanical Turk or in-house labeling platforms can streamline this process.
- Training Tips: Use transfer learning to leverage pre-trained models, reducing data needs. Active learning prioritizes labeling hard examples, while pseudo-labeling uses existing models to pre-annotate data, saving time if the model is reliable. Ensure determinism by fixing seeds for reproducibility during debugging. Start with lightweight models (e.g., BERT Tiny) to establish baselines before scaling to complex models.
- Baselines: Compare against prior models, heuristic-based systems, or simple proofs-of-concept to contextualize progress toward HLP. An 85% accuracy may be sufficient if it aligns with HLP, but 60% after extensive effort signals issues.
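The baseline advice above can be sketched in a few lines. A common simplest-possible baseline (an assumption for illustration, not the speakers’ code) is to always predict the majority class from the training set; any real model should comfortably beat its accuracy:

```python
from collections import Counter

def majority_baseline(train_labels, test_labels):
    """Accuracy of always predicting the most frequent training label."""
    majority = Counter(train_labels).most_common(1)[0][0]
    correct = sum(1 for y in test_labels if y == majority)
    return correct / len(test_labels)

# Hypothetical labeled data for a spam-filtering task
train = ["spam", "ham", "ham", "ham", "spam"]  # majority class is "ham"
test = ["ham", "ham", "spam", "ham"]
acc = majority_baseline(train, test)  # 3 of 4 test labels are "ham" -> 0.75
```

If an expensive model only marginally beats such a baseline, that is the “60% after extensive effort” warning sign the speakers describe.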
Maintain Phase: Sustaining Performance
Maintenance is critical as ML models differ from traditional software due to data drift and evolving inputs. Strategies include:
- Deployment Techniques: Use A/B testing to compare model versions, shadow mode to evaluate models in parallel with human processes, canary deployments to test on a small traffic subset, or blue-green deployments for seamless rollbacks.
- Monitoring: Beyond system metrics, monitor input (e.g., image brightness, speech volume, input length) and output (e.g., exact predictions, user behavior like query frequency). Detect data or concept drift to maintain relevance.
- Reuse: Reuse models, data, and experiences to reduce uncertainty, lower costs, and build organizational capabilities for future projects.
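The input-monitoring idea above can be illustrated with a minimal drift check. This sketch (hypothetical feature and thresholds, not the speakers’ implementation) flags drift when the mean of a monitored input, such as input length in tokens, moves far from a reference window in standard-deviation terms:

```python
from statistics import mean, stdev

def drift_alert(reference, live, z_threshold=3.0):
    """Flag input drift when the live window's mean shifts more than
    z_threshold reference standard deviations from the reference mean."""
    mu, sigma = mean(reference), stdev(reference)
    if sigma == 0:
        return mean(live) != mu
    z = abs(mean(live) - mu) / sigma
    return z > z_threshold

# Hypothetical monitored input feature: document length in tokens
reference = [100, 110, 95, 105, 102, 98, 101, 99]  # lengths seen at training time
stable = [103, 97, 104, 100]     # live window resembling training data
shifted = [300, 310, 290, 305]   # live window after a new document template appears
```

Here `drift_alert(reference, stable)` stays quiet while `drift_alert(reference, shifted)` fires; in production one would apply the same idea per tagged data slice (source, language, document type) so the underperforming segment is easy to locate.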
Key Takeaways
The speakers stressed reusing existing resources to demystify AI, reduce costs, and enhance efficiency. By addressing business needs, data quality, and operational challenges early, teams can increase the likelihood of delivering impactful AI-NLP solutions. They invited attendees to discuss further at the UBS stand, emphasizing practical application over theoretical magic.
[GopherCon UK 2022] Leading in Tech
Leading in Tech – Michael Cullum
At GopherCon UK 2022, Michael Cullum, Head of Engineering at Bud, delivered an engaging talk on the multifaceted nature of leadership in the tech industry. With a wealth of experience in engineering leadership, Cullum explored what it means to be a leader, the diverse forms leadership takes, and how individuals can cultivate and identify effective leadership. His talk underscored that leadership is not confined to titles but is a universal opportunity to inspire and support others, making it a critical skill for all tech professionals.
Defining Leadership: Beyond Titles and Tasks
Cullum began by tackling the elusive definition of leadership, noting that even dictionaries and academic papers struggle to pin it down. He proposed that leadership is about promoting movement or change in others, not through coercion but by encouraging and supporting them. Unlike management, which often involves tasks like hiring or performance oversight, leadership focuses on the individuals being led, prioritizing their growth over the leader’s ego. Cullum emphasized that leadership is not about issuing orders but about fostering an environment where people are motivated to excel. This distinction is vital in tech, where roles like tech leads or managers can blur the line between task-oriented management and people-centric leadership.
Exploring Leadership Roles in Tech
Leadership in tech manifests in various forms, each with unique responsibilities. Cullum highlighted mentorship as a foundational leadership role, accessible to all regardless of seniority. Mentoring, whether formal or informal, involves sharing experiences to guide others, yet the industry often falls short in formalizing these relationships. Tech leads, another key role, translate business needs into technical direction but frequently focus on tasks like project management rather than inspiring their teams. Principal or staff engineers lead by example, serving as go-to experts who inspire through technical excellence. Public leaders, such as bloggers or conference speakers, drive change by sharing knowledge, while managers and senior leaders (e.g., CTOs) balance individual support with organizational goals. Cullum stressed that all these roles, when executed with a focus on others, embody leadership.
Traits of Effective Leaders
What makes a leader exceptional? Cullum outlined several critical traits. Listening—not just hearing but understanding—is paramount, as it fosters empathy and uncovers others’ needs. Leaders must communicate clearly, giving people time to digest complex ideas, and be mindful of power dynamics, speaking last in discussions to avoid stifling input. Generating energy and inspiring others, whether through actions or enthusiasm, is essential, as is maintaining a team-oriented mindset to avoid “us vs. them” divides. For tech leaders, staying technical—within reason—keeps them grounded, while managing team stress involves shielding members from undue pressure without hiding critical information. Cullum’s “poop analogy” illustrated this: great leaders act as umbrellas, filtering stress, not fans that scatter it chaotically.
Becoming and Finding Great Leaders
Cullum concluded with practical advice for aspiring leaders and those seeking them. Mentoring others, even informally, is the first step toward leadership, while seeking mentors outside one’s company provides unbiased guidance. Observing both good and bad leaders offers valuable lessons, and resources like books (e.g., The Manager’s Path by Camille Fournier) and communities like the Rands Leadership Slack enhance growth. When job hunting, Cullum urged asking about leadership style, vision, and team dynamics, as these outweigh transient tech stacks in importance. Great leaders respect, mentor, and prioritize your growth, fostering environments where you feel valued and inspired. By holding leaders to high standards and embracing leadership opportunities, everyone can contribute to a thriving tech ecosystem.
Hashtags: #Leadership #TechLeadership #Mentorship #GopherCon #MichaelCullum #Bud