[DevoxxFR2014] Akka Made Our Day: Harnessing Scalability and Resilience in Legacy Systems
Lecturers
Daniel Deogun and Daniel Sawano are senior consultants at Omegapoint, a Stockholm-based consultancy with offices in Malmö and New York. Both specialize in building scalable, fault-tolerant systems, with Deogun focusing on distributed architectures and Sawano on integrating modern frameworks like Akka into enterprise environments. Their combined expertise in Java and Scala, along with practical experience in high-stakes projects, positions them as authoritative voices on leveraging Akka for real-world challenges.
Abstract
Akka, a toolkit for building concurrent, distributed, and resilient applications using the actor model, is renowned for its ability to deliver high-performance systems. However, integrating Akka into legacy environments—where entrenched codebases and conservative practices dominate—presents unique challenges. Delivered at Devoxx France 2014, this lecture shares insights from Omegapoint’s experience developing an international, government-approved system using Akka in Java, despite Scala’s closer alignment with Akka’s APIs. The speakers explore how domain-specific requirements shaped their design, common pitfalls encountered, and strategies for success in both greenfield and brownfield contexts. Through detailed code examples, performance metrics, and lessons learned, the talk demonstrates Akka’s transformative potential and why Java was a strategic choice for business success. It concludes with practical advice for developers aiming to modernize legacy systems while maintaining reliability and scalability.
The Actor Model: A Foundation for Resilience
Akka’s core strength lies in its implementation of the actor model, a paradigm where lightweight actors encapsulate state and behavior, communicating solely through asynchronous messages. This eliminates shared mutable state, a common source of concurrency bugs in traditional multithreaded systems. Daniel Sawano introduces the concept with a simple Java-based Akka actor:
import akka.actor.UntypedActor;
public class GreetingActor extends UntypedActor {
@Override
public void onReceive(Object message) throws Exception {
if (message instanceof String) {
System.out.println("Hello, " + message);
getSender().tell("Greetings received!", getSelf());
} else {
unhandled(message);
}
}
}
This actor receives a string message, processes it, and responds to the sender. Actors run in an ActorSystem, which manages their lifecycle and threading:
import akka.actor.ActorSystem;
import akka.actor.ActorRef;
import akka.actor.Props;
ActorSystem system = ActorSystem.create("MySystem");
ActorRef greeter = system.actorOf(Props.create(GreetingActor.class), "greeter");
greeter.tell("World", ActorRef.noSender());
This setup ensures isolation and fault tolerance, as actors operate independently and can be supervised to handle failures gracefully.
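Supervision is what makes that fault tolerance concrete: a parent actor declares how failures in its children are handled. The talk does not show the speakers’ supervisor code, so the following is a minimal sketch, assuming a hypothetical GreetingSupervisor that creates the GreetingActor above as a child, resumes it on invalid input, and restarts it on other failures:
import akka.actor.ActorRef;
import akka.actor.OneForOneStrategy;
import akka.actor.Props;
import akka.actor.SupervisorStrategy;
import akka.actor.SupervisorStrategy.Directive;
import akka.actor.UntypedActor;
import akka.japi.Function;
import scala.concurrent.duration.Duration;

public class GreetingSupervisor extends UntypedActor {

    // Allow up to 10 child restarts per minute; beyond that the failure escalates.
    private final SupervisorStrategy strategy = new OneForOneStrategy(
            10, Duration.create("1 minute"),
            new Function<Throwable, Directive>() {
                @Override
                public Directive apply(Throwable t) {
                    if (t instanceof IllegalArgumentException) {
                        return SupervisorStrategy.resume();  // ignore bad input, keep state
                    }
                    return SupervisorStrategy.restart();     // otherwise restart with fresh state
                }
            });

    private final ActorRef child =
            getContext().actorOf(Props.create(GreetingActor.class), "greeter-child");

    @Override
    public SupervisorStrategy supervisorStrategy() {
        return strategy;
    }

    @Override
    public void onReceive(Object message) throws Exception {
        child.forward(message, getContext()); // delegate all work to the supervised child
    }
}
With this in place, a crash inside GreetingActor never reaches its callers; the supervisor’s directive decides whether the child keeps its state, restarts cleanly, or escalates the problem further up the hierarchy.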
Designing with Domain Requirements
The project discussed was a government-approved system requiring high throughput, strict auditability, and fault tolerance to meet regulatory standards. Deogun explains that they modeled domain entities as actor hierarchies, with parent actors supervising children to recover from failures. For example, a transaction processing system used actors to represent accounts, with each actor handling a subset of operations, ensuring scalability through message-passing.
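The production code was not shown in the talk, but the pattern can be sketched with a hypothetical parent that creates one child actor per account and forwards commands to it (Deposit, AccountActor, and AccountParentActor are illustrative names, not the actual system’s classes):
import akka.actor.ActorRef;
import akka.actor.Props;
import akka.actor.UntypedActor;

// Illustrative command: immutable and addressed to a single account.
public class Deposit {
    public final String accountId;
    public final long amountInCents;

    public Deposit(String accountId, long amountInCents) {
        this.accountId = accountId;
        this.amountInCents = amountInCents;
    }
}

// Illustrative child actor: owns the state of exactly one account.
public class AccountActor extends UntypedActor {
    private long balanceInCents = 0;

    @Override
    public void onReceive(Object message) throws Exception {
        if (message instanceof Deposit) {
            balanceInCents += ((Deposit) message).amountInCents;
            getSender().tell(balanceInCents, getSelf());
        } else {
            unhandled(message);
        }
    }
}

// Illustrative parent: creates one child per account and forwards commands to it,
// so a failing account actor can be restarted without affecting its siblings.
public class AccountParentActor extends UntypedActor {
    @Override
    public void onReceive(Object message) throws Exception {
        if (message instanceof Deposit) {
            String name = "account-" + ((Deposit) message).accountId;
            ActorRef child = getContext().getChild(name);
            if (child == null) {
                child = getContext().actorOf(Props.create(AccountActor.class), name);
            }
            child.forward(message, getContext());
        } else {
            unhandled(message);
        }
    }
}
Such a parent could additionally declare a supervisorStrategy like the one shown earlier, so that a misbehaving account actor is resumed or restarted without disturbing its siblings.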
The choice of Java over Scala was driven by business needs. While Scala’s concise syntax aligns closely with Akka’s functional style, the team’s familiarity with Java reduced onboarding time and matched the organization’s existing skill set. Akka’s Java API, though more verbose, supports all core features, including clustering and persistence. Sawano notes that this decision accelerated adoption in a conservative environment, as developers could leverage existing Java libraries and tools.
Pitfalls and Solutions in Akka Implementations
Implementing Akka in a legacy context revealed several challenges. One common issue was message loss in high-throughput scenarios. To address this, the team implemented acknowledgment protocols, ensuring reliable delivery:
public class ReliableActor extends UntypedActor {
@Override
public void onReceive(Object message) throws Exception {
if (message instanceof String) {
// Process message
getSender().tell("ACK", getSelf());
} else {
unhandled(message);
}
}
}
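The other half of such an acknowledgment protocol lives on the sending side, which the talk describes but does not show. The following is a hand-rolled sketch (AckingSenderActor and its one-second retry interval are illustrative, not the speakers’ production code) that keeps resending the payload until the "ACK" reply from the ReliableActor above arrives:
import java.util.concurrent.TimeUnit;

import akka.actor.ActorRef;
import akka.actor.UntypedActor;
import scala.concurrent.duration.Duration;

public class AckingSenderActor extends UntypedActor {

    private final ActorRef target;   // e.g. the ReliableActor shown above
    private final String payload;    // message that must be delivered at least once
    private boolean acknowledged = false;

    public AckingSenderActor(ActorRef target, String payload) {
        this.target = target;
        this.payload = payload;
    }

    @Override
    public void preStart() {
        sendAndScheduleRetry();
    }

    @Override
    public void onReceive(Object message) throws Exception {
        if ("ACK".equals(message)) {
            acknowledged = true;            // delivery confirmed, no more retries needed
            getContext().stop(getSelf());
        } else if ("retry".equals(message) && !acknowledged) {
            sendAndScheduleRetry();         // no ACK yet, send the payload again
        } else {
            unhandled(message);
        }
    }

    private void sendAndScheduleRetry() {
        target.tell(payload, getSelf());
        // Schedule a "retry" reminder to ourselves in one second.
        getContext().system().scheduler().scheduleOnce(
                Duration.create(1, TimeUnit.SECONDS),
                getSelf(), "retry",
                getContext().dispatcher(), getSelf());
    }
}
Because ordinary Akka message delivery is at-most-once, this kind of explicit acknowledgment is what turns “send” into a reliable delivery guarantee in practice.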
Deadlocks, another risk, were mitigated by avoiding blocking calls within actors. Instead, asynchronous futures were used for I/O operations:
import scala.concurrent.Future;
import static akka.pattern.Patterns.pipe;
// Inside an actor: start the asynchronous work and pipe its eventual result
// back to the sender as a message, without ever blocking the actor's thread.
Future<String> result = someAsyncOperation();
pipe(result, getContext().dispatcher()).to(getSender());
State management in distributed systems posed further challenges. Persistent actors ensured data durability by storing events to a journal:
import akka.persistence.UntypedPersistentActor;

public class PersistentCounter extends UntypedPersistentActor {

    private int count = 0;

    @Override
    public String persistenceId() {
        return "counter-id"; // stable identifier that ties this actor to its journal
    }

    @Override
    public void onReceiveCommand(Object command) {
        if (command.equals("increment")) {
            // Persist the delta as an event first, then apply it to in-memory state.
            persist(1, evt -> count += evt);
        }
    }

    @Override
    public void onReceiveRecover(Object event) {
        // Replayed from the journal on startup to rebuild the counter's state.
        if (event instanceof Integer) {
            count += (Integer) event;
        }
    }
}
This approach allowed the system to recover state after crashes, critical for regulatory compliance.
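Assuming a persistence journal backend is configured for the ActorSystem, the counter is used like any other actor; the difference only shows up after a failure, when the journaled events are replayed through onReceiveRecover:
import akka.actor.ActorRef;
import akka.actor.ActorSystem;
import akka.actor.Props;

ActorSystem system = ActorSystem.create("MySystem");
ActorRef counter = system.actorOf(Props.create(PersistentCounter.class), "counter");
counter.tell("increment", ActorRef.noSender());
// After a crash and restart, the journaled increments are replayed before new
// commands are processed, so the count resumes from its last persisted value.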
Performance and Scalability Achievements
The system achieved impressive performance, handling 100,000 requests per second with 99.9% uptime. Akka’s location transparency enabled clustering across nodes, distributing workload efficiently. Deogun highlights that actors’ lightweight nature—thousands can run on a single JVM—allowed scaling without heavy resource overhead. Metrics showed consistent latency under 10ms for critical operations, even under peak load.
Integrating Akka with Legacy Systems
Legacy integration required wrapping existing services in actors to isolate faults. For instance, a monolithic database layer was accessed via actors, which managed connection pooling and retry logic. This approach minimized changes to legacy code while introducing Akka’s resilience benefits. Sawano emphasizes that incremental adoption—starting with a single actor-based module—eased the transition.
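As a rough illustration of that wrapping technique (LegacyRepositoryActor and LegacyCustomerDao are hypothetical names, not the actual system’s code), an actor can put a thin asynchronous boundary around an existing blocking data-access class:
import akka.actor.UntypedActor;

// Stand-in for an existing legacy data-access class (illustrative only).
class LegacyCustomerDao {
    String findCustomerById(String id) {
        // ...existing JDBC and connection-pool code, left unchanged...
        return "customer-" + id;
    }
}

public class LegacyRepositoryActor extends UntypedActor {

    private final LegacyCustomerDao dao = new LegacyCustomerDao();

    @Override
    public void onReceive(Object message) throws Exception {
        if (message instanceof String) {
            // An exception thrown here is handled by the parent's supervision
            // strategy (resume, restart, or escalate) instead of reaching callers.
            String customer = dao.findCustomerById((String) message);
            getSender().tell(customer, getSelf());
        } else {
            unhandled(message);
        }
    }
}
In practice, blocking calls like this are usually given a dedicated dispatcher so they cannot starve the rest of the system, and retry logic can live either in the caller or in the supervisor’s restart policy.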
Lessons Learned and Broader Implications
The project underscored Akka’s versatility in both greenfield and brownfield contexts. Key lessons included the importance of clear message contracts to avoid runtime errors and the need for robust monitoring to track actor performance. Tools like Typesafe Console (now Lightbend Telemetry) provided insights into message throughput and bottlenecks.
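One way to make such message contracts explicit in Java is to use small immutable classes that validate themselves at construction time, so malformed messages fail fast at the sender rather than deep inside an actor; a hypothetical example:
// Hypothetical message contract: immutable, self-describing, validated on creation.
public final class RegisterTransaction {

    public final String accountId;
    public final long amountInCents;

    public RegisterTransaction(String accountId, long amountInCents) {
        if (accountId == null || accountId.isEmpty()) {
            throw new IllegalArgumentException("accountId must not be empty");
        }
        this.accountId = accountId;
        this.amountInCents = amountInCents;
    }
}
Receiving actors then match on the message type instead of parsing raw strings, which catches malformed input at the point of sending rather than as a runtime surprise inside the actor.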
For developers, the talk offers a blueprint for modernizing legacy systems: start small, leverage Java for familiarity, and use Akka’s supervision for reliability. For organizations, it highlights the business value of resilience and scalability, particularly in regulated industries.
Conclusion: Akka as a Game-Changer
Deogun and Sawano’s experience demonstrates that Akka can transform legacy environments by providing a robust framework for concurrency and fault tolerance. Choosing Java over Scala proved strategic, aligning with team skills and accelerating delivery. As distributed systems become the norm, Akka’s actor model offers a proven path to scalability, making it a vital tool for modern software engineering.