
Posts Tagged ‘MQ’

From JMS and Message Queues to Kafka Streams: Why Kafka Had to Be Invented

For decades, enterprise systems relied on message queues and JMS-based brokers to decouple applications and ensure reliable communication. Technologies such as IBM MQ, ActiveMQ, and later RabbitMQ solved an important problem: how to move messages safely from one system to another without tight coupling.

However, as systems grew larger, more distributed, and more data-driven, the limitations of this model became increasingly apparent. Kafka — and later Kafka Streams — did not emerge because JMS and MQ were poorly designed; they emerged because those technologies were designed for a different era and a different class of problems.

What JMS and MQ Were Designed to Do

Traditional message brokers focus on delivery. A producer sends a message, the broker stores it temporarily, and a consumer receives it. Once the message is acknowledged, it is typically removed. The broker’s primary responsibility is to guarantee that messages are delivered reliably and, in some cases, transactionally.
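This delivery model can be illustrated with a toy sketch (plain JDK collections, not a real broker or the JMS API): once a message is taken and acknowledged, it is removed, and no later consumer can ever see it again.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Toy illustration of the classic queue delivery model (not a real broker):
// each message is delivered once, and an acknowledged message is removed.
public class DeliveryModelSketch {

    /** Simulates "consume + acknowledge": the message leaves the broker. */
    static String consumeAndAck(Queue<String> brokerQueue) {
        return brokerQueue.poll(); // poll() removes the head of the queue
    }

    public static void main(String[] args) {
        Queue<String> queue = new ArrayDeque<>();
        queue.add("order-1");
        queue.add("order-2");

        System.out.println("consumed: " + consumeAndAck(queue)); // order-1
        // "order-1" is gone; no other consumer can ever read it again.
        System.out.println("remaining: " + queue.size()); // 1
    }
}
```

The point of the sketch is the destructive read: the broker's job ends when the message is handed over, which is exactly the behavior the rest of this article contrasts with Kafka's log.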

This model works very well for command-style interactions such as order submission, workflow orchestration, and request-driven integration between systems. Messages are transient by design, consumers are expected to be online, and the system’s success is measured by how quickly and reliably messages move through it.

For many years, this was sufficient.

The Problems That Started to Appear

As companies began operating at internet scale, the assumptions underlying JMS and MQ started to break down. Data volumes increased dramatically, and systems needed to handle not thousands, but millions of events per second. Message brokers that tracked delivery state per consumer became bottlenecks, both technically and operationally.

More importantly, the nature of the data changed. Events were no longer just instructions to be executed and discarded. They became facts: user actions, transactions, logs, metrics, and behavioral signals that needed to be stored, analyzed, and revisited.

With JMS and MQ, once a message was consumed, it was gone. Reprocessing required complex duplication strategies or external storage. Adding a new consumer meant replaying data manually, if it was even possible. The broker was optimized for delivery, not for history.

At the same time, architectures became more decoupled. Multiple teams wanted to consume the same data independently, at their own pace, and for different purposes. In a traditional queue-based system, this required copying messages or creating parallel queues, increasing cost and complexity.

These pressures revealed a fundamental mismatch between what message queues were built for and what modern systems required.

The Conceptual Shift That Led to Kafka

Kafka was created to answer a different question. Instead of asking how to deliver messages efficiently, its designers asked how to store events reliably at scale and allow many consumers to read them independently.

The key idea was deceptively simple: treat data as an append-only log. Producers write events to a log, and consumers read from that log at their own pace. Events are not deleted when consumed. They are retained for a configurable period, or even indefinitely.

In this model, the broker no longer tracks who consumed what. Each consumer keeps track of its own position. This small change eliminates a major scalability bottleneck and makes replay a natural operation rather than an exceptional one.
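A minimal sketch of this model, using only JDK collections (this is not the Kafka client API, and the names are illustrative): the log is append-only, reads never delete anything, and each consumer is just an integer offset it owns itself.

```java
import java.util.ArrayList;
import java.util.List;

// Toy sketch of Kafka's core idea (not the real client API):
// an append-only log that never deletes on read. Each consumer keeps
// its own offset, so "replay" is simply resetting that offset to 0.
public class EventLogSketch {
    private final List<String> log = new ArrayList<>();

    public void append(String event) {
        log.add(event); // producers only ever append
    }

    /** Read everything from the given offset; the log itself is untouched. */
    public List<String> readFrom(int offset) {
        return new ArrayList<>(log.subList(offset, log.size()));
    }

    public static void main(String[] args) {
        EventLogSketch topic = new EventLogSketch();
        topic.append("user-signup");
        topic.append("page-view");

        int consumerA = 0;                          // consumer A owns its position
        consumerA += topic.readFrom(consumerA).size(); // A advances after reading

        // A consumer added later still sees the full history.
        System.out.println(topic.readFrom(0)); // [user-signup, page-view]
    }
}
```

Note what the broker sketch does *not* contain: no per-consumer delivery state, no acknowledgment bookkeeping. That is the scalability win described above.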

Kafka’s architecture reflects this shift. It is disk-first rather than memory-first, optimized for sequential writes and reads. It scales horizontally through partitioning. It treats durability and throughput as complementary goals rather than trade-offs.

Kafka was not created to replace message queues; it was created to solve problems message queues were never meant to solve.

From Transport to Platform: Why Kafka Streams Exists

Kafka alone provides storage and distribution of events, but it does not process them. Early Kafka users still needed external systems to transform, aggregate, and analyze data flowing through Kafka.

Kafka Streams was created to close this gap.

Instead of introducing another centralized processing cluster, Kafka Streams embeds stream processing directly into applications. This is a deliberate contrast with both JMS consumers and large external processing frameworks.

In a JMS-based system, consumers typically process messages one at a time, often statelessly, and rely on external databases for aggregation and state. Rebuilding state after a failure is complex and error-prone.

Kafka Streams, by contrast, assumes that stateful processing is normal. It provides abstractions for event streams and for state that evolves over time. It stores state locally for performance and backs it up to Kafka so it can be restored automatically. Processing logic, state, and data history are all aligned around the same event log.
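The state-plus-changelog idea can be sketched in plain Java (a toy model, not the Kafka Streams API): every update to local state is also appended to a changelog, and recovery after a crash is just replaying that changelog.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy sketch of the Kafka Streams pattern (not the Streams API itself):
// local state (a count per key) is updated on each event, and every update
// is also appended to a changelog; after a crash, a fresh instance rebuilds
// the state by replaying that changelog.
public class StatefulProcessorSketch {
    final Map<String, Integer> state = new HashMap<>(); // fast local state
    final List<String> changelog;                       // durable backup (simulated)

    StatefulProcessorSketch(List<String> changelog) {
        this.changelog = changelog;
    }

    void process(String key) {
        int updated = state.merge(key, 1, Integer::sum);
        changelog.add(key + "=" + updated); // back up the state change
    }

    /** Restore local state by replaying the changelog from the beginning. */
    void restore() {
        state.clear();
        for (String entry : changelog) {
            String[] kv = entry.split("=");
            state.put(kv[0], Integer.parseInt(kv[1])); // last write per key wins
        }
    }
}
```

In real Kafka Streams the changelog is itself a compacted Kafka topic, which is what "processing logic, state, and data history aligned around the same event log" means in practice.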

This approach turns Kafka from a passive transport layer into an active data platform.

What Kafka and Kafka Streams Do Differently

The fundamental difference between JMS/MQ and Kafka is not syntax or APIs, but philosophy.

Message queues focus on messages as transient instructions. Kafka focuses on events as durable facts. Message queues optimize for delivery guarantees. Kafka optimizes for scalability, retention, and replay. Message queues treat consumers as part of the broker’s responsibility. Kafka treats consumers as independent actors.

Kafka Streams builds on this by assuming that computation belongs close to the data. Instead of shipping data to a processing engine, it ships processing logic to where the data already is. This inversion dramatically simplifies architectures while increasing reliability.

Why Someone “Woke Up and Created Kafka”

Kafka was born out of necessity. At companies like LinkedIn, existing messaging systems could not handle the volume, variety, and longevity of data they were producing. They needed a system that could ingest everything, store it reliably, and make it available to many consumers without coordination.

Kafka Streams followed naturally. Once data became durable and replayable, processing it in a stateless, fire-and-forget manner was no longer sufficient. Systems needed to compute continuously, maintain state, and recover automatically — all while remaining simple to operate.

Kafka and Kafka Streams are the result of rethinking messaging from first principles, in response to scale, data-driven architectures, and the need to treat events as first-class citizens.

Conclusion

JMS and traditional message queues remain excellent tools for command-based integration and transactional workflows. Kafka was not designed to replace them, but to address a different category of problems.

Kafka introduced the idea of a distributed, durable event log as the backbone of modern systems. Kafka Streams extended that idea by embedding real-time processing directly into the applications that consume the data.

Problem: Spring JMS MessageListener Stuck / Not Receiving Messages

Scenario

A Spring Boot application using ActiveMQ with @JmsListener suddenly stops receiving messages after running for a while. There are no errors in the logs, the queue keeps growing, and the consumers appear idle.

Setup

@JmsListener(destination = "myQueue", concurrency = "5-10")
public void processMessage(String message) {
    log.info("Received: {}", message);
}
  • ActiveMQConnectionFactory was used.

  • The queue (myQueue) was filling up.

  • Restarting the app temporarily fixed the issue.


Investigation

  1. Checked ActiveMQ Monitoring (Web Console)

    • Messages were enqueued but not dequeued.

    • Consumers were still active, but not processing.

  2. Thread Dump Analysis

    • Found that listener threads were stuck in a waiting state.

    • The problem only occurred under high load.

  3. Checked JMS Acknowledgment Mode

    • Default AUTO_ACKNOWLEDGE was used.

    • Suspected an issue with message acknowledgment.

  4. Enabled Debug Logging

    • Added:

      logging.level.org.springframework.jms=DEBUG
    • Found repeated logs like:

      JmsListenerEndpointContainer#0-1 received message, but no further processing
    • This hinted at connection issues.

  5. Tested with a Different Message Broker

    • Using Artemis JMS instead of ActiveMQ resolved the issue.

    • Indicated that it was broker-specific.


Root Cause

ActiveMQ’s TCP connection was silently dropped, but the JMS client did not detect it.

  • When the connection is lost, DefaultMessageListenerContainer doesn’t always recover properly.

  • ActiveMQ does not always notify clients of broken connections.

  • No exceptions were thrown because the connection was technically “alive” but non-functional.


Fix

  1. Enabled keepAlive in ActiveMQ connection

    ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory();
    factory.setUseKeepAlive(true);
    factory.setOptimizeAcknowledge(true);
    return factory;
  2. Forced Reconnection with Exception Listener

    • Implemented:

      factory.setExceptionListener(exception -> {
          log.error("JMS Exception occurred, reconnecting...", exception);
          restartJmsListener();
      });
    • This ensured that if a connection was dropped, the listener restarted.

  3. Switched to DefaultJmsListenerContainerFactory with DMLC

    • SimpleMessageListenerContainer was less reliable in handling reconnections.

    • New Configuration:

      @Bean
      public DefaultJmsListenerContainerFactory jmsListenerContainerFactory(
              ConnectionFactory connectionFactory) {
          DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
          factory.setConnectionFactory(connectionFactory);
          factory.setSessionTransacted(true);
          factory.setErrorHandler(t -> log.error("JMS Listener error", t));
          return factory;
      }

Final Outcome

✅ After applying these fixes, the issue did not recur.
🚀 The app remained stable even under high load.


Key Takeaways

  • Silent disconnections in ActiveMQ can cause message listeners to hang.

  • Enable keepAlive and optimizeAcknowledge for reliable connections.

  • Use DefaultJmsListenerContainerFactory with DMLC instead of SMLC.

  • Implement an ExceptionListener to restart the JMS connection if necessary.


Mule / MQJE001 / MQJMS2007

Case

In a Mule ESB workflow, the endpoint is a <jms:outbound-endpoint>, pointing to a JMS queue hosted on MQ Series and accessed through WebLogic 10.3.3.

I get the following stack trace:

[java]Exception stack is:
1. MQJE001: Completion Code 2, Reason 2027 (com.ibm.mq.MQException)
com.ibm.mq.MQQueue:1624 (null)
2. MQJMS2007: failed to send message to MQ queue(JMS Code: MQJMS2007) (javax.jms.JMSException)
com.ibm.mq.jms.services.ConfigEnvironment:622 (http://java.sun.com/j2ee/sdk_1.3/techdocs/api/javax/jms/JMSException.html)
3. Failed to create and dispatch response event over Jms destination "queue://MQSERVER/AMQ.4C8A5E112285475605?persistence=1". Failed to route event via endpoint: null. Message payload is of type: JMSObjectMessage (org.mule.api.transport.DispatchException)
org.mule.transport.jms.JmsReplyToHandler:154 (http://www.mulesource.org/docs/site/current2/apidocs/org/mule/api/transport/DispatchException.html)[/java]

Fix

In the Mule config file, explicitly set the attribute disableTemporaryReplyToDestinations to true in the JMS outbound tag:

[xml]<jms:outbound-endpoint
queue="jonathan.lalou.jms.queue"
connector-ref="jmsConnector"
transformer-refs="foo" disableTemporaryReplyToDestinations="true"/>[/xml]

Tutorial: Use WebSphere MQ as a JMS provider within WebLogic 10.3.3, with Mule ESB as a client

Abstract

You have an application deployed on WebLogic 10 (used version for this tutorial: 10.3.3). You have to use an external provider for JMS, in our case MQ Series / WebSphere MQ.
The client side is a Mule ESB launched in standalone.

Prerequisites

You have:

  • a running WebLogic 10 with an admin instance and another managed instance, in our case: MuleTier.
  • a file named file.bindings, used for MQ.

JARs installation

  • Stop all your WebLogic 10 running instances.
  • Get the JARs from MQ Series folders:
    • providerutil.jar
    • fscontext.jar
    • dhbcore.jar
    • connector.jar
    • commonservices.jar
    • com.ibm.mqjms.jar
    • com.ibm.mq.jar
  • Copy them into your domain's additional libraries folder (usually: user_projects/domains/jonathanApplication/lib/)
  • Start WebLogic 10 admin. A block like this should appear:
    [java]<Oct 15, 2010 12:09:21 PM CEST> <Notice> <WebLogicServer> <BEA-000395> <Following extensions directory contents added to the end of the classpath:
    C:\win32app\bea\user_projects\domains\jonathanApplication\lib\com.ibm.mq.jar;C:\win32app\bea\user_projects\domains\jonathanApplication\lib\com.ibm.mqjms.jar;C:\win32app\bea\user_projects\domains\jonathanApplication\lib\commonservices.jar;C:\win32app\bea\user_projects\domains\jonathanApplication\lib\connector.jar;C:\win32app\bea\user_projects\domains\jonathanApplication\lib\dhbcore.jar;C:\win32app\bea\user_projects\domains\jonathanApplication\lib\fscontext.jar;C:\win32app\bea\user_projects\domains\jonathanApplication\lib\providerutil.jar>[/java]

Config

  • Get file.bindings, copy it into user_projects/domains/jonathanApplication/config/jms, rename it as .bindings (without any prefix)
  • Launch the console, login
  • JMS > JMS Modules > Create JMS System Module > Name: JmsMqModule. Leave other fields empty. > Next > target server MuleTier > Finish
  • Select JmsMqModule > New > Foreign Server > Name: MQForeignServer > keep MuleTier checked > Finish
    • Select MQForeignServer >
    • Tab Connection Factories > New >
      • Name: MQForeignConnectionFactory
      • Local JNDI Name: the JNDI name on the WebLogic side, eg: jonathanApplication/jms/connectionFactory/local (a convention I observed: WebLogic uses a slash '/' as separator, whereas clients use a dot '.')
      • Remote JNDI Name: the JNDI name on MQ side, eg: JONATHAN_APPLICATION.QCF
      • OK
    • Tab Destinations > New >
      • Queue of requests:
        • Name: JONATHAN.APPLICATION.REQUEST
        • Local JNDI Name: JONATHAN.APPLICATION.REQUEST
        • Remote JNDI Name: JONATHAN.APPLICATION.REQUEST
      • Queue of response:
        • Name: JONATHAN.APPLICATION.REPONSE
        • Local JNDI Name: JONATHAN.APPLICATION.REPONSE
        • Remote JNDI Name: JONATHAN.APPLICATION.REPONSE
      • NB: MQ object names are usually upper-case, while Java JNDI names are usually lower-case; anyway (perhaps because Windows is case-insensitive) here we use upper-case for both names.

Mule

This part of the tutorial deals with a case of Mule ESB being your client application (sending and/or receiving JMS messages).

  • Get the archive wlfullclient.jar (56MB). Alternatively, you can generate it yourself: go to the server/lib directory of your WebLogic installation (usually: C:\win32app\bea\wlserver_10.3\server\lib) and run: java -jar wljarbuilder.jar
  • Copy the archive into $MULE_HOME/lib/user
  • Copy the seven jars above (providerutil.jar, fscontext.jar, dhbcore.jar, connector.jar, commonservices.jar, com.ibm.mqjms.jar, com.ibm.mq.jar) into the same folder: $MULE_HOME/lib/user
  • You can now launch Mule. The config file is similar to any other configuration using standard JMS.
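As a rough illustration of such a configuration, the JMS connector referenced by the outbound endpoints might look like the sketch below. This is an assumption-laden example, not taken from a working setup: the connector name, provider URL, and JNDI names are illustrative (the connection factory JNDI name follows the WebLogic-side convention described earlier), and the exact attributes depend on your Mule version.

```xml
<!-- Illustrative sketch only: URL and JNDI names are assumptions to adapt -->
<jms:connector name="jmsConnector"
               specification="1.1"
               jndiInitialFactory="weblogic.jndi.WLInitialContextFactory"
               jndiProviderUrl="t3://localhost:7001"
               connectionFactoryJndiName="jonathanApplication/jms/connectionFactory/local"
               jndiDestinations="true"
               forceJndiDestinations="true"/>
```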