Posts Tagged ‘JMS’
From JMS and Message Queues to Kafka Streams: Why Kafka Had to Be Invented
For decades, enterprise systems relied on message queues and JMS-based brokers to decouple applications and ensure reliable communication. Technologies such as IBM MQ, ActiveMQ, and later RabbitMQ solved an important problem: how to move messages safely from one system to another without tight coupling.
However, as systems grew larger, more distributed, and more data-driven, the limitations of this model became increasingly apparent. Kafka — and later Kafka Streams — did not emerge because JMS and MQ were poorly designed. They emerged because those technologies were designed for a different era and a different class of problems.
What JMS and MQ Were Designed to Do
Traditional message brokers focus on delivery. A producer sends a message, the broker stores it temporarily, and a consumer receives it. Once the message is acknowledged, it is typically removed. The broker’s primary responsibility is to guarantee that messages are delivered reliably and, in some cases, transactionally.
This model works very well for command-style interactions such as order submission, workflow orchestration, and request-driven integration between systems. Messages are transient by design, consumers are expected to be online, and the system’s success is measured by how quickly and reliably messages move through it.
For many years, this was sufficient.
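The consume-once contract described above can be sketched in a few lines of plain Java. This is a conceptual illustration built on a standard collection, not the real JMS API; the class and method names are invented for the example:

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Conceptual sketch of classic queue semantics: the broker stores a
// message only until one consumer receives and acknowledges it.
public class TransientQueue {
    private final Queue<String> messages = new ArrayDeque<>();

    // Producer side: hand the message to the broker.
    public void send(String msg) {
        messages.add(msg);
    }

    // Consumer side: receiving removes the message from the broker,
    // so no other consumer (and no later replay) can ever see it.
    public String receive() {
        return messages.poll();
    }

    public int depth() {
        return messages.size();
    }
}
```

Once `receive()` returns a message, the broker has no memory of it; a consumer added later cannot recover past traffic. That is exactly the limitation the rest of the article explores.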
The Problems That Started to Appear
As companies began operating at internet scale, the assumptions underlying JMS and MQ started to break down. Data volumes increased dramatically, and systems needed to handle not thousands, but millions of events per second. Message brokers that tracked delivery state per consumer became bottlenecks, both technically and operationally.
More importantly, the nature of the data changed. Events were no longer just instructions to be executed and discarded. They became facts: user actions, transactions, logs, metrics, and behavioral signals that needed to be stored, analyzed, and revisited.
With JMS and MQ, once a message was consumed, it was gone. Reprocessing required complex duplication strategies or external storage. Adding a new consumer meant replaying data manually, if it was even possible. The broker was optimized for delivery, not for history.
At the same time, architectures became more decoupled. Multiple teams wanted to consume the same data independently, at their own pace, and for different purposes. In a traditional queue-based system, this required copying messages or creating parallel queues, increasing cost and complexity.
These pressures revealed a fundamental mismatch between what message queues were built for and what modern systems required.
The Conceptual Shift That Led to Kafka
Kafka was created to answer a different question. Instead of asking how to deliver messages efficiently, its designers asked how to store events reliably at scale and allow many consumers to read them independently.
The key idea was deceptively simple: treat data as an append-only log. Producers write events to a log, and consumers read from that log at their own pace. Events are not deleted when consumed. They are retained for a configurable period, or even indefinitely.
In this model, the broker no longer tracks who consumed what. Each consumer keeps track of its own position. This small change eliminates a major scalability bottleneck and makes replay a natural operation rather than an exceptional one.
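These two ideas — an append-only log and consumer-owned offsets — can also be sketched in plain Java. This is a toy model for illustration, not Kafka's actual API; `EventLog` and `Consumer` are invented names:

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of an append-only log: events are retained after being
// read, and each consumer tracks its own position (offset).
public class EventLog {
    private final List<String> events = new ArrayList<>();

    // Producers only ever append; nothing is deleted on consumption.
    public long append(String event) {
        events.add(event);
        return events.size() - 1;
    }

    // Reading does not mutate the log, so any number of consumers can
    // read the same events independently.
    public String read(long offset) {
        return events.get((int) offset);
    }

    public long endOffset() {
        return events.size();
    }

    // Consumers are independent: the "broker" holds no per-consumer state.
    public static class Consumer {
        private long offset = 0;

        public String poll(EventLog log) {
            return offset < log.endOffset() ? log.read(offset++) : null;
        }

        // Replay is not a special operation, just a seek to an older offset.
        public void seek(long newOffset) {
            offset = newOffset;
        }
    }
}
```

Note what is absent: there is no acknowledgment, no delete-on-consume, and no shared consumer state — adding a tenth consumer costs the log nothing.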
Kafka’s architecture reflects this shift. It is disk-first rather than memory-first, optimized for sequential writes and reads. It scales horizontally through partitioning. It treats durability and throughput as complementary goals rather than trade-offs.
Kafka was not created to replace message queues; it was created to solve problems message queues were never meant to solve.
From Transport to Platform: Why Kafka Streams Exists
Kafka alone provides storage and distribution of events, but it does not process them. Early Kafka users still needed external systems to transform, aggregate, and analyze data flowing through Kafka.
Kafka Streams was created to close this gap.
Instead of introducing another centralized processing cluster, Kafka Streams embeds stream processing directly into applications. This is a deliberate contrast with both JMS consumers and large external processing frameworks.
In a JMS-based system, consumers typically process messages one at a time, often statelessly, and rely on external databases for aggregation and state. Rebuilding state after a failure is complex and error-prone.
Kafka Streams, by contrast, assumes that stateful processing is normal. It provides abstractions for event streams and for state that evolves over time. It stores state locally for performance and backs it up to Kafka so it can be restored automatically. Processing logic, state, and data history are all aligned around the same event log.
This approach turns Kafka from a passive transport layer into an active data platform.
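The idea of local state backed by a replayable event log can be sketched in plain Java as well. This is a conceptual toy, not the Kafka Streams API; `CountingProcessor` and its methods are invented for the example:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy model of a stateful stream processor: state lives locally in the
// application, and because the input log is retained, the state can
// always be rebuilt by replaying that log from the beginning.
public class CountingProcessor {
    private final Map<String, Long> store = new HashMap<>(); // local state

    // Process one event: update the running count for its key.
    public void process(String key) {
        store.merge(key, 1L, Long::sum);
    }

    public long count(String key) {
        return store.getOrDefault(key, 0L);
    }

    // Recovery after a crash: replay the retained event log to restore
    // the state store, instead of reloading from an external database.
    public static CountingProcessor restoreFrom(List<String> eventLog) {
        CountingProcessor p = new CountingProcessor();
        eventLog.forEach(p::process);
        return p;
    }
}
```

In real Kafka Streams the same pattern is provided by state stores backed by changelog topics; the point here is only that durable, replayable input makes state recovery mechanical rather than error-prone.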
What Kafka and Kafka Streams Do Differently
The fundamental difference between JMS/MQ and Kafka is not syntax or APIs, but philosophy.
Message queues focus on messages as transient instructions. Kafka focuses on events as durable facts. Message queues optimize for delivery guarantees. Kafka optimizes for scalability, retention, and replay. Message queues treat consumers as part of the broker’s responsibility. Kafka treats consumers as independent actors.
Kafka Streams builds on this by assuming that computation belongs close to the data. Instead of shipping data to a processing engine, it ships processing logic to where the data already is. This inversion dramatically simplifies architectures while increasing reliability.
Why Someone “Woke Up and Created Kafka”
Kafka was born out of necessity. At companies like LinkedIn, existing messaging systems could not handle the volume, variety, and longevity of data they were producing. They needed a system that could ingest everything, store it reliably, and make it available to many consumers without coordination.
Kafka Streams followed naturally. Once data became durable and replayable, processing it in a stateless, fire-and-forget manner was no longer sufficient. Systems needed to compute continuously, maintain state, and recover automatically — all while remaining simple to operate.
Kafka and Kafka Streams are the result of rethinking messaging from first principles, in response to scale, data-driven architectures, and the need to treat events as first-class citizens.
Conclusion
JMS and traditional message queues remain excellent tools for command-based integration and transactional workflows. Kafka was not designed to replace them, but to address a different category of problems.
Kafka introduced the idea of a distributed, durable event log as the backbone of modern systems. Kafka Streams extended that idea by embedding real-time processing directly into the applications that consume the log.
Problem: Spring JMS MessageListener Stuck / Not Receiving Messages
Scenario
A Spring Boot application using ActiveMQ with @JmsListener suddenly stops receiving messages after running for a while. No errors in logs, and the queue keeps growing, but the consumers seem idle.
Setup
- ActiveMQConnectionFactory was used.
- The queue (myQueue) was filling up.
- Restarting the app temporarily fixed the issue.
Investigation
1. Checked ActiveMQ Monitoring (Web Console)
   - Messages were enqueued but not dequeued.
   - Consumers were still active, but not processing.
2. Thread Dump Analysis
   - Found that listener threads were stuck in a waiting state.
   - The problem only occurred under high load.
3. Checked JMS Acknowledgment Mode
   - The default AUTO_ACKNOWLEDGE was used.
   - Suspected an issue with message acknowledgment.
4. Enabled Debug Logging
   - Added debug logging for the JMS components.
   - Found repeated log entries hinting at connection issues.
5. Tested with a Different Message Broker
   - Using Artemis JMS instead of ActiveMQ resolved the issue.
   - This indicated that the problem was broker-specific.
Root Cause
ActiveMQ’s TCP connection was silently dropped, but the JMS client did not detect it.
- When the connection is lost, DefaultMessageListenerContainer doesn’t always recover properly.
- ActiveMQ does not always notify clients of broken connections.
- No exceptions were thrown because the connection was technically “alive” but non-functional.
Fix
1. Enabled keepAlive in the ActiveMQ connection.
2. Forced reconnection with an ExceptionListener.
   - Implemented an ExceptionListener on the connection, so that if the connection was dropped, the listener was restarted.
3. Switched to DefaultJmsListenerContainerFactory with DMLC (DefaultMessageListenerContainer).
   - SimpleMessageListenerContainer was less reliable in handling reconnections.
Final Outcome
✅ After applying these fixes, the issue never reoccurred.
🚀 The app remained stable even under high load.
Key Takeaways
- Silent disconnections in ActiveMQ can cause message listeners to hang.
- Enable keepAlive and optimizeAcknowledge for reliable connections.
- Use DefaultJmsListenerContainerFactory with DMLC instead of SMLC.
- Implement an ExceptionListener to restart the JMS connection if necessary.
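The keepAlive-related part of the fix can be expressed as client-side broker URL options. This is a sketch, not the configuration from the original incident: `broker-host` is a placeholder, and exact option semantics (notably `maxReconnectAttempts=-1`) vary across ActiveMQ versions.

```
# Client-side ActiveMQ URL: the failover transport handles reconnection;
# keepAlive and maxInactivityDuration help detect silently dropped
# TCP connections instead of leaving listeners hanging.
failover:(tcp://broker-host:61616?keepAlive=true&wireFormat.maxInactivityDuration=30000)?maxReconnectAttempts=-1
```

Wrapping the transport in `failover:(...)` is what gives the JMS client automatic reconnection, complementing the ExceptionListener approach described above.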
When WebLogic always routes on the same node of the cluster…
Case
For the past couple of days I have been seeing the following issue on my WebLogic server: an application is deployed on a cluster which references two nodes. Load balancing (in round-robin) is activated for JMS dispatching.
- Yet, all JMS messages are received only by one node (let’s say “the first”), none is received by the other (let’s say “the second”).
- When the 1st node falls, the 2nd receives the messages.
- When the 1st node is started up again, the 2nd keeps on receiving the messages.
- When the 2nd node falls, the 1st receives the messages
- and so on
Fix
In the WebLogic console, go to JMS Modules. In the table of resources, select the connection factory, then go to the tab Configuration > Load Balance and uncheck “Server Affinity Enabled”.
Now it should work.
Many thanks to Jeffrey A. West for his help via Twitter.
BEA / JMSExceptions 045101
Case
I used a RuntimeTest to send a JMS message on a WebLogic application, with native WebLogic hosting of the queues (distributed queues, to be more accurate). The test was OK.
I clustered the application. When I execute the same test, I get the following error:
[java][JMSExceptions:045101]The destination name passed to createTopic or createQueue "JONATHAN_LALOU_JMS_DISTRIBUTED_QUEUE" is invalid. If the destination name does not contain a "/" character then it must be the name of a distributed destination that is available in the cluster to which the client is attached. If it does contain a "/" character then the string before the "/" must be the name of a JMSServer or a ".". The string after the "/" is the name of a the desired destination. If the "./" version of the string is used then any destination with the given name on the local WLS server will be returned.[/java]
Fix
Since the error message is rather explicit, I tried to add a slash ('/'), a dot ('.'), or both ('./'), but none worked.
To fix the issue, you have to prefix the queue name with the JMS module name and an exclamation mark ('!'), in the RuntimeTest configuration file, eg replace:
[xml]<property name="defaultDestinationName" value="JONATHAN_LALOU_JMS_DISTRIBUTED_QUEUE"/>[/xml]
with:
[xml]<property name="defaultDestinationName" value="JmsWeblogicNatureModule!JONATHAN_LALOU_JMS_DISTRIBUTED_QUEUE"/>[/xml]
Mule / MQJMS3000: failed to create a temporary queue from SYSTEM.DEFAULT.MODEL.QUEUE
Case
I have a Mule workflow whose outbound endpoint is a <jms:outbound-endpoint>. The destination queue is hosted on MQ Series and accessed through a WebLogic 10.3.3 bridge.
I get the following error:
MQJMS3000: failed to create a temporary queue from SYSTEM.DEFAULT.MODEL.QUEUE
Complete Stacktrace
[java]2010-11-03 13:03:11,421 ERROR mule.DefaultExceptionStrategy – Caught exception in Exception Strategy: MQJMS3000: failed to create a temporary queue from SYSTEM.DEFAULT.MODEL.QUEUE
javax.jms.JMSException: MQJMS3000: failed to create a temporary queue from SYSTEM.DEFAULT.MODEL.QUEUE
at com.ibm.mq.jms.services.ConfigEnvironment.newException(ConfigEnvironment.java:644)
at com.ibm.mq.jms.MQConnection.createTemporaryQueue(MQConnection.java:2958)
at com.ibm.mq.jms.MQSession.createTemporaryQueue(MQSession.java:4650)
at com.ibm.mq.jms.MQQueueSession.createTemporaryQueue(MQQueueSession.java:286)
at org.mule.transport.jms.Jms11Support.createTemporaryDestination(Jms11Support.java:247)
at org.mule.transport.jms.JmsMessageDispatcher.getReplyToDestination(JmsMessageDispatcher.java:483)
at org.mule.transport.jms.JmsMessageDispatcher.dispatchMessage(JmsMessageDispatcher.java:171)
at org.mule.transport.jms.JmsMessageDispatcher.doDispatch(JmsMessageDispatcher.java:73)
at org.mule.transport.AbstractMessageDispatcher$Worker.run(AbstractMessageDispatcher.java:262)
at org.mule.work.WorkerContext.run(WorkerContext.java:310)
at edu.emory.mathcs.backport.java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1061)
at edu.emory.mathcs.backport.java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:575)
at java.lang.Thread.run(Thread.java:619)[/java]
Explanation
A similar issue is described here on Mule support forum. Richard Swart wrote:
This not really mule specific error but an MQ authorization error. The QueueSession.createTemporaryQueue method needs access to the model queue that is defined in the QueueConnectionFactory temporaryModel field (by default this is SYSTEM.DEFAULT.MODEL.QUEUE).
Quick Fix
To fix the issue: on MQ server side, grant visibility to client applications on the default SYSTEM.DEFAULT.MODEL.QUEUE
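On the MQ side, such a grant is typically applied with the `setmqaut` command. The following is an illustrative sketch only: `QM1` and the `mqclients` group are placeholders, and the exact authority set your client needs depends on your setup.

```
# Grant the client applications' group access to the default model queue,
# so that temporary (dynamic) queues can be created from it.
setmqaut -m QM1 -n SYSTEM.DEFAULT.MODEL.QUEUE -t queue -g mqclients +dsp +get +put
```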
Tutorial: from an application, make a clustered application, within WebLogic 10
Abstract
You have a non-clustered installation, on the host with DNS name jonathanDevDesktop, with an admin (port: 7001), a muletier (port: 7003) and a webtier (port: 7005) instances.
You need to set up your muletier as a clustered installation, with two nodes on the same server. The second node will be deployed on port 7007.
We assume you have a configured JMS module (in our case: JmsMqModule, even though the bridge between WebLogic and MQ has no impact here).
Process
Batches
- Copy $DOMAINS\jonathanApplication\start-muletier-server.bat as $DOMAINS\jonathanApplication\start-muletier-server-2.bat
- Edit it:
  - Possibly, modify the debug port (usually: 5006)
  - Replace the line
    call "%DOMAIN_HOME%\bin\startManagedWebLogic.cmd" muletier t3://jonathanDevDesktop:7001
    with
    call "%DOMAIN_HOME%\bin\startManagedWebLogic.cmd" muletier2 t3://jonathanDevDesktop:7001
Second Node Creation
- The following points are optional:
  - Copy the folder %DOMAIN_HOME%\servers\muletier as %DOMAIN_HOME%\servers\muletier2
  - Delete the folders %DOMAIN_HOME%\servers\muletier2\cache and %DOMAIN_HOME%\servers\muletier2\logs
- Stop the server muletier
- On the WebLogic console:
  - Servers > New > Server Name: muletier2, Server Listen Port: 7007 > check Yes, create a new cluster for this server. > Next
  - Name: jonathanApplication.cluster.muletier > Messaging Mode: Multicast, Multicast Address: 239.235.0.4, Multicast Port: 5777
  - Clusters > jonathanApplication.cluster.muletier > Configuration > Servers > Select a server: muletier
  - Clusters > jonathanApplication.cluster.muletier > Configuration > Servers > Select a server: muletier2
- Start the instances of muletier and muletier2 in MS-DOS consoles.
- On the WebLogic console:
  - Deployments > jonathanApplication-web (the Mule instance) > Targets > check “jonathanApplication.cluster.muletier” and “All servers in the cluster” > Save
- On the muletier2 DOS console, you can see that the application is deployed.
JMS Configuration
The deployment of JMS in a clustered environment is a little tricky.
- On the WebLogic console: JMS Modules > JmsMqModule > Targets > check “jonathanApplication.cluster.muletier” and “All servers in the cluster”
- Even though it is not required, restart your muletiers. Then you can send messages on either port 7003 or 7007; they will be popped and handled the same way.
Tutorial: Use WebSphere MQ as JMS provider within WebLogic 10.3.3, and Mule ESB as a client
Abstract
You have an application deployed on WebLogic 10 (used version for this tutorial: 10.3.3). You have to use an external provider for JMS, in our case MQ Series / WebSphere MQ.
The client side is a Mule ESB launched in standalone.
Prerequisites
You have:
- a running WebLogic 10 with an admin instance and another instance, in our case: Muletier.
- a file file.bindings, used for MQ.
JARs installation
- Stop all your WebLogic 10 running instances.
- Get the JARs from the MQ Series folders: providerutil.jar, fscontext.jar, dhbcore.jar, connector.jar, commonservices.jar, com.ibm.mqjms.jar, com.ibm.mq.jar
- Copy them into your domain’s additional libraries folder (usually: user_projects/domains/jonathanApplication/lib/)
- Start the WebLogic 10 admin instance. A block like this should appear:
[java]<Oct 15, 2010 12:09:21 PM CEST> <Notice> <WebLogicServer> <BEA-000395> <Following extensions directory contents added to the end of the classpath:
C:\win32app\bea\user_projects\domains\jonathanApplication\lib\com.ibm.mq.jar;C:\win32app\bea\user_projects\domains\jonathanApplication\lib\com.ibm.mqjms.jar;C:\win32app\bea\user_projects\domains\jonathanApplication\lib\commonservices.jar;C:\win32app\bea\user_projects\domains\jonathanApplication\lib\connector.jar;C:\win32app\bea\user_projects\domains\jonathanApplication\lib\dhbcore.jar;C:\win32app\bea\user_projects\domains\jonathanApplication\lib\fscontext.jar;C:\win32app\bea\
user_projects\domains\jonathanApplication\lib\providerutil.jar>[/java]
Config
- Get file.bindings, copy it into user_projects/domains/jonathanApplication/config/jms, and rename it as .bindings (without any prefix)
- Launch the console and log in
- JMS > JMS Modules > Create JMS System Module > Name: JmsMqModule. Leave other fields empty. > Next > target server MuleTier > Finish
- Select JmsMqModule > New > Foreign Server > Name: MQForeignServer > keep MuleTier checked > Finish
- Select MQForeignServer >
  - JNDI Initial Context Factory: replace weblogic.jndi.WLInitialContextFactory with: com.sun.jndi.fscontext.RefFSContextFactory
  - JNDI Connection URL: set the URI of the folder containing the .bindings file, eg: file://c/win32app/bea/user_projects/domains/jonathanApplication/config/jms
  - Tab Connection Factories > New >
    - Name: MQForeignConnectionFactory
    - Local JNDI Name: the JNDI name on the WebLogic side, eg: jonathanApplication/jms/connectionFactory/local (a convention I could observe: the separator on WebLogic is a slash '/', unlike clients, for which the separator is a dot '.')
    - Remote JNDI Name: the JNDI name on the MQ side, eg: JONATHAN_APPLICATION.QCF
    - OK
  - Tab Destinations > New >
    - Queue of requests:
      - Name: JONATHAN.APPLICATION.REQUEST
      - Local JNDI Name: JONATHAN.APPLICATION.REQUEST
      - Remote JNDI Name: JONATHAN.APPLICATION.REQUEST
    - Queue of responses:
      - Name: JONATHAN.APPLICATION.REPONSE
      - Local JNDI Name: JONATHAN.APPLICATION.REPONSE
      - Remote JNDI Name: JONATHAN.APPLICATION.REPONSE
    - NB: usually, MQ data are upper-cased and Java JNDI names are lower-cased; anyway (because Windows does not match case?) here we use uppercase for both names.
Mule
This part of the tutorial deals with a case of Mule ESB being your client application (sending and/or receiving JMS messages).
- Get the archive wlfullclient.jar (56 MB). Alternatively, you can generate it yourself: go to the server/lib directory of your WebLogic installation (usually: C:\win32app\bea\wlserver_10.3\server\lib) and run: java -jar wljarbuilder.jar
- Copy the archive into $MULE_HOME/lib/user
- Copy the seven JARs above (providerutil.jar, fscontext.jar, dhbcore.jar, connector.jar, commonservices.jar, com.ibm.mqjms.jar, com.ibm.mq.jar) into the same folder: $MULE_HOME/lib/user
- You can launch the Mule. The config file is similar to any other configuration using standard JMS.
Tutorial: Tomcat / OpenJMS integration
Install and Config
- Let’s assume you would like to run OpenJMS and Tomcat on the same server, eg myLocalServer
- Download OpenJMS from this page.
- Unzip the archive and extract it to C:\exe\openjms-0.7.7-beta-1
- Set an environment variable: set OPENJMS_HOME=C:\exe\openjms-0.7.7-beta-1
- Take the archive $OPENJMS_HOME/lib/openjms-tunnel-0.7.7-beta-1.war:
  - copy it to: $CATALINA_HOME/webapps
  - rename it as: openjms-tunnel.war
- Edit $OPENJMS_HOME/config/openjms.xml:
  - Before the ending tag </connectors>, add the block:
    <Connector scheme="http">
      <ConnectionFactories>
        <ConnectionFactory name="HTTPConnectionFactory"/>
      </ConnectionFactories>
    </Connector>
  - After the ending tag </connectors>, add the block:
    <HttpConfiguration port="3030" bindAll="true" webServerHost="myLocalServer" webServerPort="8080" servlet="/openjms-tunnel/tunnel"/>
Run applications
- Launch $OPENJMS_HOME/bin/startup.bat. The following output is expected:
  OpenJMS 0.7.7-beta-1
  The OpenJMS Group. (C) 1999-2007. All rights reserved.
  http://openjms.sourceforge.net
  15:15:27.531 INFO [Main Thread] - Server accepting connections on tcp://myLocalServer:3035/
  15:15:27.547 INFO [Main Thread] - JNDI service accepting connections on tcp://myLocalServer:3035/
  15:15:27.547 INFO [Main Thread] - Admin service accepting connections on tcp://myLocalServer:3035/
  15:15:27.609 INFO [Main Thread] - Server accepting connections on rmi://myLocalServer:1099/
  15:15:27.609 INFO [Main Thread] - JNDI service accepting connections on rmi://myLocalServer:1099/
  15:15:27.625 INFO [Main Thread] - Admin service accepting connections on rmi://myLocalServer:1099/
  15:15:27.625 INFO [Main Thread] - Server accepting connections on http-server://myLocalServer:3030/
  15:15:27.625 INFO [Main Thread] - JNDI service accepting connections on http-server://myLocalServer:3030/
  15:15:27.625 INFO [Main Thread] - Admin service accepting connections on http-server://myLocalServer:3030/
- Launch Tomcat. A webapp with path /openjms-tunnel and display name “OpenJMS HTTP tunnel” should appear.
Checks
- Open Console² or an MS-DOS prompt
- Go to $OPENJMS_HOME/examples/basic
- Run: build. This will compile all the examples.
Check that OpenJMS is OK:
- Edit jndi.properties:
  - Comment the property java.naming.provider.url
  - Add the line: java.naming.provider.url=tcp://myLocalServer:3035
- Run: run Listener queue1
- Open a second tab
- Run: run Sender queue1 5
- Expected output in the second tab:
  C:\exe\openjms-0.7.7-beta-1\examples\basic>run Sender queue1 5
  Using OPENJMS_HOME: ..\..
  Using JAVA_HOME: C:\exe\beaweblo922\jdk150_10
  Using CLASSPATH: .\;..\..\lib\openjms-0.7.7-beta-1.jar
  Sent: Message 1
  Sent: Message 2
  Sent: Message 3
  Sent: Message 4
  Sent: Message 5
- Expected output in the first tab:
  C:\exe\openjms-0.7.7-beta-1\examples\basic>run Listener queue1
  Using OPENJMS_HOME: C:\exe\openjms-0.7.7-beta-1
  Using JAVA_HOME: C:\exe\beaweblo922\jdk150_10
  Using CLASSPATH: .\;C:\exe\openjms-0.7.7-beta-1\lib\openjms-0.7.7-beta-1.jar
  Waiting for messages...
  Press [return] to quit
  Received: Message 1
  Received: Message 2
  Received: Message 3
  Received: Message 4
  Received: Message 5
Check that OpenJMS/Tomcat link is OK:
Manual Check
- Stop the Listener instance launched earlier
- Edit jndi.properties:
  - Comment the line java.naming.provider.url=tcp://myLocalServer:3035
  - Add the line: java.naming.provider.url=http://myLocalServer:8080 (this is the Tomcat manager URL)
- Run: run Listener queue1
- Open a second tab
- Run: run Sender queue1 5
- The expected outputs are the same as above.
GUI Check
- Stop the Listener instance launched earlier
- Ensure jndi.properties contains the line: java.naming.provider.url=http://myLocalServer:8080
- Run: $OPENJMS_HOME/bin/admin.bat
- A Swing application should start.
- Go to: Actions > Connections > Online
- The queue queue1 should be followed by a ‘0’.
- Run: run Sender queue1 50
- Action > Refresh
- The queue queue1 should be followed by a ‘50’.
- Run: run Listener queue1
- Action > Refresh
- The queue queue1 should be followed by a ‘0’.
java.io.StreamCorruptedException: invalid type code: 31
Context:
Client-server communication over JMS.
Stacktrace:
[java]Caused by: java.rmi.UnmarshalException: failed to unmarshal class weblogic.security.acl.internal.AuthenticatedUser; nested exception is:
java.io.StreamCorruptedException: invalid type code: 31
at weblogic.rjvm.ResponseImpl.unmarshalReturn(ResponseImpl.java:203)
at weblogic.rmi.internal.BasicRemoteRef.invoke(BasicRemoteRef.java:224)
at weblogic.common.internal.RMIBootServiceImpl_921_WLStub.authenticate(Unknown Source)
at weblogic.security.acl.internal.Security$1.run(Security.java:185)
at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:363)
at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:147)
at weblogic.security.acl.internal.Security.authenticate(Security.java:181)
at weblogic.jndi.WLInitialContextFactoryDelegate.authenticateRemotely(WLInitialContextFactoryDelegate.java:726)
at weblogic.jndi.WLInitialContextFactoryDelegate.pushSubject(WLInitialContextFactoryDelegate.java:659)[/java]
Explanation – Fix
The client JVM was running Java 1.6, while the server was running Java 1.5.
To fix the issue, the client must be run with Java 1.5.
Alternatively, the client may launch the JVM with the option -Dsun.lang.ClassLoader.allowArraySyntax=true.
javax.naming.ConfigurationException / java.net.MalformedURLException
Context
I have to send JMS messages on queues on clustered servers: t3://firstServer:1234 and t3://secondServer:5678.
The destination queues are retrieved in Spring, thanks to a property like:
[xml]<property name="providerURL" value="t3://firstServer:1234,t3://secondServer:5678"/>[/xml]
Error:
I receive the following error:
[java]javax.naming.ConfigurationException [Root exception is java.net.MalformedURLException: port expected: t3://firstServer:1234,t3://secondServer:5678][/java]
Explanation and fix:
When you send messages to many queues, you must not repeat the protocol (here: t3://)! Fixing the issue is very simple: you have to remove the second t3:// in the Spring property:
[xml]<property name="providerURL" value="t3://firstServer:1234,secondServer:5678"/>[/xml]