The most obvious place to check which TCP ports your WebSphere MQ channel listeners are configured to use is:
DISPLAY LISTENER(*) PORT
in runmqsc.
But that’s not the only place to check. It’s easy to miss one of the places, so in this post, I’ll quickly outline the different factors that affect the TCP port that a channel listener will use.
- PORT attribute of the channel listener
The most obvious place, and the easiest one to start with. An explicit PORT attribute in a channel listener definition overrides everything else: this is the port that the listener will use. If it is set to 0, then the next place to check is:
- Port setting in the qm.ini file or Windows Registry
On Windows, this can be found in the system Registry at:
HKEY_LOCAL_MACHINE\SOFTWARE\IBM\MQSeries\CurrentVersion\Configuration\QueueManager\QMGRNAME\TCP\Port
On UNIX, it will be found in the TCP stanza of the queue manager’s qm.ini file (/var/mqm/qmgrs/QMGRNAME/qm.ini):
TCP:
   Port=9999
If you are using WebSphere MQ Explorer, the value can be found in the Queue Manager Properties dialog on the TCP page.
Just right-click on the queue manager and click on ‘Properties’, then click on TCP.
This is the default port number for the queue manager, and the value here will be used for any channel listeners with a PORT value of 0.
If this value is not set (in Windows, if the registry key is not present or in UNIX if the Port attribute is not included in the qm.ini file), then the value that will be used is:
- mqseries entry in /etc/services
On Windows, this can be found at:
C:\WINDOWS\system32\drivers\etc\services
On UNIX, this can be found at:
/etc/services
If an mqseries entry is contained in this file:
mqseries 12345/tcp # mqseries default port number
then this port number is used for channel listeners with a PORT attribute of 0 on queue managers without a default port number in the Registry (or qm.ini).
If an mqseries entry is not included in the /etc/services file, then the value that is used is:
- 1414
The default for WebSphere MQ channel listeners. In the absence of the above settings, we use port 1414.
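To make the lookup order concrete, here is a minimal Python sketch of the precedence chain described above. The function names and the qm.ini parsing are illustrative only, not part of WMQ itself; step 3 uses Python’s getservbyname, which consults the same /etc/services database.

```python
import socket

def port_from_qmini(qmini_text):
    """Return the Port value from the TCP: stanza of a qm.ini file, or None."""
    in_tcp_stanza = False
    for line in qmini_text.splitlines():
        stripped = line.strip()
        if stripped.endswith(":"):                 # start of a new stanza
            in_tcp_stanza = (stripped == "TCP:")
        elif in_tcp_stanza and stripped.replace(" ", "").startswith("Port="):
            return int(stripped.split("=", 1)[1].strip())
    return None

def resolve_listener_port(listener_port, qmini_text=""):
    """Apply the four-step lookup order for a channel listener's TCP port."""
    if listener_port:                              # 1. explicit PORT attribute
        return listener_port
    qmgr_default = port_from_qmini(qmini_text)     # 2. qm.ini / registry default
    if qmgr_default:
        return qmgr_default
    try:                                           # 3. mqseries entry in /etc/services
        return socket.getservbyname("mqseries", "tcp")
    except OSError:
        return 1414                                # 4. the built-in WMQ default
```

For example, resolve_listener_port(0, "TCP:\n   Port=9999") returns 9999, while resolve_listener_port(1881) returns 1881 regardless of any defaults.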
That’s not quite the whole story, though. There are a few quirks that it’s also worth being aware of:
WebSphere MQ Explorer users
The first time you look at the queue manager’s default Port value with WMQ Explorer, it shows ‘1414’. But, as implied above, new queue managers do not actually have a TCP port setting: Explorer is just showing you the effective port value, derived from the WebSphere MQ default. (This can be verified in regedit: the registry key won’t exist until the first time you alter the queue manager’s TCP Port setting in WMQ Explorer.) Once altered in WMQ Explorer, the value is considered to be explicitly set – even if you set it back to 1414 after changing it, you will notice that the registry key remains.
So, “1414” in WMQ Explorer can mean either that 1414 has been explicitly set, or that nothing is set and WebSphere MQ is defaulting to 1414.
SYSTEM.DEFAULT.LISTENER.TCP
A channel listener “SYSTEM.DEFAULT.LISTENER.TCP” is created when a queue manager is created. This is created by WMQ (whether the queue manager is created by WMQ Explorer, or at the command line). It does not specify a port number, and so would use the queue manager’s default port number setting if started.
However, it is not a listener I would expect to see people using – it is created for the queue manager’s own use. Most importantly, it serves as the template for the creation of new listener objects: if you delete it, you will be unable to create new channel listener objects, as there is nothing left to use as a template. As an important system object, it is best left alone.
Creating queue managers with WebSphere MQ Explorer
A channel listener may be created by the WMQ Explorer ‘Create Queue Manager’ wizard. Step 4 (‘Enter listener options’) of the ‘Create Queue Manager’ wizard specifies the creation of a channel listener with an explicit port number specified. The listener which it will create is named LISTENER.TCP.
By default, the option to do this is enabled. (Note that this is an optional feature of the WMQ Explorer wizard and is not done if creating queue managers at the command line.)
This is not setting the default port number for the queue manager; it is just specifying the port number to use for a single channel listener. As this is a channel listener with its own port number explicitly provided, it is not affected by the queue manager’s default port number setting.
Only one listener on a port at a time
Finally, only one listener can use a port number on a single machine at any one time. Although multiple listeners can be defined to use the same port number, only one can be running at any one time. Any attempts to start a second listener on a port number already in use (whether by an automatically or manually defined listener) will fail – and that second listener’s status will remain ‘Stopped’.
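The underlying reason is the operating system itself: only one process can bind and listen on a given TCP port at a time. A small Python sketch, independent of WMQ, shows the OS rejecting the second bind:

```python
import socket

# First "listener": bind to port 0 so the OS picks a free port for us.
first = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
first.bind(("127.0.0.1", 0))
first.listen(1)
port = first.getsockname()[1]

# Second "listener" on the same port: the bind fails with EADDRINUSE,
# which is why a second WMQ listener on an in-use port stays 'Stopped'.
second = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    second.bind(("127.0.0.1", port))
    second_bind_ok = True
except OSError:
    second_bind_ok = False
finally:
    second.close()
    first.close()

print(second_bind_ok)  # False
```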
Update: I should have pointed out that a discussion about channel listener objects is only relevant for WebSphere MQ version 6 and above.
However, where I talk about channel listener object definitions, the same is true for channel listener processes started manually using runmqlsr:
- runmqlsr commands started with an explicit port number (using -p) are equivalent to setting the PORT attribute of a channel listener object
- runmqlsr commands started without an explicit port number (omitting -p) are equivalent to channel listener objects with a PORT attribute of 0
Update 2: Inserted bullet-point section about etc/services as suggested by Russell below.
17 comments
February 7, 2007 at 12:00 pm
Russell Finn
There’s actually another place from which the default port number can be derived: /etc/services (or C:\WINDOWS\system32\drivers\etc\services on Windows).
If, in this file, you have a line like:
mqseries 12345/tcp # mqseries default port number
then this is used in preference to the built-in default of 1414.
If there is an entry in the queue manager qm.ini (or TCP registry folder on Windows) then that takes precedence over the services file.
To summarize, the port number used comes from these places in this order:
The LISTENER object / the runmqlsr -p parameter
qm.ini / TCP folder in registry
/etc/services
1414
Also note that the default port processing applies to the CONNAME of sender-type channel definitions that do not specify a port number.
February 7, 2007 at 12:30 pm
Dale Lane
Ah – good point! I’d completely forgotten about that.
Still, I did say that “it’s easy to miss one of the places”, so I guess in a round-about way you’ve helped prove my point 😉
I’ve updated the post to include services… the post is getting longer than I originally anticipated it would be!
Thanks
Dale
February 7, 2007 at 6:34 pm
peterbroadhurst
Further proving Dale’s point; there’s yet another on Linux systems….
/etc/xinetd.conf
or
/etc/xinetd.d/
Here’s a link to the WMQ V6.0 info-center:
http://publib.boulder.ibm.com/infocenter/wmqv6/v6r0/topic/com.ibm.mq.csqzae.doc/lxusexinetd.htm
And a technote:
http://www-1.ibm.com/support/docview.wss?rs=171&uid=swg21180725
March 2, 2007 at 7:12 pm
Doug Eckert
I’ve got a case opened for the below, but found this thread and thought someone may have some insight. We get the following:
sbkdjiblade1# /usr/mqm/bin/runmqlsr -m QM_djiblade -t tcp -p 1414
5724-H72 (C) Copyright IBM Corp. 1994, 2005. ALL RIGHTS RESERVED.
03/02/07 14:10:21 AMQ9915: The IP protocol ‘IPv4/IPv6’ is not available on the system.
Obviously, IPv4 is available as I’m telnet’d in. I’ve been looking at truss output to see how it’s determining that IPv4/IPv6 isn’t available and am coming up empty.
Any thoughts?
March 4, 2007 at 10:48 pm
Dale Lane
This looks like something that’d be best handled by the guys in Service. If it’s a problem in WebSphere MQ that is causing an error message to be output in error, then they’ll write an APAR so that this can be documented for others.
Sounds like an interesting one – I look forward to reading it!
March 19, 2007 at 7:48 pm
Shirley Fraser
Recommended practice?…..
If I have created LISTENER.TCP with port 1881 (and, it is the only port I intend to associate with the queue manager), is it best practice to set 1881 as the default queue manager port so that the MQExplorer display is more useful?
I guess the downside of that would be that 1881 would be in the template for new listener objects. But then, I can’t foresee why I would ever want a default port in a template.
And what if… I did intend to create a second listener object for this queue manager on a new port…. Would it be better to set the default queue manager port to 0000 or something, to tip me off that there is more than one listener object?
I hope I have understood the blog correctly and my questions make sense. Thanks for this blog, it is very useful.
March 20, 2007 at 8:52 am
John Manning
Multiple channel question….
We have a queue manager running (on Solaris 9) with a single channel for multiple applications with persistent and non-persistent messages. Lately, we’ve had problems where the transmit queue appears to block between the two queue managers.
Questions:
1. Can a channel block and for what reasons?
2. We have been told to create multiple channels (same listener port) to avoid this problem. I was under the impression that multiple channels only logically split the traffic and have no influence on performance. What is correct?
March 20, 2007 at 3:04 pm
peterbroadhurst
Great questions guys, here are my two cents:
Shirley
If you have a machine with lots of local queue managers (and a local copy of the WMQ Explorer) then you are right, something can be gained by:
– Altering the ‘Default Port’ in the ‘TCP’ stanza of the qmgr config (qm.ini/registry)
– Defining a listener object on each with PORT(0) – so it picks up the default
– Moving the ‘TCP Port’ column in the Queue Managers view of the WMQ Explorer to the front
On the other hand, if you connect remotely (over a client connection) to your queue managers – the ‘TCP Port’ column will be blank.
This is because the WMQ Explorer can’t remotely read the qm.ini/registry.
The ‘Listeners’ folder would show zero in the ‘Port’ column, because that’s how the object is defined.
To find out the port the queue manager is actually listening on, you’d need to right click on the listener object and select ‘Status…’.
Having said that, this is probably always the best way to check the actual port of a queue manager anyway – as this is live status, rather than an object definition (which could have changed since the listener was started).
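For those working at the command line rather than in WMQ Explorer, the equivalent live check (on WMQ v6 and above) is the DISPLAY LSSTATUS command in runmqsc, which reports the port a running listener is actually bound to, rather than the object definition:

```
DISPLAY LSSTATUS(*) PORT
```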
John
There’s not a one-size-fits-all answer here.
If you have large and small, persistent and non-persistent, high-QOS and low-QOS messages flowing between two queue managers, multiple channels are often the correct answer.
The main reason is how channels use batches to ensure exactly-once delivery of messages.
Basically, when transferring persistent messages over a channel (or non-persistent messages with NPMCLASS(HIGH)), the channel uses two units of work (one on each qmgr) coordinated between the two MCAs to commit messages in batches.
This can introduce delays as follows…
Persistent delaying non-persistent:
Non-persistent messages (with default NPMCLASS(LOW)) usually are sent over a channel immediately, and very fast – as they require no confirmation from the other side, and no units of work to be committed.
However, if you have persistent messages flowing down the same channel, then there will be times (at the end of each batch) where the channel is tied up confirming the batch, and committing units of work – hence unable to squirt more non-persistent messages over the channel.
Large persistent messages delaying small persistent messages:
If you have both large and small persistent messages flowing over a channel, you are likely to have both large and small messages in each batch.
This means that a small message in the same batch as a large message may not become available until after the large messages have travelled over the channel and been committed to their destination queue(s) on the remote queue manager as part of the same unit-of-work.
Delays while WMQ fills a batch
The default for the BATCHINT parameter is zero – which means that as soon as a transmission queue becomes empty the batch is committed.
This default is intended to minimise latency for persistent messages, but is not always the best for everybody.
I can think of two cases in which it might be better to let the batch become more full before committing it:
– The more time a channel spends committing batches (lower batch size) the more delays are introduced for non-persistent messages travelling over that channel. So if latency of persistent messages is less of an issue for you than latency of non-persistent messages on the same channel, increasing the BATCHINT can be useful.
– Committing a batch is expensive. If persistent messages arrive on the xmit queue with only very short delays between them, then a small BATCHINT (smaller than the time required to do a send+receive ‘line-turnaround’ over your network and commit a single message) can actually reduce latency.
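In MQSC terms, this tuning is done on the sender channel definition. BATCHINT is specified in milliseconds; the channel name and values below are purely illustrative:

```
ALTER CHANNEL('TO.REMOTE.QM') CHLTYPE(SDR) BATCHSZ(50) BATCHINT(100)
```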
Trial and error in a realistic test environment is often the best way to tune WMQ channels for ultimate performance.
This quick summary doesn’t cover all the ins and outs, and I’m sure this is a much discussed topic and other people can add info and provide links, but this hopefully gives you a starting point.
If you are concerned the delays you are experiencing are well above and beyond the type described above, then this could be an issue best discussed with IBM – through customer support.
March 21, 2007 at 7:38 am
John Manning
Hello Peter and thanks for the information.
You wrote: …multiple channels is often the correct answer.
Are there benefits in defining a listener port for each channel or is it better to have one listener running with multiple channels?
March 21, 2007 at 1:36 pm
peterbroadhurst
I can’t personally see much benefit from a performance perspective – two channels (even started on the same listener) have separate TCP/IP sockets once started.
The only possibility I can think of is if you have lots and lots of queue managers, or client connections, connecting and disconnecting to one queue manager. You might get important channels to start quicker with a second listener dedicated to just those channels.
I’d expect the most common reason for multiple listeners is network topology. For example, one listening on a firewall-opened port on one network, and one listening on a different firewall-opened port on another network.
March 21, 2007 at 2:26 pm
John Manning
Thanks again.
We will implement the multi channel concept (on the same listener) and then try to determine which and why our messages seem to periodically block on the transmit queue. In some cases the messages hang around for minutes with absolutely no TCP traffic (we ran a tcp trace) on the channel then bang, the queue is emptied.
March 29, 2007 at 7:51 am
John Manning
Interesting…. As I mentioned above, our transmit queue was hanging at times of high volume – sometimes for minutes. As it turns out, the situation started after the other queue manager was reset and ours wasn’t. After resetting our queue manager (stop and start) the problem hasn’t reoccurred.
Is it possible that the channel was in some sort of undefined state?
And if that was the case, where would I see the error?
March 30, 2007 at 2:06 am
Channels - status and troubleshooting « a Hursley view on WebSphere MQ
[…] March 30th, 2007 in webspheremq by peterbroadhurst A February post on listeners lead on to some discussion on the operation of channels, and diagnosing issues. I thought this […]
March 30, 2007 at 2:10 am
peterbroadhurst
Hi John – thought your question could be interesting to others, so have put my thoughts in a new entry.
March 30, 2007 at 10:24 am
John Manning
Hello Peter,
Thanks a lot for the information. I will read through the referenced documents to see what I can find to analyze our situation. –Even though we are currently running 5.3 (CSD12) and the documents are for V6.0, I’m sure things haven’t changed that much.
At first glance…
I didn’t see any errors in the /var/mqm/errors and I unfortunately can’t see anything in the /var/mqm/qmgrs//errors other than the continuous paired AMQ7467 & 68 messages (which are printed in 1 sec intervals).
Thanks again –John
August 12, 2007 at 3:31 am
Paula Ayala
Hello, I am a Tivoli Consultant from Mexico, trying to implement Omegamon XE for MQ monitoring solution at a customer. The problem is that he never told us that he has MQ version 5.3 and he wants to monitor listener ports, and know how he can relate them to each Channel? (windows platform)
Do you know if the information you provided on listener ports is also valid for MQ 5.3? If not, do you have any idea how can I achieve this?
Thanks in advance for your help,
Paula
January 15, 2008 at 1:04 pm
zdenek
Thanks for this article, it helped me resolve my problem.