Queue Manager performance tuning is probably one of the most important things to get right when using WebSphere MQ. Sometimes it is not easy to figure out what settings would work best for your install. In this post I will take you through some of the performance settings you can use when creating your queue managers.

Disclaimer: use this information at your own risk; I accept no responsibility for anything that goes wrong with your installation.

The options that I typically use for a performance setup are:

  • -h – the maximum number of handles that an application can have open at any one time.
  • -lc – use circular logging for persistent messaging.
  • -lf – the size of each log file, in units of 4KB pages.
  • -lp – the number of primary log files to use.

Typically I will set -h to around 50000, which is more than my tests need but ensures that I am not constrained by the default of 256. Using circular logging is advised, but be warned that long-running transactions that span multiple log files can cause performance degradation. The default log file size is 1024 units of 4KB, which works out at 4MB per log file. I typically set this value to 16384, which gives me a log file size of 64MB, big enough for my performance stress tests. -lp has a default of 3; I increase this to 16, giving me 16 primary log files, each 64MB in size.
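Putting those options together, the creation command I have in mind looks something like the following sketch (the queue manager name PERF.QM is just a placeholder for your own):

    crtmqm -h 50000 -lc -lf 16384 -lp 16 PERF.QM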

The above gives us a queue manager that can handle all the clients I am going to throw at it and allows a large amount of space for persistent messages in the log files. Once the queue manager is created, we can edit parameters in the qm.ini file (or the Windows Registry) to get actual performance gains over a default installation.

There are a couple of values you can add to the TuningParameters stanza of the qm.ini file: DefaultQBufferSize and DefaultPQBufferSize. The default nonpersistent queue buffer size (DefaultQBufferSize) is 64KB per queue. The maximum you can set this value to is 100MB, although I strongly advise you to make sure that you have enough real memory, as each queue could then consume up to 100MB of real memory (which could crash your machine!). For my performance measurements we use a much more realistic 1MB per queue, which provides ample space for our performance tests. To reiterate: setting this value in qm.ini affects every queue, so make sure you have enough memory for the value you choose! Changes to qm.ini only take effect after a queue manager restart.

The equivalent setting for persistent messages, DefaultPQBufferSize, works in exactly the same way: your messages are still copied to the log, but if a client requests a message that is still in the buffer, it is read from there rather than re-read from the logs.
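As a rough sketch of what this looks like in qm.ini, assuming the buffer sizes are specified in bytes (so the 1MB I use is 1048576):

    TuningParameters:
       DefaultQBufferSize=1048576
       DefaultPQBufferSize=1048576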

In the Channels stanza of qm.ini, MQIBindType can be set to MQIBindType=FASTPATH, which I understand from a technical perspective means that the channel processes no longer run separately from the queue manager processes. From a performance perspective my measurements show that this gives significant savings in CPU and memory resources compared to the default. The downside is that User Exits (if used) could corrupt the queue manager's memory, and it is also harder to diagnose problems with the channels. But from a pure performance perspective, this is a good option.
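In qm.ini the stanza looks like this:

    Channels:
       MQIBindType=FASTPATH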

No doubt there are other parameters we could tune, such as increasing the maximum number of channels and the maximum number of active channels, but the parameters above are the key ones that give my performance tests a big boost over the default settings when using JMS client applications.
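For completeness, those channel limits also live in the Channels stanza; the values below are arbitrary examples for illustration rather than recommendations:

    Channels:
       MaxChannels=5000
       MaxActiveChannels=5000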

Update: Finished the post – see comments below.
