
COBOL Program Performance Tuning Tips

Posted: Wed Nov 22, 2006 12:08 pm
by Krishna
Hi All,


This thread is for discussing COBOL performance tuning tips.
Members are requested to participate.


Regards,
Krishna
www.geocities.com/srcsinc
www.ibmmainframeguru.com
www.jacharya.com

COBOL Performance Tuning Tips

Posted: Wed Nov 22, 2006 12:18 pm
by Madhusudana Reddy Mandli
Hi,

The tips below may be useful to members.

COBOL Performance Tuning Tips -



If at all possible, avoid doing a sort within a COBOL program; COBOL sorts are very inefficient. If you must do a sort in a COBOL program, specifying the FASTSRT compiler option may speed up the sorting process.



Rounding a result (the ROUNDED phrase) can take longer than the calculation itself, so avoid rounding numeric values unless the result really needs it.
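
For example (field names are invented for illustration), the ROUNDED phrase below is what adds the extra processing; drop it when the result does not actually need rounding:

Code: Select all

      *    ROUNDED RESULT - COSTS EXTRA PROCESSING
           COMPUTE WS-NET-PAY ROUNDED = WS-HOURS * WS-RATE
      *    TRUNCATED RESULT - CHEAPER WHEN ROUNDING IS NOT REQUIRED
           COMPUTE WS-NET-PAY = WS-HOURS * WS-RATE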


Thanks,
Madhu

Posted: Wed Nov 22, 2006 5:31 pm
by justaprogrammer
Madhusudana Reddy Mandli wrote: If at all possible, avoid doing a sort within a COBOL program; COBOL sorts are very inefficient. If you must do a sort in a COBOL program, specifying the FASTSRT compiler option may speed up the sorting process.
I agree with what Madhu has advised here, with a small variation:
if you are using an INPUT or OUTPUT PROCEDURE, the FASTSRT compiler option will not improve performance. In such cases it is better to use DFSORT control statements such as INREC, OUTREC, SUM, SKIPREC, INCLUDE or OMIT, and STOPAFT, and to place them under the ddname SORTCNTL or IGZSRTCD.
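
As a rough sketch of that idea (the file names, sort key, and DFSORT field positions below are made up for illustration), the SORT is coded with USING/GIVING instead of an INPUT PROCEDURE, and the record selection the procedure would have done is handed to DFSORT through control statements under SORTCNTL:

Code: Select all

      *    COBOL SIDE - NO INPUT/OUTPUT PROCEDURE, SO FASTSRT CAN
      *    LET DFSORT HANDLE THE SORT I/O DIRECTLY
           SORT SORT-WORK-FILE
               ON ASCENDING KEY SW-ACCOUNT-NO
               USING INPUT-FILE
               GIVING OUTPUT-FILE.

      //*  JCL SIDE - DFSORT CONTROL STATEMENTS FOR THE SORT INVOKED
      //*  BY THE COBOL PROGRAM
      //SORTCNTL DD *
        INCLUDE COND=(1,2,CH,EQ,C'US')     KEEP ONLY 'US' RECORDS
        INREC FIELDS=(1,80)                TRIM THE RECORD TO 80 BYTES
        OPTION STOPAFT=100000              STOP AFTER 100,000 RECORDS
      /*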

You can also go through this document to learn more about performance tuning of your COBOL programs.
http://publibz.boulder.ibm.com/cgi-bin/ ... 0617172746

Posted: Mon Jun 30, 2008 11:47 am
by Captain Nero
Hi,

Thanks for your advice. Can you please give a short example of how to use these DFSORT control statements with COBOL? It would be a great help.

Regards,

Posted: Fri Jan 09, 2009 11:26 am
by Som_TCS
Captain Nero wrote: Hi,

Thanks for your advice. Can you please give a short example of how to use these DFSORT control statements with COBOL? It would be a great help.

Regards,
Here are some performance tuning tips:

1. When performing arithmetic, always use signed numeric fields. COBOL performs faster with signed fields than with unsigned fields.

2. When writing to variable-length blocked sequential files, use the APPLY WRITE-ONLY clause for the file or the AWO compiler option. This can reduce the number of calls to Data Management Services to handle the I/O.

3. If you use SEARCH in COBOL, it is usually better to use SEARCH ALL (a binary search) on a table defined with an ASCENDING KEY - see the sketch after this list.

4. Using indexes to address a table is more efficient than using subscripts, since the index already contains the displacement from the start of the table and does not have to be calculated at run time.
Performance considerations for indexes vs subscripts (PIC S9(8)):
using binary data items (COMP) to address a table is 56% slower than using indexes
using decimal data items (COMP-3) to address a table is 426% slower than using indexes
using DISPLAY data items to address a table is 680% slower than using indexes

5. For loop control variables, use binary (COMP) data items.
Performance considerations for loop control variables (PIC S9(8)):
using a decimal (COMP-3) is 320% slower than using binary (COMP)
using a DISPLAY is 890% slower than using binary (COMP)

6. Use BLOCK CONTAINS 0 RECORDS in the FD and BLKSIZE=0 in the DCB if you are accessing the file sequentially, so that the system can select an optimal block size.
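
To illustrate points 3 and 4, here is a small sketch (the table, field names, and sizes are invented; the table must be loaded in WS-RATE-CODE order for SEARCH ALL to work correctly):

Code: Select all

       WORKING-STORAGE SECTION.
       01  WS-RATE-TABLE.
           05  WS-RATE-ENTRY OCCURS 100 TIMES
                   ASCENDING KEY IS WS-RATE-CODE
                   INDEXED BY RATE-IDX.
               10  WS-RATE-CODE        PIC X(04).
               10  WS-RATE-VALUE       PIC S9(5)V99 COMP-3.
       01  WS-SEARCH-CODE              PIC X(04).
       01  WS-CURRENT-RATE             PIC S9(5)V99 COMP-3.
      * ...
       PROCEDURE DIVISION.
      *    BINARY SEARCH (SEARCH ALL) DRIVEN BY THE TABLE INDEX -
      *    NO SUBSCRIPT ARITHMETIC AT RUN TIME
           SEARCH ALL WS-RATE-ENTRY
               AT END
                   DISPLAY 'CODE NOT FOUND: ' WS-SEARCH-CODE
               WHEN WS-RATE-CODE (RATE-IDX) = WS-SEARCH-CODE
                   MOVE WS-RATE-VALUE (RATE-IDX) TO WS-CURRENT-RATE
           END-SEARCH.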

There are some compiler options which affect run-time performance. Can anyone shed some light on those? I am not sure of them.

Anyway guys, enjoy. :D

Posted: Thu Apr 23, 2009 9:42 pm
by prm
Compiler options also play a significant role in performance tuning of COBOL programs.

Many compiler options have far-reaching performance implications for a program at run time, especially the ARITH, AWO, DYNAM, FASTSRT, NUMPROC, OPTIMIZE, RENT, SSRANGE, TEST, THREAD, and TRUNC options.

Below is an explanation of each of the compiler options mentioned above and how it affects performance (a sketch at the end of the list shows one way of setting several of them in the source):

ARITH - EXTEND or COMPAT :
The ARITH compiler option controls the maximum number of digits allowed for numeric variables in your program. With ARITH(EXTEND) the maximum number of digits is 31 (slower); with ARITH(COMPAT) the maximum number of digits is 18 (faster).

AWO or NOAWO :
APPLY WRITE-ONLY processing for physical sequential files with VB format.
With APPLY WRITE-ONLY, the file buffer is written to the output device when there is not enough space in the buffer for the next record. Without APPLY WRITE-ONLY, the file buffer is written to the output device when there is not enough space in the buffer for a maximum-size record.
If the application has a large variation in the size of the records to be written, using APPLY WRITE-ONLY can result in a performance saving, since it generally results in fewer I/O calls.
NOAWO is the default.
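
As a small sketch of where the source-level equivalent goes (the file name and ddname are made up), APPLY WRITE-ONLY is coded in the I-O-CONTROL paragraph when you want this behavior for one file rather than for every eligible file via the AWO option:

Code: Select all

       ENVIRONMENT DIVISION.
       INPUT-OUTPUT SECTION.
       FILE-CONTROL.
           SELECT REPORT-FILE ASSIGN TO RPTOUT.
       I-O-CONTROL.
           APPLY WRITE-ONLY ON REPORT-FILE.
      *    REPORT-FILE WOULD BE DESCRIBED IN THE FILE SECTION WITH
      *    RECORDING MODE V AND BLOCK CONTAINS 0 RECORDS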

DATA(24) or DATA(31) :
Specifies whether reentrant program data areas reside above or below the 16-MB line. With DATA(24) the data areas of reentrant programs must reside below the 16-MB line; with DATA(31) they can reside above it.
DATA(31) is the default.

DYNAM or NODYNAM :
DYNAM changes the behavior of CALL literal statements so that subprograms are loaded dynamically at run time; the call path length is longer (slower). With NODYNAM, CALL literal statements cause subprograms to be statically link-edited into the load module; the call path length is shorter (faster).
NODYNAM is the default.

FASTSRT or NOFASTSRT :
FASTSRT specifies that the IBM DFSORT licensed program performs the SORT or MERGE input/output (faster). NOFASTSRT specifies that Enterprise COBOL performs the SORT or MERGE input/output (slower).
NOFASTSRT is the default.

NUMPROC - NOPFD, MIG, or PFD :
Controls the handling of packed-decimal and zoned-decimal signs as follows:
With NUMPROC(NOPFD), sign fix-up processing is done for all references to these numeric data items. With NUMPROC(MIG), sign fix-up processing is done only for receiving fields (not for sending fields) of arithmetic and MOVE statements. With NUMPROC(PFD), the compiler assumes that the data has the correct sign and bypasses this sign fix-up processing.
For performance-sensitive applications, NUMPROC(PFD) is recommended when possible. NUMPROC(NOPFD) is the default.

OPTIMIZE(STD), OPTIMIZE(FULL), or NOOPTIMIZE :
Optimizes the object program. OPTIMIZE has the suboptions STD and FULL. OPTIMIZE(FULL) provides improved run-time performance over both the OS/VS COBOL and VS COBOL II OPTIMIZE options, because the compiler discards unused data items and does not generate code for their VALUE clauses.
NOOPTIMIZE is generally used while a program is being developed, when frequent compiles are necessary. NOOPTIMIZE also makes it easier to debug a program, since code is not moved.
NOOPTIMIZE is the default.

RENT or NORENT :
Using the RENT compiler option causes the compiler to generate some additional code to ensure that the program is reentrant.
On average, RENT performs about the same as NORENT.
RENT is the default.

RMODE - AUTO, 24, or ANY :
Allows NORENT programs to have RMODE(ANY).
When NORENT is used, the RMODE option controls where the WORKING-STORAGE resides. With RMODE(24), the WORKING-STORAGE is below the 16-MB line; with RMODE(ANY), the WORKING-STORAGE can be above the 16-MB line.
RMODE(AUTO) is the default.

SSRANGE or NOSSRANGE :
SSRANGE checks the validity of subscript, index, and reference-modification references at run time. It is slower than NOSSRANGE.
NOSSRANGE is the default.

TEST or NOTEST :
TEST produces object code usable by the Debug Tool product. It is slower than NOTEST.
NOTEST is the default.

THREAD or NOTHREAD :
THREAD enables a COBOL program to run in a run unit with multiple POSIX threads or PL/I tasks. It is slower than NOTHREAD.
NOTHREAD is the default.

TRUNC - BIN, STD, or OPT :
Controls the truncation of final results assigned to binary (COMP) receiving fields.
TRUNC(STD) truncates numeric fields according to the PICTURE specification of the binary receiving field. TRUNC(OPT) truncates numeric fields in the manner that is most optimal for performance. TRUNC(BIN) truncates binary fields based on the storage they occupy.
On average the performance ranking (fastest to slowest) is:
TRUNC(OPT) > TRUNC(STD) > TRUNC(BIN)
TRUNC(STD) is the default.
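
As one possible illustration (the program name and the particular option mix are invented, not a recommendation, and your installation must allow PROCESS/CBL statements), several of these options can be set in the source itself with CBL statements so they do not depend on the compile JCL:

Code: Select all

       CBL ARITH(COMPAT),AWO,NODYNAM,FASTSRT,NUMPROC(PFD)
       CBL OPTIMIZE(FULL),TRUNC(OPT),NOSSRANGE,NOTEST
      *    THE CBL (PROCESS) STATEMENTS ABOVE MUST COME BEFORE THE
      *    IDENTIFICATION DIVISION; THE OPTION MIX IS ILLUSTRATIVE
       IDENTIFICATION DIVISION.
       PROGRAM-ID. OPTDEMO.
       PROCEDURE DIVISION.
           DISPLAY 'COMPILED WITH PERFORMANCE-ORIENTED OPTIONS'
           GOBACK.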


Regards
PRM

Posted: Fri Apr 24, 2009 12:00 pm
by Natarajan
Thanks PRM, that is helpful information.

Posted: Thu Feb 04, 2010 5:16 am
by basujanm2310
COBOL stands for COmmon Business Oriented Language.

If you use SEARCH in COBOL, it is better to use SEARCH ALL (binary search).

Thanks a lot.

Posted: Thu Feb 04, 2010 6:28 pm
by sachininx
Thanks everybody for replying :D

Posted: Sun Apr 18, 2010 11:55 pm
by chaat
I had a situation where we loaded a large VSAM file into memory to be searched by an online CICS application. We encountered problems trying to emulate the VSAM START verb with the COBOL SEARCH ALL verb when the key does not exist in the array. To get around this we had to write our own binary search in COBOL.

The array which we are searching contains about 240,000 entries.

We found that the hand-coded binary search was significantly more efficient in terms of CPU usage than the COBOL "SEARCH ALL" verb.

The code was modeled after Knuth's binary search; see this link:

http://www.z390.org/contest/p21/P21DW1.TXT

The key to maximizing the performance of the code below was to use PIC S9(8) COMP for all the binary fields; this avoids the check for binary overflow in the generated code. We also used the compiler option TRUNC(OPT) to avoid generating code that checks for truncation.

Note: the IF statement after the PERFORM loop checks for a "not found" condition. To emulate the VSAM START command we need to position the index to the first row whose key is >= the search key.

Code: Select all

      * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
      *                                                               *
      *    NOTICE THAT BECAUSE WE DO SEARCHES FOR KEYS WHICH ARE      *
      *    NOT PRESENT IN THE CACHE, WE HAVE TO USE OUR OWN HAND      *
      *    CODED BINARY SEARCH. THE REASON FOR THIS IS THAT THE       *
      *    SEARCH ALL VERB DOES NOT SET THE INDEX AFTER A NOT         *
      *    FOUND CONDITION TO POINT TO THE LAST ROW IT CHECKED.       *
      *                                                               *
      * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *

           MOVE +1                     TO BIN-LOW.
           MOVE BCNTR-TCA1-ENTRIES-USED
                                       TO BIN-HIGH.

           PERFORM WITH TEST AFTER
               UNTIL BIN-LOW > BIN-HIGH
               COMPUTE BIN-RANGE = BIN-HIGH - BIN-LOW
               COMPUTE BIN-MID = (BIN-RANGE / 2) + BIN-LOW
               SET TCA1-INDEX          TO BIN-MID
               EVALUATE TRUE
                   WHEN TCA1-KEY (TCA1-INDEX) = SRCH-TCA1-KEY
                       MOVE 1          TO BIN-RANGE
                       COMPUTE BIN-LOW = BIN-HIGH + 1
                   WHEN TCA1-KEY (TCA1-INDEX) < SRCH-TCA1-KEY
                       COMPUTE BIN-LOW  = BIN-MID + 1
                   WHEN OTHER
                       COMPUTE BIN-HIGH  = BIN-MID - 1
               END-EVALUATE
           END-PERFORM.


           IF TCA1-KEY (TCA1-INDEX) < SRCH-TCA1-KEY
               SET TCA1-INDEX          UP BY +1
           END-IF.
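
For reference, the working-storage that the snippet assumes would look something like this (the names come from the loop above; the post only says the binary fields are PIC S9(8) COMP, so the key length and entry layout are invented placeholders):

Code: Select all

       01  BIN-LOW                     PIC S9(8) COMP.
       01  BIN-HIGH                    PIC S9(8) COMP.
       01  BIN-MID                     PIC S9(8) COMP.
       01  BIN-RANGE                   PIC S9(8) COMP.
       01  BCNTR-TCA1-ENTRIES-USED     PIC S9(8) COMP.
       01  SRCH-TCA1-KEY               PIC X(16).
       01  TCA1-CACHE.
      *    KEY AND DATA LENGTHS BELOW ARE PLACEHOLDERS
           05  TCA1-ENTRY OCCURS 240000 TIMES
                   INDEXED BY TCA1-INDEX.
               10  TCA1-KEY            PIC X(16).
               10  TCA1-DATA           PIC X(64).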

Posted: Mon Apr 19, 2010 5:54 pm
by Anuj Dhawan
Thanks for sharing Chuck, that's nice. I gotta try it out...:)

Posted: Mon Apr 19, 2010 7:20 pm
by Natarajan
Very good information shared. Thanks, Chuck.

Posted: Fri Mar 25, 2011 2:53 pm
by bharani
Hi all,

I have one doubt.

I can see a lot of dead code in one of the programs. (By "dead code" I mean both commented-out code and unused paragraphs.)

Will that affect performance? By removing this code, is it possible to improve performance or to save anything else?

Please advise.

Thanks,
Barani

Posted: Fri Mar 25, 2011 11:26 pm
by dbzTHEdinosauer
Performance, viewed at the module level: it does not have any effect. (I think you need to read and think a little.)

As far as the machine and other tasks are concerned: if you do not have the optimizer set to discard code that can never logically execute, then the module size is larger, and that affects the performance of the machine in general - it is more work to load a large program than a small one.

Now, if you are trying to use performance as an argument for keeping modules 'reasonably clean', forget it.

If your shop has a good repository which allows versioning, so that one can go back a few versions and pick up a routine or so, then there should be no argument against removing the garbage and making the source easier for a programmer to deal with.

And that is the only benefit.

(And smaller source files for the compiler to deal with, but that is nothing with the speed of today's computers.)
