
DataStage Engine Performance Tuning: UVCONFIG

Improve the performance of the DataStage engine by adjusting the tunable parameters in the UVCONFIG file.

The UVCONFIG file contains several parameters that you can configure to improve performance and avoid common runtime problems.

The most commonly used parameters in the UVCONFIG file are the following:
MFILES
This parameter defines the size of the server engine (DSEngine) rotating file pool. This is a per-process pool for files, such as sequential files, that are opened by the DataStage server runtime. It does not include files opened directly at the OS level by the parallel engine (PXEngine running osh).
The server engine will logically open and close files at the DataStage application level and physically close them at the OS level when the need arises.
Increase this value if DataStage jobs use a lot of files. Generally, a value of around 250 is suitable. If the value is set too low, then performance issues may occur, as the server engine will make more calls to open and close at the physical OS level in order to map the logical pool to the physical pool.
Note: The OS parameter nofiles must be set higher than MFILES. Ideally, nofiles should be at least 512. With nofiles at 512, a DataStage process can open up to 512 - (MFILES + 8) additional files.
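The headroom arithmetic above can be sketched as follows. The MFILES value of 250 is an assumed setting for illustration, not a reading from any particular system:

```shell
# Hedged sketch: compute the file-handle headroom a DataStage process would
# have, given an assumed MFILES value and the current nofiles (ulimit) limit.
mfiles=250                       # assumed UVCONFIG MFILES setting
nofiles=$(ulimit -n)             # per-process open-files limit (nofiles)
[ "$nofiles" = "unlimited" ] && nofiles=1048576   # normalize for arithmetic
headroom=$((nofiles - mfiles - 8))
echo "nofiles=$nofiles MFILES=$mfiles remaining handles=$headroom"
```

If the headroom comes out small or negative, raise nofiles before raising MFILES.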
On most UNIX systems, the proc file system can be used to monitor the file handles opened by a given process; for example:
ps -ef|grep dsrpcd

root     23978     1  0 Jul08 ?        00:00:00 /opt/ds753/Ascential/DataStage/DSEngine/bin/dsrpcd

ls -l /proc/23978/fd

lrwx------  1 root dstage 64 Sep 25 08:24 0 -> /dev/pts/1 (deleted)
l-wx------  1 root dstage 64 Sep 25 08:24 1 -> /dev/null
l-wx------  1 root dstage 64 Sep 25 08:24 2 -> /dev/null
lrwx------  1 root dstage 64 Sep 25 08:24 3 -> socket:[12928306]
The dsrpcd process (23978) has four files open.
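The same /proc technique can be scripted to count descriptors rather than list them. This sketch is Linux-specific and inspects the current shell for illustration; in practice, substitute the dsrpcd PID found with ps:

```shell
# Sketch (Linux /proc only): count the file descriptors a process has open.
# We inspect the current shell ($$) here; replace with the dsrpcd PID in use.
pid=$$
fd_count=$(ls /proc/"$pid"/fd 2>/dev/null | wc -l)
echo "process $pid has $fd_count open file descriptors"
```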
T30FILE
This parameter determines the maximum number of dynamic hash files that can be opened system-wide on the DataStage system. If this value is too low, expect to find an error message similar to 'T30FILE table full'.
The following engine command, executed from $DSHOME, shows the number of dynamic files in use:
echo "`bin/smat -d|wc -l` - 3"|bc
Use this command to assist with tuning the T30FILE parameter. 
Every running DataStage job requires at least 3 slots in this table. (RT_CONFIG, RT_LOG, RT_STATUS). Note, however, that multi-instance jobs share slots for these files, because although each job run instance creates a separate file handle, this just increments a usage counter in the table if the file is already open to another instance.
Note that on AIX the T30FILE value should not be set higher than the system setting ulimit -n.
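A rough sizing for T30FILE follows from the three-slots-per-job rule above. The concurrency figures below are assumptions for illustration, not recommendations from the original text:

```shell
# Hedged sizing sketch for T30FILE. Every running job needs at least 3 slots
# (RT_CONFIG, RT_LOG, RT_STATUS), plus slots for any dynamic hash files the
# jobs themselves open. The input figures are assumed example values.
jobs=100                      # assumed peak concurrent job count
hash_files=50                 # assumed dynamic hash files opened by jobs
t30file=$((jobs * 3 + hash_files))
echo "suggested minimum T30FILE=$t30file"
```

Compare the result against the live count reported by the smat command before settling on a value.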

GLTABSZ
This parameter defines the size of a row in the group lock table. Tune this value if the number of group locks in a given slot is getting close to the value defined.
Use the LIST.READU EVERY command from the server engine shell to assist with monitoring this value. LIST.READU lists the active file and record locks; the EVERY keyword lists the active group locks in addition.
For example, with a Designer client and a Director client both logged in to a project named “dstage0”:
Active Group Locks:                                  Record Group Group Group
Device.... Inode..... Netnode Userno Lmode G-Address. Locks ...RD ...SH ...EX
838222719  2039334646       0   5620 62 IN        800     1     0     0     0

Active Record Locks:
Device.... Inode..... Netnode Userno Lmode   PID Item-ID.....................
 838222719 2039334646       0  64332 62 RL  1204 dstage0&!DS.ADMIN!&
 838222719 2039334646       0  62412 62 RL  3124 dstage0&!DS.ADMIN!&
Device: A number that identifies the logical partition of the disk where the file system is located.
Inode: A number that identifies the file that is being accessed.
Netnode: A number that identifies the host from which the lock originated. 0 indicates a lock on the local machine, which will usually be the case for DataStage. If other than 0, then on UNIX it is the last part of the TCP/IP host number specified in the /etc/hosts file; on Windows it is either the last part of the TCP/IP host number or the LAN Manager node name, depending on the network transport used by the connection.
Userno: The phantom process that set the lock.
Pid: A number that identifies the controlling process.
Item-ID: The record ID of the locked record.
Lmode: The number assigned to the lock, and a code that describes its use.
G-Address: Logical disk address of the group, or its offset in bytes from the start of the file, in hexadecimal.
Record Locks: The number of locked records in the group.
Group RD: Number of readers in the group.
Group SH: Number of shared group locks.
Group EX: Number of exclusive group locks.

When the report describes file locks, it contains the following Lmode codes:
FS, IX, CR: Shared file locks.
FX, XU, XR: Exclusive file locks.

When the report describes group locks, it contains the following Lmode codes:
EX: Exclusive lock.
SH: Shared lock.
RD: Read lock.
WR: Write lock.
IN: System information lock.

When the report describes record locks, it contains the following Lmode codes:
RL: Shared record lock.
RU: Update record lock.
RLTABSZ
This parameter defines the size of a row in the record lock table. From a DataStage job point of view, this value affects the number of concurrent DataStage jobs that can be executed, and the number of DataStage Clients that can connect.
Use the LIST.READU command from the DSEngine shell to monitor the number of record locks in a given slot. With one Director client logged in to a project named “dstage0”, and two instances of a job in that project that are running, the active record locks are similar to the following example:
Active Record Locks:
Device.... Inode..... Netnode Userno Lmode   Pid Item-ID.............
 838222719 2039334646       0  64332 62 RL  1204 dstage0&!DS.ADMIN!&
 838222719 2039334646       0  62128 62 RL  3408 dstage0&!DS.ADMIN!&
 838222719 2039334646       0  65252 62 RL   284 dstage0&!DS.ADMIN!&
 304877956  328255620       0  62128 62 RL  3408 RT_CONFIG456
 304877956  328255620       0  65252 62 RL   284 RT_CONFIG456
In the above report, Item-ID=RT_CONFIG456 identifies that the running job is an instance of job number 456, whose compiled job file is locked while the instance is running so that, for example, it cannot be re-compiled in that time. A job’s number within its project can be seen in the Director job status view, the detail dialog, for a particular job.
The unnamed column between Userno and Lmode is the row number within the record lock table. Each row can hold RLTABSZ locks. In the above example, 3 slots out of 75 (the default value for RLTABSZ) have been used for row 62. When the number of entries for a given row gets close to the RLTABSZ value, it is time to consider re-tuning the system.
Jobs can fail to start, or generate -14 errors, if the RLTABSZ limit is being reached.
DataStage Clients may see an error message similar to 'DataStage Project locked by Administrator' when attempting to connect. Note that the error message can be misleading - it means in this case that a lock cannot be acquired because the lock table is full, and not because another user already has the lock.
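The per-row counting described above can be automated against a captured report. This sketch inlines the three sample lock lines from the example; in practice, feed it the live LIST.READU output with the header lines stripped:

```shell
# Sketch: tally active record locks per lock-table row from a captured
# LIST.READU report. Field 5 is the row number within the record lock table;
# any row approaching RLTABSZ entries signals that re-tuning is due.
locks=$(awk '{rows[$5]++} END {for (r in rows) print "row " r ": " rows[r] " locks"}' <<'EOF'
 838222719 2039334646       0  64332 62 RL  1204 dstage0&!DS.ADMIN!&
 838222719 2039334646       0  62128 62 RL  3408 dstage0&!DS.ADMIN!&
 838222719 2039334646       0  65252 62 RL   284 dstage0&!DS.ADMIN!&
EOF
)
echo "$locks"
```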
MAXRLOCK
This parameter must always be set to the value of RLTABSZ – 1.
Each DSD.RUN process takes a record lock on a key name &!DS.ADMIN!& of the UV.ACCOUNT file in $DSHOME (as seen in the examples above). Each DataStage client connection (for example, Designer, Director, Administrator, dsjob command) takes this record lock as well. This is the mechanism by which DataStage determines whether operations such as project deletion are safe; such operations cannot proceed while the lock is held by any process.
MAXRLOCK needs to be set to accommodate the maximum number of jobs and sequences, plus client connections, in use at any given time, and RLTABSZ needs to be set to MAXRLOCK + 1. Keep in mind that increasing RLTABSZ greatly increases the amount of memory needed by the disk shared memory segment.
Customer Support has reported in the past that using settings of 130/130/129 (for RLTABSZ/GLTABSZ/MAXRLOCK, respectively) work successfully on most customer installations. There have been reports of high-end customers using settings of 300/300/299, so this is environment specific.
If sequencers or multi-instance jobs are used, start with the recommended settings of 130/130/129, and increase to 300/300/299 if necessary.
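The sizing rule can be sketched numerically. The concurrency figures below are assumptions for illustration, not values from the original text; only the RLTABSZ = MAXRLOCK + 1 relationship comes from the rule above:

```shell
# Hedged sizing sketch. Each running job/sequence and each client connection
# holds an &!DS.ADMIN!& record lock, so size MAXRLOCK to the expected peak
# plus a margin, then set RLTABSZ to MAXRLOCK + 1.
jobs=80; clients=15; margin=30        # assumed peak concurrency + headroom
maxrlock=$((jobs + clients + margin))
rltabsz=$((maxrlock + 1))
echo "suggested MAXRLOCK=$maxrlock RLTABSZ=$rltabsz"
```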
Prior to DataStage v8.5 the following settings were pre-defined:
  • MFILES = 150
  • T30FILE = 200
  • GLTABSZ = 75
  • RLTABSZ = 75
  • MAXRLOCK = 74 (75-1)
DataStage v8.5 and later versions have the following settings pre-defined:
  • MFILES = 150
  • T30FILE = 512
  • GLTABSZ = 75
  • RLTABSZ = 150
  • MAXRLOCK = 149 (150-1)
These are the lowest suggested values to accommodate all system configurations, so tuning of these values is often necessary.
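Changed UVCONFIG values only take effect after the engine configuration is regenerated. A typical procedure looks like the following; verify the exact command names against your DataStage version, as this is an administrative sketch rather than a step from the original text:

```shell
# Typical procedure for applying UVCONFIG changes (run as the DataStage
# administrator from $DSHOME; confirm commands for your version).
cd $DSHOME
bin/uv -admin -stop          # stop the engine; no jobs or clients may be active
vi uvconfig                  # edit the tunables, e.g. raise RLTABSZ/MAXRLOCK
bin/uv -admin -regen         # regenerate the binary configuration (.uvconfig)
bin/uv -admin -start         # restart the engine
```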
DMEMOFF, PMEMOFF, CMEMOFF, NMEMOFF
These are the shared memory address offset values for each of the four DataStage shared memory segments (Disk, Printer, Catalog, NLS). Depending upon the platform, PMEMOFF, CMEMOFF, and NMEMOFF may need to be increased to allow a large disk shared memory segment to be used.
Where these values are set to 0x0 (on AIX, for example), the OS manages the offsets. Otherwise, PMEMOFF minus DMEMOFF determines the largest possible disk shared memory segment size. Additionally, on Solaris for example, these values may be increased to allow a greater heap size for the running DataStage job.
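The offset arithmetic can be illustrated with hypothetical values. The two offsets below are example figures chosen for the sketch, not recommended settings for any platform:

```shell
# Illustrative arithmetic only: the gap between DMEMOFF and PMEMOFF bounds
# the largest disk shared memory segment. Offsets here are hypothetical.
dmemoff=$((0x90000000))
pmemoff=$((0xA0000000))
max_disk_seg=$(( (pmemoff - dmemoff) / 1024 / 1024 ))
echo "largest disk segment: ${max_disk_seg} MB"
```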
Note that when running the shmtest utility, great care must be taken in interpreting its output. The utility tests the availability of memory that it can allocate at the time it runs, and this is affected by the current uvconfig settings, how much shared memory is already in use, and other activity on the machine at the time.
