© 2009 IBM Corporation
Demystifying Hiperdispatch

Hendrik De Smet
IT Architect, System z Software, [email protected]
GSE z/OS workgroup
17/06/2009
Trademarks

The following are trademarks of the International Business Machines Corporation in the United States and/or other countries.
The following are trademarks or registered trademarks of other companies.
* Registered trademarks of IBM Corporation
* All other products may be trademarks or registered trademarks of their respective companies.
Java and all Java-related trademarks and logos are trademarks of Sun Microsystems, Inc., in the United States and other countries.
Linux is a registered trademark of Linus Torvalds in the United States, other countries, or both.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Microsoft, Windows and Windows NT are registered trademarks of Microsoft Corporation.
Red Hat, the Red Hat "Shadow Man" logo, and all Red Hat-based trademarks and logos are trademarks or registered trademarks of Red Hat, Inc., in the United States and other countries.
SET and Secure Electronic Transaction are trademarks owned by SET Secure Electronic Transaction LLC.
Notes:
Performance is in Internal Throughput Rate (ITR) ratio based on measurements and projections using standard IBM benchmarks in a controlled environment. The actual throughput that any user will experience will vary depending upon considerations such as the amount of multiprogramming in the user's job stream, the I/O configuration, the storage configuration, and the workload processed. Therefore, no assurance can be given that an individual user will achieve throughput improvements equivalent to the performance ratios stated here.
IBM hardware products are manufactured from new parts, or new and serviceable used parts. Regardless, our warranty terms apply.
All customer examples cited or described in this presentation are presented as illustrations of the manner in which some customers have used IBM products and the results they may have achieved. Actual environmental costs and performance characteristics will vary depending on individual customer configurations and conditions.
This publication was produced in the United States. IBM may not offer the products, services or features discussed in this document in other countries, and the information may be subject to change without notice. Consult your local IBM business contact for information on the product or services available in your area.
All statements regarding IBM's future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only.
Information about non-IBM products is obtained from the manufacturers of those products or their published announcements. IBM has not tested those products and cannot confirm the performance, compatibility, or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.
Prices subject to change without notice. Contact your IBM representative or Business Partner for the most current pricing in your geography.
APPN*
CICS*
DB2*
DB2 Connect
DirMaint
e-business logo*
ECKD
Enterprise Storage Server*
ESCON*
FICON*
GDPS*
Geographically Dispersed Parallel Sysplex
HiperSockets
HyperSwap
IBM*
IBM eServer
IBM e(logo)server*
IBM logo*
IMS
Language Environment*
MQSeries*
Multiprise*
NetView*
On demand business logo
OS/390*
Parallel Sysplex*
PR/SM
Processor Resource/Systems Manager
RACF*
Resource Link
RMF
S/390*
Sysplex Timer*
System z9
TotalStorage*
Virtualization Engine
VM/ESA*
VSE/ESA
VTAM*
WebSphere*
z/Architecture
z/OS*
z/VM*
z/VSE
zSeries*
Agenda

• Dispatching in z/OS and LPAR
  – What is the problem, and the rationale for Hiperdispatch
• Hiperdispatch Function
  – New terminology for processors
  – What are processor shares
  – How this can be observed through RMF
  – What does z/OS WLM do?
  – Special processors
  – User interface
• Hiperdispatch WLM APARs
• Two examples of how Hiperdispatch works together with other z/OS and PR/SM functions
  – IRD and Group Capping
• Hiperdispatch & WLM policy
Next: Coping with Physical Limits

[Chart: industry trends – single CPU speed flattening while n-way capacity and I/O rate / bandwidth keep growing]

• The industry is hitting fundamental physical limits:
  – Size
  – Speed of electromagnetic propagation
  – Heat transfer rates
• Large CPU speed increases are a thing of the past, across the industry
• Capacity increases will increasingly come from higher n-way, more multithreading, and NUMA optimization
• Demand for lower latency will drive co-location of hybrid transaction processing elements

"In terms of size [of transistor] you can see that we're approaching the size of atoms which is a fundamental barrier, …"
Gordon Moore, April 2005*

* Techworld, Operating Systems and Servers News, 13 April 2005
Dispatching: z/OS and PR/SM

• The operating system dispatches work to the next available logical CP
  – Work usually has no affinity to any logical processor
• PR/SM dispatches logical CPs to physical CPs based on weights
  – Typically, multiple LCPs from different LPARs share the same physical CP
  – PR/SM attempts to keep an LCP on a PCP, but there is no guarantee of it
Hiperdispatch: Motivation

• Cache and memory latency on a hypothetical server, compared to the real world:

  1 cycle      Downtown Halle city              1 km
  4 cycles     Halle city center to highway     4 km
  100+ cycles  Halle – Antwerp round trip       120 km
  200+ cycles  Halle – Amsterdam                210 km
  600+ cycles  Halle – Paris round trip         620 km
Hiperdispatch: Motivation …

• Design Objective
  – Keep work as local as possible to a physical processor, to optimize the usage of the processor caches
  – Expected Result
    • Cache reloads occur much less often
    • Cache misses and fetches from other books are avoided as much as possible
• Function: Hiperdispatch
  – Interaction between z/OS and PR/SM to optimize the placement of work units and logical processors on physical processors
  – Consists of 2 parts
    • In z/OS (sometimes referred to as Dispatcher Affinity), because it attempts to create a temporary affinity between work and processors
    • In PR/SM (sometimes referred to as Vertical CPU Management), because it attempts to assign physical processors exclusively to logical processors (as much as possible)
Hiperdispatch: PR/SM

• Optimize the number of logical processors toward the minimum number of physical processors needed
• Based on the share of the logical partition
• Result
  – Of the form N.M, with
    • N = number of physical processors which can be used completely by this partition
    • M = the fraction of a physical processor which must be used to satisfy the share of the partition

    Share(LPARi) = Weight(LPARi) / Σ(j=1..n) Weight(LPARj)
    #PP(LPARi)  = Share(LPARi) × Total_#_of_PP
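The two formulas above can be sketched in a few lines of Python (the function name is mine; weights and processor counts come from the slide-9 example):

```python
def lpar_share(weights, lpar, total_pp):
    """Share(LPARi) = Weight(LPARi) / sum of all weights;
    #PP(LPARi) = Share(LPARi) * Total_#_of_PP."""
    share = weights[lpar] / sum(weights.values())
    return share, share * total_pp

# Slide-9 numbers: 5 shared physical processors, weights 350 and 150
share, pp = lpar_share({"LPAR1": 350, "LPAR2": 150}, "LPAR1", 5)
# share = 0.7, i.e. 3.5 physical processors: 3 High plus one 50% Medium
```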
Hiperdispatch: PR/SM …

• Example
  – Assignment of logical processors to physical processors in Hiperdispatch mode
  – LPAR1
    • 3 physical processors (High Processors)
    • Share of 50% of the 4th processor (Medium Processor)
  – LPAR2
    • 1 physical processor
    • Share of 50% of the 4th processor
• What about the "un-used" share of physical processors?
  – 1.5 for LPAR1 and 3.5 for LPAR2
    • Low Processors (parked = not used)
  – If demand exists AND the other partition does not need its share
    1. Medium processors can use up to all of their physical processors
    2. Low processors can be un-parked and start to use physical processors which are not needed by other partitions
[Chart: distribution of physical processors PP0–PP4 (0–100%) between LPAR1 and LPAR2]

  Partition  LPs  Weight  Share  Share in PPs
  LPAR1        5     350    70%           3.5
  LPAR2        5     150    30%           1.5
  Total              500                  5
1 C P U A C T I V I T Y
PAGE 1
z/OS V1R8 SYSTEM ID SMPX DATE 02/02/2009
CONVERTED TO z/OS V1R10 RMF TIME 09.10.00
-CPU 2097 MODEL 729 H/W MODEL E64 SEQUENCE CODE 00000000000D56B2 HIPERDISPATCH=YES
0---CPU--- ---------------- TIME % ---------------- LOG PROC --I/O INTERRUPTS--
NUM TYPE ONLINE LPAR BUSY MVS BUSY PARKED SHARE % RATE % VIA TPI
0 CP 100.00 93.57 95.03 0.00 100.0 170.3 38.39
1 CP 100.00 96.07 97.05 0.00 100.0 153.9 36.05
2 CP 100.00 94.52 95.65 0.00 100.0 107.2 36.79
3 CP 100.00 94.26 95.34 0.00 100.0 82.47 36.90
4 CP 100.00 92.45 94.11 0.00 100.0 138.4 43.39
5 CP 100.00 95.39 96.63 0.00 100.0 132.3 39.30
6 CP 100.00 93.47 94.66 0.00 100.0 83.12 43.40
7 CP 100.00 93.55 94.82 0.00 100.0 71.77 44.75
8 CP 100.00 89.38 91.71 0.00 100.0 206.5 47.12
9 CP 100.00 94.33 96.00 0.00 100.0 189.2 44.20
A CP 100.00 90.93 92.47 0.00 100.0 128.2 44.77
B CP 100.00 90.57 92.32 0.00 100.0 117.3 46.36
C CP 100.00 91.09 92.70 0.00 100.0 17204 15.92
D CP 100.00 82.66 92.58 0.00 94.2 104.1 48.21
E CP 100.00 42.84 92.08 46.65 0.0 0.00 0.00
F CP 100.00 39.49 92.52 51.33 0.0 0.00 0.00
10 CP 100.00 60.68 92.74 26.50 0.0 0.00 0.00
11 CP 100.00 56.99 94.10 31.33 0.0 0.00 0.00
12 CP 100.00 0.02 ----- 100.00 0.0 0.00 0.00
13 CP 100.00 0.02 ----- 100.00 0.0 0.00 0.00
TOTAL/AVERAGE 74.62 94.15 1394 18889 18.28
Hiperdispatch: RMF Example for Processor Types

• High Processors: CPs 0–C (100.0% share)
• Medium Processor: CP D (94.2% share)
• Low Processors: CPs E–13 – 4 were partially un-parked, 2 were always parked
• Share of the partition in physical processors: 13.94
Hiperdispatch: Processor Share

• High Logical Processor
  – Always 100% share
  – That means it is always re-dispatched to its physical processor whenever it has demand
• Medium and Low Processors
  – Divide the share of the medium processors between them
  – That means the share per processor decreases as more low processors become un-parked
Hiperdispatch: PR/SM Part …

• Optimization for medium share processors
  – If M is too small (M < 50%), the number of medium share processors for the partition is increased by 1 and the number of high share processors is reduced by 1
    • This avoids a logical processor receiving too small a fraction of a physical processor
  – Calculation

    Share(LPARi) = Weight(LPARi) / Σ(j=1..n) Weight(LPARj)
    #PP(LPARi)  = Share(LPARi) × Total_#_of_PP = N.M
    IF M < 0.5 THEN { N_new = N - 1 ; M_new = (1 + M) / 2 }
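Putting the calculation and the M < 0.5 adjustment together, a minimal Python sketch (function name is mine) reproduces the slide-13 split of a weight-333 partition on a 16-way into 4 High plus 2 Medium processors at 66.4% each, which matches the 66.4 share shown in the slide-14 RMF report:

```python
def vertical_config(weight, total_weight, total_pp):
    """Split an LPAR's entitlement of N.M physical processors into
    (num_high, num_medium, share_per_medium). If the fraction M is
    below 50%, one High is traded for a second Medium so that each
    Medium gets (1 + M) / 2 of a physical processor."""
    entitlement = weight / total_weight * total_pp
    n = int(entitlement)          # N whole physical processors
    m = entitlement - n           # fractional part M
    if m == 0:
        return n, 0, 0.0          # no fractional part (but see the annotations slide)
    if m < 0.5:
        return n - 1, 2, (1 + m) / 2
    return n, 1, m

# Weight 333 of 1000 on a 16-way CEC -> 5.33 PPs -> 4 High + 2 Medium @ 66.4%
high, med, med_share = vertical_config(333, 1000, 16)
```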
Hiperdispatch: Processors and Utilizations

[Chart for LPARs R71 and R72: number of High / Med / Un-parked / Parked processors (0–18, left axis) and MVS Busy / CEC utilization in % (0–100, right axis) over the interval 10:56 – 11:07]
Hiperdispatch: Example for Parking and Un-Parking

• Weight = 333 → 5.32 LCPs → 4 High + 2 Medium
  – Partition has very high demand (red line)
• Weight = 667 → 10.67 LCPs → 10 High + 1 Medium
  – Partition has some demand (80–95% MVS Busy)
z/OS V1R9 SYSTEM ID R71 DATE 01/28/2009 INTERVAL 00.59.753
CONVERTED TO z/OS V1R10 RMF TIME 11.02.00
-CPU 2097 MODEL 716 H/W MODEL E26 SEQUENCE CODE 00000000000A73A2 HIPERDISPATCH=YES
0---CPU--- ---------------- TIME % ---------------- LOG PROC --I/O INTERRUPTS--
NUM TYPE ONLINE LPAR BUSY MVS BUSY PARKED SHARE % RATE % VIA TPI
0 CP 100.00 99.50 100.0 0.00 100.0 29.40 0.00
1 CP 100.00 99.88 100.0 0.00 100.0 18.14 0.00
2 CP 100.00 99.83 100.0 0.00 100.0 31.71 0.00
3 CP 100.00 99.78 100.0 0.00 100.0 16.82 0.00
4 CP 100.00 72.24 100.0 0.00 66.4 0.00 0.00
5 CP 100.00 72.30 100.0 0.00 66.4 0.00 0.00
6 CP 100.00 35.16 100.0 46.14 0.0 0.00 0.00
7 CP 100.00 52.22 100.0 24.06 0.0 0.00 0.00
8 CP 100.00 0.00 ----- 100.00 0.0 0.00 0.00
9 CP 100.00 0.00 ----- 100.00 0.0 0.00 0.00
A CP 100.00 0.00 ----- 100.00 0.0 0.00 0.00
B CP 100.00 0.00 ----- 100.00 0.0 0.00 0.00
C CP 100.00 0.00 ----- 100.00 0.0 0.00 0.00
D CP 100.00 0.00 ----- 100.00 0.0 0.00 0.00
E CP 100.00 0.00 ----- 100.00 0.0 0.00 0.00
F CP 100.00 0.00 ----- 100.00 0.0 0.00 0.00
TOTAL/AVERAGE 39.43 100.0 532.8 96.08 0.00
Hiperdispatch: RMF Report Example

• High LCPs: 0–3 (100.0% share)
• Medium LCPs: 4–5 (66.4% share)
• Low un-parked LCPs: 6–7
• Parked LCPs: 8–F
Hiperdispatch: PR/SM – Annotations

• What if there is only 1 High or 1 Medium share processor?
  – The high share processor is converted to a medium share processor
    • So there is never really just 1 High share processor
  – A low share processor is always un-parked
  – The share of the medium processor is then divided equally between the two processors
    • This ensures the system does not starve because only one processor is online
• If "low share" processors exist, there is also ALWAYS at least one medium processor
  – For example, if the previous calculation ends with 2.0 (2 high processors and no medium) AND there is at least one low processor
    • One high processor is converted to a medium processor
      – This is necessary to ensure that the low processors get some share of the shared processor pool when they need to be un-parked
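These two adjustments can be sketched as a small decision function. This is illustrative only: the function and argument names are mine, and the return convention (new high count, new medium count, whether to un-park one low) is an assumption:

```python
def adjust(high, medium, lows_exist):
    """Apply the two annotation rules above.
    Returns (high, medium, unpark_one_low)."""
    if lows_exist and medium == 0:
        # e.g. an entitlement of exactly 2.0 with Lows present:
        # 2 High -> 1 High + 1 Medium, so un-parked Lows have a share pool
        return high - 1, 1, False
    if high + medium == 1:
        # a single processor always runs as Medium; one Low is un-parked
        # and the Medium share is split between the two
        return 0, 1, True
    return high, medium, False
```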
Hiperdispatch: PR/SM – Annotations

• Special Processors
  – Special processors (zAAPs and zIIPs) have their own processor pools
  – PR/SM divides the special processors into the same structure of high, medium and low share processors as it does with regular CPs
    • The mechanism is the same
  – PR/SM also provides this information to z/OS
• LPAR with dedicated processors:
  – All processors are high share processors, and nodes are created as for shared logical processors
  – Hiperdispatch is efficient in this case too:
    • z/OS part – work is re-dispatched on a node of physically closely related processors
Hiperdispatch: Special Processors
C P U A C T I V I T Y
z/OS V1R9 SYSTEM ID R71 DATE 02/15/2009
CONVERTED TO z/OS V1R10 RMF TIME 20.19.00
CPU 2097 MODEL 707 H/W MODEL E26 SEQUENCE CODE 0000000000019FC4 HIPERDISPATCH=YES
---CPU--- ---------------- TIME % ---------------- LOG PROC --I/O INTERRUPTS--
NUM TYPE ONLINE LPAR BUSY MVS BUSY PARKED SHARE % RATE % VIA TPI
0 CP 100.00 24.31 24.58 0.00 100.0 45.62 0.40
1 CP 100.00 26.19 26.48 0.00 100.0 59.65 0.73
2 CP 100.00 23.81 24.08 0.00 100.0 36.48 0.91
3 CP 100.00 21.25 21.87 0.00 50.0 0.83 8.00
4 CP 100.00 0.00 ----- 100.00 0.0 0.00 0.00
5 CP 100.00 0.00 ----- 100.00 0.0 0.00 0.00
6 CP 100.00 0.00 ----- 100.00 0.0 0.00 0.00
TOTAL/AVERAGE 13.65 24.25 350.0 142.6 0.71
7 IIP 100.00 5.75 5.75 0.00 50.0
8 IIP 100.00 5.57 5.56 0.00 0.0
9 IIP 100.00 2.27 ----- 100.00 0.0
A IIP 100.00 0.00 ----- 100.00 0.0
B IIP 100.00 0.00 ----- 100.00 0.0
TOTAL/AVERAGE 2.72 6.69 50.0
• CPs: 0–2 High, 3 Medium (50% share), 4–6 Low (parked)
• zIIPs: 7 Medium (50% share), 8 Low (un-parked), 9–B Low (parked)
Hiperdispatch: Special Processors …
C P U A C T I V I T Y
z/OS V1R9 SYSTEM ID R71 DATE 02/15/2009
CONVERTED TO z/OS V1R10 RMF TIME 20.31.00
CPU 2097 MODEL 707 H/W MODEL E26 SEQUENCE CODE 0000000000019FC4 HIPERDISPATCH=YES
---CPU--- ---------------- TIME % ---------------- LOG PROC --I/O INTERRUPTS--
NUM TYPE ONLINE LPAR BUSY MVS BUSY PARKED SHARE % RATE % VIA TPI
0 CP 100.00 90.49 95.67 0.00 100.0 23.49 0.07
1 CP 100.00 90.03 94.55 0.00 100.0 55.24 0.00
2 CP 100.00 91.94 95.99 0.00 100.0 21.97 0.08
3 CP 100.00 69.13 96.10 0.00 50.0 0.07 0.00
4 CP 100.00 53.55 78.77 0.00 0.0 0.00 0.00
5 CP 100.00 51.81 79.16 3.80 0.0 0.00 0.00
6 CP 100.00 0.00 ----- 100.00 0.0 0.00 0.00
TOTAL/AVERAGE 63.85 90.11 350.0 100.8 0.03
7 IIP 100.00 51.86 99.96 0.00 50.0
8 IIP 100.00 51.52 99.95 0.00 0.0
9 IIP 100.00 51.40 99.94 0.00 0.0
A IIP 100.00 51.51 99.94 0.00 0.0
B IIP 100.00 51.45 99.94 0.00 0.0
TOTAL/AVERAGE 51.55 99.95 50.0
• CPs: 0–2 High, 3 Medium, 4–5 Low (un-parked), 6 Low (parked)
• zIIPs: 7 Medium, 8–B Low (un-parked)
Hiperdispatch: z/OS WLM

• z/OS WLM
  – Every 2s
  – Tests the Hiperdispatch ON/OFF switch
  – Reads the logical processor topology from PR/SM
  – Builds affinity nodes
  – Parks and un-parks low LCPs based on processor demand
  – Balances units of work across affinity nodes
• z/OS Dispatcher
  – Dispatches work on affinity nodes
  – Determines whether nodes need help
Hiperdispatch: PR/SM and z/OS WLM

• z/OS turns Hiperdispatch ON and OFF
  – Issues the Perform Topology Function (PTF) based on the HIPERDISPATCH setting
• z/OS obtains the logical to physical processor mapping in Hiperdispatch mode
  – Whether a logical processor has high, medium or low share
  – On which book the logical processor is located
• z/OS tells PR/SM which low processors should be parked or un-parked
Unpark or Park Vertical Low Processors

• UNPARK if:
  – CP: MVS busy > 95%
  – zIIP and zAAP: MVS busy > 80%
  – A reasonable quantity of unused capacity exists
• PARK if any of the following occur:
  – CP: MVS busy < 85%
  – zIIP and zAAP: MVS busy < 66%
  – Sum of VM + VL LP consumption is < 110% of the guaranteed VM share
  – Average effectiveness of the VL LP is less than 20% physical consumption
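The thresholds above can be written as a small decision sketch. This is not the real SRM logic: the function names are mine, and the "reasonable quantity of unused capacity" test is simplified into a boolean flag:

```python
def should_unpark(proc_type, mvs_busy, unused_capacity_exists):
    """Un-park a Vertical Low when MVS busy exceeds the threshold for
    the processor type AND unused capacity is available to absorb it."""
    threshold = 95.0 if proc_type == "CP" else 80.0  # zIIP / zAAP
    return mvs_busy > threshold and unused_capacity_exists

def should_park(proc_type, mvs_busy, vm_vl_consumption, vm_share,
                vl_effectiveness):
    """Park if ANY of the listed conditions occur (consumption and
    share values in % of one physical processor)."""
    return (mvs_busy < (85.0 if proc_type == "CP" else 66.0)
            or vm_vl_consumption < 1.10 * vm_share
            or vl_effectiveness < 20.0)
```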
HiperDispatch Details

[Diagram: the actions of HiperDispatch viewed here]
Hiperdispatch: z/OS

• Dispatcher Nodes
  – Nodes are created based on the high-share processors
    • Ideally a node has 4 high share processors
    • An additional node is created when at least 3 high share processors can be placed in it
    • Ideally a node encompasses only high share processors of the same book
  – Medium and low share processors are added to the created nodes based on their book placement, to keep a node on one book as much as possible
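A rough sketch of the node-sizing rule above (book placement is omitted, and folding small leftovers into existing nodes is my illustrative assumption, since the slide only specifies the "at least 3" threshold for creating an additional node):

```python
def build_nodes(num_high):
    """Split high-share processors into affinity nodes of ideally 4,
    creating an extra node only when at least 3 fit into it."""
    if num_high <= 4:
        return [num_high] if num_high else []
    full, rest = divmod(num_high, 4)
    if rest == 0:
        return [4] * full
    if rest >= 3:
        return [4] * full + [rest]
    # fewer than 3 left over: spread them across the existing nodes
    nodes = [4] * full
    for i in range(rest):
        nodes[i % full] += 1
    return nodes
```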
Hiperdispatch: Work Balancing

• WLM balances work across nodes
  – Each unit of work gets a home node assigned for CPs, zAAPs and zIIPs
    • Based on dispatch priority and consumed capacity
• Each node gets a helper list assigned
  – Required if a node has too much work
    • Work unit queue is too long
    • CPs, zAAPs and zIIPs do not enter a wait within a specified time interval
  – Regular CPs can help any node
Hiperdispatch: User Interface

• Parameter IEAOPTxx HIPERDISPATCH=YES/NO
  – HIPERDISPATCH=YES
    • Specifies that SRM should switch to Hiperdispatch mode
  – HIPERDISPATCH=NO
    • Specifies that SRM should not switch to Hiperdispatch mode
  – Default value: NO
  – Notice:
    • This parameter is valid when OA20418 has been installed
    • HIPERDISPATCH=YES adjusts CCCAWMT, ZAAPAWMT and ZIIPAWMT to a value range between 1600 and 3200
    • HIPERDISPATCH=YES forces VARYCPU to NO
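Assuming OA20418 is installed, enabling the function is then a one-line change in the active IEAOPTxx parmlib member (the member suffix and any other OPT parameters are site-specific):

```
HIPERDISPATCH=YES
```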
Hiperdispatch: User Interface …
. . . . . . . . . . . . . . . . . . . . . . . . . .
Command ===> Scroll ===> PAGE
WLM OPT Settings >SAVE<
System: AQFT Version: z/OS 010900 OPT: FT Time: not issued
OPT-Parameter: Value: Description:
ABNORMALTERM No Abnormal term. used in routing rec.
BLWLTRPCT 5 CPU cap. to promote blocked work
BLWLINTHD 20 Time blocked work waits for help
CCCAWMT 3200 Alternate wait management time value
ZAAPAWMT 3200 AWM time value for zAAPs
ZIIPAWMT 3200 AWM time value for zIIPs
…
HIPERDISPATCH Yes,Yes Hiperdispatch value(inOPT, Running)
IFAHONORPRIORITY Yes Specifies if CPs may help zAAPs
IIPHONORPRIORITY Yes Specifies if CPs may help zIIPs
…
VARYCPU No VARYCPU is enabled
VARYCPUMIN 1 VARYCPUMIN value
WASROUTINGLEVEL 0 WebSphere Routing Level
WLMOPT Tool

• Shows the status of OPT parameters
• Can be downloaded from the WLM Tools page
• Two values are shown for HIPERDISPATCH: the first reflects the OPT setting (inOPT), the second whether Hiperdispatch is really active in the system (Running)
Hiperdispatch: WLM APARs

APAR     Description                                                      Close Date        Remark
OA27797  HD SMF restructure                                               April 2009        SPE
OA27810  Lock promote time                                                Planned May       SPE, Support of OA27855
OA27869  IRA863E issued during IPL because PR/SM doesn't respond          Planned May       Workaround
OA28068  IRA863E issued while WLM rebuilds a node and a CP goes           Planned May       Workaround
         offline → HD disabled
OA27032  IWMWSYSQ wait time too high                                      January 2009
OA26789  Un-parking lows for low weight partitions difficult              April 2009        Available!!
OA26540  Message IRA863 issued every 2s in case of error                  December 2008
OA24297  0C7 because an expected abend was not correctly suppressed       June 2008
OA26382  CR 3 not set correctly in HD=ON mode                             December 2008
OA26387  0C9 when all processors of a certain type are dedicated          October 2008
OA26225  VARYCPU not restored correctly in error case; turned off on z9   November 2008
OA26272  Time slice modification in HD mode                               October 2008      D-Type
OA26251  Time slice value too high for kneecapped (very small)            September 2008
         processors
OA25934  High CPs are never disabled for I/O interrupts                   August 2008
OA25841  AWUQ fields not set correctly in HD=NO mode                      August 2008       Support of OA25825
OA25731  Incorrect free capacity calculation (ENQDP is incorrect)         August 2008
OA20418  Introduction of Hiperdispatch for z/OS R9, R8, R7.1 with z10     z10 Introduction  SPE
OA24272  Removes message IRA862I; assures that at least 2 logical         August 2008       SPE
         processors are un-parked
OA24575  Counting dedicated processors can lead to incorrect node build   June 2008
OA24322  Floating point problem                                           03/14/2008
Hiperdispatch with Other Functions

• Hiperdispatch, IRD, OA26789
  – Many partitions on one CEC
  – Effect of OA26789 for small partitions
  – Hiperdispatch and IRD Weight Management
  – Taking additional processors online
Example 1: Usage of Processor Share

[Chart: relative LPAR weighting (0–600%) from 07:45 to 12:00 for partitions IRD1, MNT1, MNT2, SYS1–SYS9, SYSA, SYSB, DEV1–DEV3, TST1–TST3, IRD2, ICF1, ZVM1–ZVM3]
• IRD1 and IRD2 have only a relatively small share of the total CEC
• But they can use much more, because other partitions do not use their shares
  – IRD1 up to 560%
  – IRD2 up to 260%
Example 1: (Un)-Park Statistic for Partition IRD1

[Chart: High / Med / Un-parked / Parked / Online processors for system IRD1 per RMF interval, 07:45 to 12:00, based on the RMF CPU Activity Report; 100 = 1 processor]
• Park/Unpark statistic for system IRD1 (based on the RMF CPU Activity Report)
• The test scenario started with only 2 online processors; over time, 3 additional logical processors were brought online
• Because of the high demand, the additional processors were un-parked most of the time
• Note: this is the scenario that exposed the problem addressed by OA26789
• Scaling on the Y-axis: 100 = 1 processor for the complete RMF interval
Hiperdispatch & WLM Policy

• Processing benefits:
  – Reduced multi-processor effects
  – Improved hardware cache re-use and locality of reference characteristics
• Therefore, the magnitude of potential improvement is related to:
  – Number of CPs
  – Size of the z/OS images in the configuration
  – Logical/physical processor ratio
  – Memory reference pattern or storage hierarchy characteristics of the workload
  – Exploitation of IRD vary CPU management
Processing Benefits

• The range of benefit is expected to be from 0% to 10%
• Rule of thumb (and we all know it depends):
  – 1–2% for a 1-book environment (fewer than 12 purchased CPs/zIIPs/zAAPs)
  – 2–4% for a 2-book environment (fewer than 26 purchased CPs/zIIPs/zAAPs)
  – 4–7% for a 3-book environment (fewer than 40 purchased CPs/zIIPs/zAAPs)
  – 7–10% for a 4-book environment (fewer than 64 purchased CPs/zIIPs/zAAPs)
WLM Policy Considerations

• In HD mode we have reduced access to LPs (queuing theory!)
• Proper WLM goals and importance become more important
• Monitor whether goals are being met in HD mode, and review service policy definitions
  – Especially if the number of high and/or medium processors is smaller than 3
  – Refer to the WLM Redbook SG24-6472, Chapter 6, "Impact of the number of engines on velocity"
• SYSSTC SRB work still retains the capability to execute on any available LP
Hiperdispatch: Summary

• Hiperdispatch is a combination of PR/SM and z/OS functions that provides more efficient dispatching in large-scale processor environments
  – PR/SM provides a much better mapping of logical to physical processors
  – z/OS re-dispatches work on a subset of the logical processors
• Hiperdispatch is most efficient for systems with many logical processors
  – It provides the base to grow with many processors on System z
Thank You

Merci (French) · Grazie (Italian) · Gracias (Spanish) · Obrigado (Brazilian Portuguese) · Danke (German) · Bedankt (Dutch)

End of Presentation