IBM eServer p5 and pSeries · © 2005 IBM Corporation
IBM eServer p5 Systems: Performance Monitoring
John Sheehy, [email protected]

eServer p5 Systems: Performance Monitoring


Page 1: eServer p5 Systems: Performance Monitoring


IBM eServer p5 Systems: Performance Monitoring

John Sheehy, [email protected]

Page 2: eServer p5 Systems: Performance Monitoring


Agenda

1) Overview

2) POWER4 & DLPAR
   - Hardware
   - Granularity
   - DLPAR actions

3) POWER5 & Micro-partitions
   - Hardware
   - Granularity
   - Micro-partition configs
   - Scheduling process

4) Tools for Performance Monitoring (Snapshot)
   - vmstat
   - iostat
   - lparstat
   - topas
   - nmon

Page 3: eServer p5 Systems: Performance Monitoring


Agenda (cont.)

5) Accounting Tools (Trend)

6) Performance Tuning Tools
   - vmo
   - ioo
   - no
   - nfso

7) General Performance Recommendations for DBs
   - 64-bit kernel
   - 64-bit binary
   - CIO
   - vmo tuning
   - no tuning
   - Disk layout

8) Questions/Wrap-up

Page 4: eServer p5 Systems: Performance Monitoring


Processors

Page 5: eServer p5 Systems: Performance Monitoring


IBM POWER technology is everywhere

Servers, Workstations, PCs, Gaming Consoles, Embedded

(roadmap graphic: POWER2™ → POWER3™ → POWER4™ → POWER4+™ → POWER5 → POWER5+™ → POWER6™ (planned*); mid-range PowerPC® 603e, 750, 750FX and 970; embedded PowerPC 401, 405GP and 440GP; new products planned in each line)

*All statements regarding IBM's future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only.

Page 6: eServer p5 Systems: Performance Monitoring


Autonomic Computing Enhancements

2001: POWER4 (180 nm; 1+ GHz cores; shared L2; distributed switch)
• Chip multiprocessing: distributed switch, shared L2
• Dynamic LPARs (16)

2002-4: POWER4+ (130 nm; 1.5+ GHz cores; shared L2; distributed switch)
• Reduced size
• Lower power
• Larger L2
• More LPARs (32)

2004: POWER5 (130 nm; up to 1.9 GHz cores; shared L2; distributed switch)
• Simultaneous multithreading
• Micro-Partitioning
• Selective dynamic firmware updates (2Q05)
• Enhanced scalability
• High throughput performance
• Enhanced cache/memory subsystem
• Up to 254 LPARs

2005-6: POWER5+ (90 nm; shared L2; distributed switch) [planned*]

2006-7: POWER6 (65 nm; ultra-high frequency cores; L2 caches; advanced system features) [planned*]

*All statements regarding IBM's future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only.

Page 7: eServer p5 Systems: Performance Monitoring


POWER5 architecture

Simultaneous multithreading

Hardware support for Micro-Partitioning
– Sub-processor allocation

Enhanced distributed switch

Enhanced memory subsystem
– Larger L3 cache: 36MB
– Memory controller on-chip

Improved High Performance Computing (HPC)

Dynamic power saving
– Clock gating

(chip diagram: two POWER5 cores sharing a 1.9MB L2 cache, with L3 directory/control and on-chip memory controller on the enhanced distributed switch; external interfaces: GX+, chip-chip, MCM-MCM, SMP link, memory and L3)

POWER5: 1.5, 1.65 and 1.9 GHz; 276M transistors; 0.13 micron

Page 8: eServer p5 Systems: Performance Monitoring


Modifications to IBM POWER4 to create POWER5

(diagram: in POWER4, the L3 cache and its controller sit between the fabric controller and the memory controller; in POWER5, the L3 controller and memory controller attach directly to the L2/fabric, with the L3 alongside)

Larger L2 and L3; reduced latencies
Faster memory access

Page 9: eServer p5 Systems: Performance Monitoring


POWER4+ to POWER5 comparison

Feature                      | POWER4+ design                          | POWER5 design                              | Benefit
Size                         | 389mm2                                  | 412mm2                                     | 50% more transistors in the same space
Chip interconnect:           |                                         |                                            |
  Type                       | Distributed switch                      | Enhanced dist. switch                      | Better systems throughput
  Intra-MCM data bus         | 1/2 proc. speed                         | Processor speed                            | Better performance
  Inter-MCM data bus         | 1/2 proc. speed                         | 1/2 proc. speed                            |
Partitioning support         | 1 processor                             | 1/10th of processor                        | Better usage of processor resources
Simultaneous multithreading  | No                                      | Yes                                        | Better processor utilization; 30%* system improvement
L1 cache                     | 2-way associative                       | 4-way associative                          | Improved L1 cache performance
L2 cache                     | 1.5MB, 8-way associative                | 1.9MB, 10-way associative                  | Fewer L2 cache misses; better performance
L3 cache                     | 32MB, 8-way associative, 118 clock cycles | 36MB, 12-way associative, reduced latency | Better cache performance
Floating-point registers     | 72                                      | 120                                        | Better performance

* Based on IBM rPerf projections

Page 10: eServer p5 Systems: Performance Monitoring


~ p5: Simultaneous multithreading

(figure: per-cycle execution-unit occupancy, POWER4 (single-threaded) vs. POWER5 (simultaneous multithreading); units FX0, FX1, LS0, LS1, FP0, FP1, BRX, CRL; cycles shaded as thread 0 active, thread 1 active, or no thread active)

Utilizes unused execution unit cycles

Presents symmetric multiprocessing (SMP) programming model to software

Natural fit with superscalar out-of-order execution core

Dispatch two threads per processor: “It’s like doubling the number of processors.”

Net result: – Better performance– Better processor utilization

Appears as 4 CPUs per chip to the operating system (AIX 5L V5.3 and Linux)

(chart: system throughput, single-threaded (ST) vs. SMT, on POWER5 with simultaneous multithreading)

Page 11: eServer p5 Systems: Performance Monitoring


Logical Partition Processor Terminology

Page 12: eServer p5 Systems: Performance Monitoring


Capped Shared Processor LPAR

Page 13: eServer p5 Systems: Performance Monitoring


Uncapped Shared Processor LPAR

Page 14: eServer p5 Systems: Performance Monitoring


Capacity & Virtual CPU Relationship

Page 15: eServer p5 Systems: Performance Monitoring


Virtualization

Page 16: eServer p5 Systems: Performance Monitoring


~ p5 systems virtualization architecture

(diagram: the Hypervisor™ virtualizes processors, memory and expansion slots into virtual processors and virtual adapters for AIX 5L V5.2, AIX 5L V5.3, Linux and i5/OS* partitions; a Virtual I/O server bridges virtual network and storage to local devices, networks and network storage; a service processor and unassigned on-demand resources sit alongside; the Hardware Management Console (HMC) provides workload management and provisioning)

*Available on 1.65 GHz p5-570, p5-590 and p5-595 models

Page 17: eServer p5 Systems: Performance Monitoring


~ p5 Advanced POWER Virtualization option

Virtual I/O Server
– Shared Ethernet
– Shared SCSI and Fibre Channel-attached disk subsystems
– Supports AIX 5L V5.3 and Linux* partitions

Micro-Partitioning
– Share processors across multiple partitions
– Minimum partition: 1/10th processor
– AIX 5L V5.3, Linux*, or i5/OS**

Partition Load Manager
– Both AIX 5L V5.2 and AIX 5L V5.3 supported
– Balances processor and memory requests

Managed via HMC

* SLES 9 or RHEL AS 3

(diagram: dynamically resizable CPU pools; dedicated partitions (AIX 5L V5.2, Linux) on 2, 4 and 6 CPUs; micro-partitions (AIX 5L V5.3, Linux, i5/OS V5R3**) sharing 6 CPUs; a 1-CPU Virtual I/O server partition provides Ethernet and storage sharing over virtual I/O paths; a Partition Load Manager server manages PLM-agent partitions while others run unmanaged; all on the Hypervisor)

**Available on 1.65 GHz p5-570, p5-590 and p5-595 models

Page 18: eServer p5 Systems: Performance Monitoring


Micro-Partitioning

Increased number of LPARs
– Micro-partitions: up to 254*
– Dynamic LPARs: up to 32*

Configured via the HMC

Number of logical processors
– Minimum/maximum

Entitled capacity
– In units of 1/100 of a CPU
– Minimum 1/10 of a CPU

Variable weight
– % share (priority) of surplus capacity

Capped or uncapped partitions

(diagram: a shared pool of 6 CPUs hosts micro-partitions running AIX 5L V5.3, Linux and i5/OS V5R3**, each with min/max bounds on its entitled capacity; dynamic LPARs running AIX 5L V5.2 and AIX 5L V5.3 use whole processors; all partitions run on the Hypervisor)

*On p5-590 and p5-595 with a minimum of 26 active processors
**Available on 1.65 GHz p5-570, p5-590 and p5-595 models

Page 19: eServer p5 Systems: Performance Monitoring


~ p5 hardware

Page 20: eServer p5 Systems: Performance Monitoring


~ p5 570: “Pay as you grow” modular architecture

New POWER5 mid-range system
– 2-way, 4-way, 8-way, 12-way and 16-way systems
– Processor speeds: 1.65 GHz and 1.9 GHz

Features per primary module
– Up to 6 (3+3) disk drive bays
– 6 PCI-X slots
– Service processor
– Dual 10/100/1000 Ethernet
– USB: 2
– HMC ports: 2 (max per system)

Functions supported per base system
– Primary + up to 3 additional modules
– Up to 512GB memory
– Up to 8 RIO-2 drawers
– Redundant cooling and power
– Dynamic LPAR
– IBM Advanced POWER Virtualization option
  – Micro-Partitioning support (1/10th processor granularity)
  – Virtual networking and storage support
  – Partition Load Manager
– CoD options

Software support
– AIX 5L V5.2 and AIX 5L V5.3
– SLES 9, RHEL AS 3
– i5/OS™* (one processor)

* On 1.65 GHz POWER5 model

Page 21: eServer p5 Systems: Performance Monitoring


~ p5 570 modular architecture

Modules
– Primary module: serial number, system clock, service processor
– Additional modules (up to three): feature code, no serial number

Cabling
– Cables for each configuration: 8-way, 12-way and 16-way
– Contains processor fabric bus
– Installed on front of drawer
– FSP cable at rear for service processor, clock signals, etc. (similar to SMP flex cable)

Page 22: eServer p5 Systems: Performance Monitoring


~ p5 570 packaging options

                        | 8-way                  | 12-way                 | 16-way
Processors              | 1.5, 1.65 or 1.9 GHz   | 1.65 or 1.9 GHz        | 1.65 or 1.9 GHz
Memory                  | 8 to 256GB             | 12 to 384GB            | 16 to 512GB
Adapters                | 12 to 95 PCI-X slots   | 18 to 129 PCI-X slots  | 24 to 163 PCI-X slots
Integrated 10/100/1000  | 4                      | 6                      | 8
RIO drawers             | 12                     | 16                     | 20
Storage                 | 46.8TB                 | 63.0TB                 | 79.2TB
LPARs                   | up to 10 per processor* | up to 10 per processor* | up to 10 per processor*

*Requires Advanced POWER Virtualization option

Page 23: eServer p5 Systems: Performance Monitoring


Reliability, Availability and Serviceability

Page 24: eServer p5 Systems: Performance Monitoring


POWER4 delivered major enhancements while enhancing basic availability:
– First Failure Data Capture
– DDR Chipkill™ memory
– Bit-steering/redundant memory
– Memory soft scrubbing
– Redundant power and fans
– Dynamic processor deallocation
– ECC memory
– Persistent memory deallocation
– Hot-plug PCI slots, fans and power
– Internal light path diagnostics
– Hot-swappable disk bays

POWER5 RAS improvements, designed to significantly reduce scheduled hardware outages:
– Selected concurrent firmware update (2Q05)
– I/O error handling extended beyond the base PCI adapter
– ECC extended to inter-chip connections for the fabric/processor buses (data, address, control)
– Partial L2 cache deallocation
– L3 cache line deletes improved from 2 to 10 for better self-healing capability
– Statement of direction: Service Processor Failover (2H05)

Page 25: eServer p5 Systems: Performance Monitoring


Software

Page 26: eServer p5 Systems: Performance Monitoring


vmstat

Pre-AIX v5.3

$ vmstat 1
kthr    memory              page                       faults        cpu
----- ----------- ------------------------ ------------ -----------
 r  b    avm   fre  re  pi  po  fr   sr  cy  in   sy  cs us sy id wa
 1  1 224551 16218   0   0   1 206  105   0 193   41 266 10 11 59 20
 0  1 224098 16707   0   0   0   0    0   0 313 3278 297  0  4 94  2
 0  1 224098 16707   0   0   0   0    0   0 312  407 254  0  0 99  0

AIX 5.3

$ vmstat 1

System configuration: lcpu=4 mem=1024MB ent=0.50

kthr    memory              page                       faults        cpu
----- ----------- ------------------------ ------------ -----------------------
 r  b    avm  fre  re  pi  po  fr  sr  cy  in  sy  cs us sy id wa   pc   ec
 0  0 149058 3383   0   0   0   0   0   0  14 123 193  0  1 99  0 0.01  1.8
 0  0 149058 3383   0   0   0   0   0   0   7  27 166  0  1 99  0 0.01  1.2
 0  0 149058 3383   0   0   0   0   0   0   4  24 148  0  1 99  0 0.01  1.3
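On POWER5, the two new columns are the ones to watch. As a minimal sketch (not an IBM tool), the awk commands below pull `pc` (physical processors consumed) and `ec` (% entitled capacity) out of a captured sample line; the sample reproduces the output above, and on a live AIX 5.3 LPAR you would feed in `vmstat 1 5` instead:

```shell
# Sample data row from the AIX 5.3 vmstat output above; pc and ec are
# the last two fields on each kthr/cpu line.
sample='0 0 149058 3383 0 0 0 0 0 0 14 123 193 0 1 99 0 0.01 1.8'
pc=$(echo "$sample" | awk '{print $(NF-1)}')   # physical CPU consumed
ec=$(echo "$sample" | awk '{print $NF}')       # percent of entitlement used
echo "physc=$pc entc=$ec%"
```

Comparing `pc` against the partition's entitlement (ent=0.50 here) shows at a glance whether the LPAR is drawing on the shared pool.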

Page 27: eServer p5 Systems: Performance Monitoring


iostat

Pre-AIX v5.3

$ iostat 1

tty:      tin    tout   avg-cpu:  % user  % sys  % idle  % iowait
          0.3   139.2              9.7    11.3    58.9     20.1

Disks:    % tm_act   Kbps   tps    Kb_read     Kb_wrtn
hdisk0      22.9     35.9   39.0   239329835   210138840
hdisk1      19.5     29.9   38.4   131729903   242784636

AIX 5.3

$ iostat 1

System configuration: lcpu=4 drives=6 ent=0.50

tty:      tin    tout   avg-cpu:  % user  % sys  % idle  % iowait  physc  % entc
          0.0    50.0              0.0    0.3    99.7      0.0      0.0     1.0

Disks:    % tm_act   Kbps   tps   Kb_read   Kb_wrtn
hdisk1      0.0      0.0    0.0      0         0
hdisk0      0.0      0.0    0.0      0         0
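A small sketch of how the `% tm_act` column can be screened for hot disks. The sample rows reproduce the pre-5.3 output above; on a live system you would pipe `iostat 1 5` in instead, and the 20% threshold is an illustrative assumption, not an IBM guideline:

```shell
# Filter disk lines whose %tm_act (field 2) exceeds a chosen threshold.
busy=$(printf '%s\n' \
  'hdisk0 22.9 35.9 39.0 239329835 210138840' \
  'hdisk1 19.5 29.9 38.4 131729903 242784636' |
  awk '$2 > 20 {print $1}')   # 20% busy: assumed screening threshold
echo "busy disks: $busy"
```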

Page 28: eServer p5 Systems: Performance Monitoring


lparstat

Pre-AIX v5.3

NO SUCH COMMAND

AIX 5.3

$ lparstat 1

System configuration: type=Shared mode=Uncapped smt=On lcpu=4 mem=1024 psize=4 ent=0.50

%user  %sys  %wait  %idle  physc  %entc  lbusy  vcsw  phint
-----  ----  -----  -----  -----  -----  -----  ----  -----
  0.2   0.3    0.0   99.5   0.01    1.0    0.0   378      1
  0.0   0.3    0.0   99.7   0.00    0.7    0.0   374      0
  0.0   0.3    0.0   99.7   0.00    0.7    0.0   366      0
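For an uncapped partition, a useful check is whether average physical consumption is approaching the entitlement. A minimal sketch (illustrative threshold, not an IBM rule), using the `physc` values and `ent=0.50` from the sample above:

```shell
# Average the physc column samples and compare against entitlement.
ent=0.50
avg=$(printf '%s\n' 0.01 0.00 0.00 |
      awk '{s += $1; n++} END {printf "%.2f", s/n}')
# Flag if average consumption exceeds 80% of entitlement (assumed threshold).
near=$(awk -v a="$avg" -v e="$ent" 'BEGIN {print (a > 0.8 * e) ? "yes" : "no"}')
echo "avg physc=$avg of ent=$ent, near-capacity=$near"
```

On a live AIX 5.3 system the input would come from `lparstat 1 60` rather than hard-coded samples.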

Page 29: eServer p5 Systems: Performance Monitoring


topas – AIX v5.2

Topas Monitor for host:    erp41p01             EVENTS/QUEUES    FILE/TTY
Fri Jul  1 03:08:00 2005   Interval:  2         Cswitch    3384  Readch   217.3K
                                                Syscall   16922  Writech   42457
Kernel    3.3  |##                          |   Reads       153  Rawin         0
User     75.6  |######################      |   Writes       66  Ttyout      788
Wait      0.1  |#                           |   Forks        33  Igets         0
Idle     21.0  |#######                     |   Execs        30  Namei       619
                                                Runqueue    5.5  Dirblk        0
Network  KBPS   I-Pack  O-Pack   KB-In  KB-Out  Waitqueue   0.0
lo0     357.0    938.0   938.0   358.0   358.0
en2      75.7    132.0   135.0    54.9    96.9  PAGING           MEMORY
en0      10.2     55.0    34.0    13.5     6.9  Faults     4998  Real,MB   31743
                                                Steals        0  % Comp     42.6
Disk    Busy%     KBPS     TPS KB-Read KB-Writ  PgspIn        0  % Noncomp  47.9
hdisk1    6.4     99.7    25.4     0.0   200.0  PgspOut       0  % Client   48.0
hdisk0    4.9     95.7    24.4     0.0   192.0  PageIn       11
hdisk201  0.4      8.0     0.5    16.0     0.0  PageOut       8  PAGING SPACE
hdisk13   0.4      6.0     1.5     0.0    12.0  Sios         20  Size,MB   32768
                                                                 % Used      9.7
Name         PID   CPU%  PgSp  Owner           NFS (calls/sec)  % Free     90.2
db2sysc  2093296   14.3   1.2  fiprdi          ServerV2      0
db2sysc  2338982   14.0   1.7  hrprdi          ClientV2      0  Press:
db2sysc  2932862   14.0   1.7  hrprdi          ServerV3      0  "h" for help
db2sysc  1228952   13.4   1.2  fiprdi          ClientV3      0  "q" to quit
db2sysc  1212594   12.7  19.8  fiprdi

Page 30: eServer p5 Systems: Performance Monitoring


topas – AIX v5.3

Topas Monitor for host:    baltar               EVENTS/QUEUES    FILE/TTY
Fri Jul  1 02:44:19 2005   Interval:  2         Cswitch     151  Readch     5211
                                                Syscall     239  Writech    1286
Kernel    0.4  |#                           |   Reads         6  Rawin         0
User      0.2  |#                           |   Writes        3  Ttyout      210
Wait      0.0  |#                           |   Forks         0  Igets         0
Idle     99.4  |############################|   Execs         0  Namei        22
Physc = 0.01   %Entc = 1.2                      Runqueue    0.0  Dirblk        0
                                                Waitqueue   0.0
Network  KBPS   I-Pack  O-Pack   KB-In  KB-Out
en0       0.7     14.0     3.0     0.8     0.7  PAGING           MEMORY
lo0       0.0      0.0     0.0     0.0     0.0  Faults       47  Real,MB    1023
                                                Steals        0  % Comp     59.3
Disk    Busy%     KBPS     TPS KB-Read KB-Writ  PgspIn        0  % Noncomp  40.8
hdisk0    0.0      2.0     0.0     4.0     0.0  PgspOut       0  % Client   44.3
hdisk2    0.0      0.0     0.0     0.0     0.0  PageIn        2
hdisk1    0.0      0.0     0.0     0.0     0.0  PageOut       0  PAGING SPACE
hdisk3    0.0      0.0     0.0     0.0     0.0  Sios          2  Size,MB     512
                                                                 % Used      6.4
Name         PID   CPU%   PgSp  Owner          NFS (calls/sec)  % Free     93.5
topas     413718    0.0    1.1  root           ServerV2      0
sshd      540752    0.0    0.9  jes            ClientV2      0  Press:
dsmserv   536782    0.0  112.5  root           ServerV3      0  "h" for help
gil        61470    0.0    0.1  root           ClientV3      0  "q" to quit
getty     532508    0.0    0.4  root

Page 31: eServer p5 Systems: Performance Monitoring


topas -L -- AIX v5.3

Interval:  2        Logical Partition: baltar          Fri Jul  1 02:54:25 2005
Psize:  -           Shared SMT ON                      Online Memory:    1024.0
Ent:  0.50          Mode: UnCapped                     Online Logical CPUs:   4
Partition CPU Utilization                              Online Virtual CPUs:   2
%usr %sys %wait %idle physc %entc %lbusy  app vcsw phint %hypv hcalls
   0    0     0    99   0.0  1.10   0.00    -  741     3   0.0      0
===============================================================================
LCPU  minpf majpf  intr  csw  icsw  runq  lpa  scalls  usr  sys  _wt  idl    pc  lcsw
Cpu0      0     0    52   42    21     1  100      71   15   73    0   12  0.00    87
Cpu1      0     0    34    0     0     0    0       0    0   13    0   87  0.00    86
Cpu2      0     0   349  234   126     0  100      28    6   41    0   53  0.00   284
Cpu3      0     0    24    0     0     0    0       0    0    6    0   94  0.00   284

Page 32: eServer p5 Systems: Performance Monitoring


AIX 5L V5.3

Data center management
– SUMA patch tool
– NIM enhancements

Development environment
– POSIX Realtime APIs
– Linker/loader affinity
– "procmon" and trace GUI

Enterprise scalability
– POWER5 support
– Simultaneous multithreading processor support
– 1,024-disk volume groups
– NFSv4

Resource management
– IBM Advanced POWER Virtualization option
  – Micro-Partitioning
  – Virtual networking and storage
  – Partition Load Manager
– Advanced accounting
– JFS2 file system shrink

Page 33: eServer p5 Systems: Performance Monitoring


IBM AIX 5L V5.3 highlights

Service Update Management Assistant (SUMA)
– Policy-based automated download of fixes from IBM to the client's fix distribution center
– Policies can include different types of fixes to retrieve

NIM enhancements
– NIM communications security
– Highly available NIM
– Post-install configuration of EtherChannel and virtual IP addresses

– JFS2 file system shrink
– NFSv4 support
– Quotas for JFS2
– Scalability enhancements for fsck and logredo
– JFS2 additional scalability
– Installation performance improvements
– Dynamic large page pool size
– Kernel locking scalability
– LVM support for 1,024-disk volume groups

Page 34: eServer p5 Systems: Performance Monitoring


AIX 5L V5.3 accounting/charge-back

Allows IT to bill their users based on actual usage of resources

Allows accurate billing of costs associated with server consolidation environments

Administrators can associate applications with “projects” (cost centers)

Varying levels of accounting data can be collected from process-level to an entire system

Primarily focused on collecting resource usage data versus reporting capabilities

Application-based accounting (single OS image)
– User programs are placed in accounting classes
– Classification basis: application name and/or user
– Measures and records resource utilization, such as CPU, per class

LPAR-based accounting
– Measures resource utilization (such as CPU) for the LPAR in which each OS image is running
– Records resource info into accounting records

No changes to middleware or applications are needed

Page 35: eServer p5 Systems: Performance Monitoring


AIX 5L, i5/OS, and Linux POWER5 support

Feature                      | AIX 5L V5.2 | AIX 5L V5.3  | i5/OS V5R3   | SLES 9       | RHEL AS 3
Micro-partitions             | N           | Y            | Y            | Y            | Y
Dynamic LPAR: processors     | Y (1)       | Y (1/10th)   | Y (1/10th)   | Y (1/10th)   | Static
Dynamic LPAR: memory         | Y           | Y            | Y            | Static       | Static
Dynamic LPAR: I/O            | Y           | Y            | Y            | Y            | N
Simultaneous multithreading  | N           | Y            | Y            | Y            | Y
Virtual Ethernet & SCSI      | N           | Y            | N            | Y            | Y
PCI hot-plug                 | Y           | Y            | Y            | Y            | N
Concurrent diagnostics       | Y           | Y            | Y            | N            | N
Large page support           | Y           | Y            | Y            | Y            | N
VLAN                         | Y           | Y            | Y            | Y            | Y

Page 36: eServer p5 Systems: Performance Monitoring


Performance Tuning Tools

vmo – Manages Virtual Memory Manager tunable parameters.
vmstat – Reports virtual memory statistics.

ioo – Manages input/output tunable parameters.
iostat – Reports CPU statistics, asynchronous I/O (AIO) statistics, and I/O statistics for the entire system, adapters, tty devices, disks and CD-ROMs.

no – Manages network tuning parameters.
netstat – Shows network status.

nfso – Manages Network File System (NFS) tuning parameters.
nfsstat – Displays statistical information about NFS and Remote Procedure Call (RPC) calls.

lvmo – Manages LVM pbuf tunable parameters.
lvmstat – Reports I/O statistics for logical partitions, logical volumes and volume groups; also reports pbuf and blocked I/O statistics, and allows pbuf allocation changes to volume groups.

Page 37: eServer p5 Systems: Performance Monitoring


I/O Wait means what?"I/O Wait" (vmstat, iostat, sar,...) is a measurement of CPU idle time. It's the time(%) the CPU spends waiting for an I/O to complete before it can continue processing.

It's a common misconception that "I/O Wait" CPU cycles are blocked and cannot be used by other processes. In fact, "I/O Wait" cycles are available for use by any other runnable process.

High %iowait has historically indicated a problem in I/O performance. However, due to advances in CPU performance, high %iowait may be a misleading indicator, especially in random I/O workloads. It is misleading because %iowait measures CPU idle time, not I/O. To be precise, %iowait measures the percent of time the CPU is idle while waiting for an I/O to complete. As such, it is only indirectly related to I/O performance, which can lead to false conclusions. It is possible to have a healthy system with nearly 100% iowait, or a disk bottleneck with 0% iowait.

High %iowait is becoming more common as processor speeds increase. Gains in processor performance have significantly outpaced disk performance. While processor performance has doubled every 12 to 18 months, disk performance (in IOPS per disk) has remained relatively constant. This imbalance has resulted in a trend toward higher %iowait on healthy systems.

Page 38: eServer p5 Systems: Performance Monitoring


I/O Wait ExampleThe following example illustrates how faster CPU's can increase %iowait. Assume we upgrade a system with CPU's that are 4 times faster. All else remains unchanged.

Before CPU Upgrade

CPU time = 40 ms
IO time = 20 ms
Total transaction time = CPU + IO = 40 + 20 = 60 ms
%iowait = IO time / total time = 20/60 = 33%

After CPU Upgrade

CPU time = 40 ms / 4 = 10 ms
IO time = 20 ms
Total transaction time = CPU + IO = 10 + 20 = 30 ms
%iowait = 20/30 = 66%

In this example, transaction performance doubled (total time fell from 60 ms to 30 ms), yet %iowait also doubled. Here, the absolute value of %iowait is a misleading indicator of an I/O problem.

from: http://www.aixtips.com/AIXtip/iowait.htm
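The same arithmetic, checked mechanically (integer-truncated, matching the figures in the example above):

```shell
# %iowait = IO time / (CPU time + IO time); times in ms from the example.
before=$(awk 'BEGIN { printf "%d", int(100 * 20 / (40 + 20)) }')  # pre-upgrade
after=$(awk 'BEGIN { printf "%d", int(100 * 20 / (10 + 20)) }')   # CPU 4x faster
echo "before upgrade: ${before}%  after upgrade: ${after}%"
```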

Page 39: eServer p5 Systems: Performance Monitoring


General DB Tuning Recommendations

- 64-bit kernel
- 64-bit binaries
- Use CIO on JFS2
- If stuck on JFS, then use DIO
- Tune vmo settings
  - CIO and DIO remove some of the need for this, but it is still needed
  - Be careful with maxperm%, maxclient% and minperm%
  - Don't use strict settings
  - JFS uses maxperm%; JFS2 uses maxclient%
  - strict_maxclient is enabled by default and should be unset
  - Look at physical page-ins and page-outs to determine the settings
  - Minimize (ideally eliminate) physical I/Os
- Tune no settings
  - TCP send and receive space should be the same on the DB server and its clients
  - Some settings depend on the network speed
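As a minimal sketch of the vmo/no changes described above, the script below only prints a change list for review before it is applied on an AIX 5.3 host. The tunable names (strict_maxclient, minperm%, maxperm%, maxclient%, tcp_sendspace, tcp_recvspace) are real AIX 5.3 tunables; the numeric targets are illustrative assumptions only, to be derived from observed page-in/page-out rates on the actual system:

```shell
# Emit (do not execute) a candidate tuning plan for operator review.
plan=$(cat <<'EOF'
vmo -p -o strict_maxclient=0
vmo -p -o minperm%=10
vmo -p -o maxperm%=80 -o maxclient%=80
no -p -o tcp_sendspace=262144 -o tcp_recvspace=262144
EOF
)
echo "$plan"
```

Keeping the plan as text rather than running it directly makes it easy to diff against `vmo -a` output and to apply the same values on every database client.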

Page 40: eServer p5 Systems: Performance Monitoring


General DB Tuning Recommendations (cont.)

- Tune lvmo settings when using raw LVs
- Tune ioo settings, for both cooked and raw
  - Especially true with JFS filesystems
  - numfsbufs: tune to 500 as a starting point (default is about 200)
- Tune async I/O if not using raw LVs
- The bottleneck should always be the disk, unless the workload is heavily computational
  - Make sure data is laid out correctly on the disks
  - Separate indexes, logs, tables and tempspace onto different physical disks

Page 41: eServer p5 Systems: Performance Monitoring


vmstat -v to determine LV and FS buffer blocking

$ vmstat -v
             262144 memory pages
             243392 lruable pages
               2463 free pages
                  2 memory pools
              77901 pinned pages
               80.0 maxpin percentage
               20.0 minperm percentage
               50.0 maxperm percentage
               43.5 numperm percentage
             105983 file pages
                0.0 compressed percentage
                  0 compressed pages
               47.4 numclient percentage
               50.0 maxclient percentage
             115387 client pages
                  0 remote pageouts scheduled
                  0 pending disk I/Os blocked with no pbuf
               2534 paging space I/Os blocked with no psbuf
               2740 filesystem I/Os blocked with no fsbuf
              12449 client filesystem I/Os blocked with no fsbuf
                  0 external pager filesystem I/Os blocked with no fsbuf
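A small sketch of how the blocked-I/O counters can be totaled automatically. The heredoc reproduces the sample above; on a live system you would pipe `vmstat -v` in instead. Counters that climb steadily between samples suggest raising the corresponding buffers (pbuf via lvmo, fsbuf via ioo numfsbufs, psbuf by adding paging space):

```shell
# Sum every "... blocked with no <buf>" counter from vmstat -v output.
total=$(cat <<'EOF' | awk '/blocked with no/ { sum += $1 } END { print sum }'
    0 pending disk I/Os blocked with no pbuf
 2534 paging space I/Os blocked with no psbuf
 2740 filesystem I/Os blocked with no fsbuf
12449 client filesystem I/Os blocked with no fsbuf
    0 external pager filesystem I/Os blocked with no fsbuf
EOF
)
echo "I/Os blocked on buffer shortages: $total"
```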

Page 42: eServer p5 Systems: Performance Monitoring


Contact Information

John Sheehy

[email protected]
(352) 333-0112

https://www.e-techservices.com/