Capacity Planning June

SP0701 - Building a GIS Book Release

SP0901 - Changes since SP0701
1 Remove workflow data source and platform service times from workflow tab.
2 Include workflow chatter and software service times on CPT2006 tab (Requirements module)
3 Simplify platform service time formulas (use local worksheet service time values)
4 Move network latency to new workflow chatter column.
5 Remove conditional format for latency in column I.
6 Make all site network bandwidth capacity cells white (user entry cell).
7 Add an additional Web server (EA for Enterprise Applications)
8 Include additional software service times for new Web server on Workflow tab
9 Include additional software and platform service times for new Web server on CPT2006 tab

10 Changed platform tier network interface card background to match standard platform colors
11 Move Software Service times to columns AV:BD
12 Hide workflow loads analysis columns
13 Update CPT colors for more consistency (compatible with Office 2003 colors)
14 Remove "Unique Display traffic" column (unique traffic can be temporarily inserted in "client traffic" columns AV)
15 Replace all queue time calculations with more accurate formulas (component throughput must be less than capacity)
16 Add additional servers when over 98% utilization
17 Add "think time" as a workflow parameter on workflow tab (batch process has zero think time)
18 Change CPT2006!BH3 to Max DPM (includes workflow think time)
19 Increase the number of BLINK interval options
20 Relocate cell data validation lists to rows near the bottom of CPT2006 worksheet
21 Move Configuration lookup array to CPT worksheet
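Items 17-18 above tie peak throughput to workflow think time. A minimal sketch of that relationship, assuming each display cycle takes the platform response time plus the user think time (the function name and the 0.5 s response figure are illustrative, not taken from the workbook):

```python
def displays_per_minute(response_time_s, think_time_s):
    """Peak display rate for a single client, assuming one display
    is generated every (response time + think time) seconds.
    This is an inferred reading of "Max DPM (includes workflow
    think time)", not the workbook's exact formula."""
    return 60.0 / (response_time_s + think_time_s)

# A batch process has zero think time, so it pushes displays
# back-to-back; an interactive user pauses between displays.
batch_dpm = displays_per_minute(0.5, 0.0)        # 120.0
interactive_dpm = displays_per_minute(0.5, 5.5)  # 10.0
```

This is why batch workflows drive much higher per-client loads than interactive workflows at the same service times.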

SP1001 - Changes since SP0901
1 Introduce dynamic productivity function cell BI2 (DEFAULT, ADJUST, SAVE)
2 CPT2006!BI2 turns RED while it is in the DEFAULT setting (move to ADJUST to reduce productivity, and to SAVE once calculations are stable - don't leave it in ADJUST)
3 CPT2006! lookup and reference cells moved to far bottom of spreadsheet, well below row 100 (out of the way)
4 CPT2006! Platform Utilization Profile source data moved to open columns to stay active during the copy sheet function.
5 Add new PVT tab - compute platform service times based on measured throughput and platform utilization
6 Update queuing model to provide a closer match to response times in PVT test results.
7 ArcGIS Image Server target service times updated to match workbook architecture (does not include processing for ArcGIS Server Image Service)
8 Modified the dynamic productivity function so it converges rapidly on the adjusted value. A Blink of "1" works best for most calculations - a smaller blink may be required for very high capacity settings.
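Item 5's PVT calculation can be sketched as follows. The relationship shown (service time = 60 × cores × utilization / throughput in DPM) is inferred from the displays-per-hour figures elsewhere in the workbook, and the function name is ours:

```python
def platform_service_time(cores, utilization, throughput_dpm):
    """Back-calculate per-display service time (seconds) from a
    measured test: total busy core-seconds per minute divided by
    the displays produced in that minute. Inferred formula, not
    copied from the PVT tab."""
    return 60.0 * cores * utilization / throughput_dpm

# Example consistent with the Arc08 baseline used elsewhere in the
# workbook: 4 cores fully utilized at 12,444 displays/hour give
# roughly a 1.157 s service time.
svc = platform_service_time(cores=4, utilization=1.0,
                            throughput_dpm=12444 / 60.0)
```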

SP1101 - Changes since SP1001
1 Upgrade platform service time queuing formula: sU/((1+kU(c-1))(1-U)), where k is the adjustable Qfactor in BI2 (default k=1)
2 Upgrade network service time queuing formula: sU/((1+nU)(1-U)), where n is the adjustable NQfactor in BJ2 (default n=1)
3 Clean up hardware display modules {raise one row so hidden tier numbers do not show}
4 Expose minimum think time in column BE {you can change minimum think time on the CPT tab to modify workflow productivity}
5 Color DPM/Client column and join with DPM column. {productivity automatically calculated using the ADJUST function}
6 Change queuing formulas in PVT tab to match CPT changes above.
7 Move Reset and Blink cells to BF2:BG2 at top of user think time columns {easier access}
8 Modified Workflow tab to separate Desktop and WTS service times {set up to include desktop service times in WTS and Web workflows}
9 Modified CPT platform service times to include desktop and WTS service times with Web servers. {allows desktop service times to be included with WTS and Server workflows}

10 Allow combination of WTS and Server service times for complex workflows. {full DC load applied to Desktop/WTS application and SS tiers}
11 Changed PVT, Hardware, and Workflow tabs to BLUE. {easier to locate these reference tabs in a design workbook}
12 Include max utilization limit above the platform Fix Node cells. {user can set peak platform utilization rates}
13 Modify PVT formulas to properly adjust productivity and maintain minimum think time during high capacity loads.
14 Move hardware tier NIC card and traffic to right of hardware tiers {cleanup and simplify display}


15 Include PVT workflow on the Workflow tab {make it easy to copy the row and paste values to include a custom PVT workflow}
16 Reorganize platform NIC bandwidth and FIX Node location for better user interface.
17 Include NIC traffic with conditional statement {RED when traffic is over 50% of bandwidth}
18 NIC traffic setting - server NIC card is either WEB or DBMS. Total platform traffic shown white over black background.
19 Include File Share NIC traffic below DBMS server (file share nodes set in Fix Nodes column)
20 Include test setting in Design tab (CK3) to remove default user productivity (6,10) {all workflows adjusted to minimum think time}
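The two SP1101 queuing upgrades (items 1 and 2) can be written out directly; s is the component service time, U its utilization, c the platform core count, and k/n the adjustable Qfactor/NQfactor cells (defaults of 1 per the notes). Function names are ours:

```python
def platform_wait_time(s, U, c, k=1.0):
    """SP1101 platform queue time: s*U / ((1 + k*U*(c-1)) * (1 - U)).
    With c=1 this reduces to the classic single-server wait
    s*U/(1-U); additional cores shrink the wait."""
    return s * U / ((1.0 + k * U * (c - 1)) * (1.0 - U))

def network_wait_time(s, U, n=1.0):
    """SP1101 network queue time: s*U / ((1 + n*U) * (1 - U)),
    n being the adjustable NQfactor."""
    return s * U / ((1.0 + n * U) * (1.0 - U))

# Response time is service time plus queue time, e.g. a 1.0 s
# service on a 4-core tier at 50% utilization:
response = 1.0 + platform_wait_time(1.0, 0.5, 4)  # 1.0 + 0.4 = 1.4
```

Both formulas blow up as U approaches 1, which is consistent with the workbook adding servers when utilization exceeds 98%.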

SP1201 - Changes since SP1101
1 Rename "CPT2006" tab to "Design" tab {more descriptive and easier to reference}
2 Introduce a new slim (3 row) hardware tier design {more consolidated display environment}
3 Expand Design tab to support up to 10 optional hardware tiers {more adaptive data center configurations}
4 Allow platform tier name changes {custom platform tier naming convention - 3-4 letter abbreviation: full name}
5 Add a new software install section {software components for each workflow can be installed on the platform tier of choice}
6 Include DEFAULT software install option {simplify install requirements by providing a single default installation on the LAN row}
7 DEFAULT software selections (all rows) turn RED on an unavailable platform assignment {platform name changes may require updating DEFAULT platform assignments}
8 Include workflow software component loads profile on Workflow tab {exposes processing distribution profile across the component software stack}
9 Automate PVT workflow to target workflow translation using the PVT Workflow Platform Load Profile {estimate workflow component software service times from total processing time}

10 Include workflow performance adjustments for selected DBMS {adjust performance based on database selection}
11 Assign and adjust Arc08 software service times based on data source selection {File data source assigns query SDE/DBMS processing loads to the application}
12 Assign Arc08 platform service times based on Install {install software on platforms}
13 Include workflow performance adjustments for virtual server and SOC deployment on Linux and Solaris platforms {performance estimates based on server platform environment}
14 Include optional percentage platform load adjustments that apply to those in 13 {ability to customize for specific client environments}
15 NIC card traffic adjustment for Web server tier {provides option to use client traffic when configuring a middle tier Web server}
16 Introduce test vs live switch in Design tab cell F1. {with the "test" selection, all workflows will ADJUST to their minimum think time}
17 Include workflow short name label option in column A {option to include short names (before the "_") in the Workflow Performance Summary chart}
18 Change name of PVT tab to Test tab.
19 Include new Favorites tab to identify workflow and platform favorites used in the design {simplify design workflow and platform selection}
20 Include workflow definitions on Workflow tab {provided in Workflow column Y - a way to document logic for workflow performance targets}
21 Provide option to use Standard Workflows or Favorite workflows in Design cell B4 {let user select lookup list for workflow selection}
22 Workflow definitions included with workflow selection on Favorites tab {improve useful information on Favorites tab}

SP0201 - Changes since SP1201
1 Include percent network capacity in column R on the Display tab {network utilization visibility in standard design view}
2 Include ArcGIS Desktop Light Dynamic and Medium Dynamic ESRI Standard Workflows {feedback from visit with CenterPoint Energy}
3 Move Data Source column to Software Configuration module (column R) {make room for new services column}
4 Add a column for peak concurrent services (column E) {allows configuration of client, services, and complex mixed workflows on the Design tab}
5 Add row for total clients and services (row 20) {used for user needs entry validation}
6 Include minimum and high-availability platform selection (column I below each platform tier - default above top tier)
7 Expand memory calculations to accommodate any combination of platform selections.
8 Include Web server total licensed cores (above platform tier) based on installation of ADF, SOC, and SDE software {identify licensed cores for the configured production environment}
9 Provide 4 spreadsheet group selections (group 1 for 5-tier designs and groups 2-4 for 10-tier designs) {most designs use fewer than 5 tiers}

Note: Column group 3 or 4 must be open to move or copy all graphic references to a new sheet.
10 Allow direct service time entry in Test tab column E for each platform tier {overrides throughput and utilization values}.
11 Identify ArcGIS Explorer 500 Client standard workflow (show client loads only) {separate client shows proper user response from local cache data source}
12 Identify AGS9.3 Globe Service as a separate service - combine with desktop client to support full workflow.
13 Identify AGS9.3 Mobile ADF Client as a standard workflow (show client loads only) {separate client shows proper user response from local cache data source}
14 Identify AGS9.3 Mobile ADF Service as a separate synchronization service - no desktop workflow loads (combine with Mobile ADF client for full composite workflow)
15 Show pink background in platform selection cell with virtual server platform selection


SP0401 - Changes since SP0301
1 Arrange platform list by year on the Hardware tab {simplify platform selection lookup}
2 Include platform specifications on Favorites tab {expand information available on Favorites tab}
3 Include new 2009 Server platforms on Hardware tab {significant Intel performance improvements}

SP0501 - Changes since SP0201

Design Tab

1 SDE can be assigned to the SOC container machine without increasing memory recommendations.
2 NIC bandwidth turns RED if traffic is greater than 50 percent of network bandwidth {highlight potential NIC performance problems}
3 Show pink background in platform selection cell when service time is adjusted by virtual or UNIX machine selection.
4 Include VMware, Hyper-V, and Xen virtual server selection options.
5 Include platform tier operating system (Windows, Linux, Solaris) install in column N
6 Adjust SOC platform service time when a Solaris or Linux platform is selected. {SOC ArcObjects with MainWin are at least 2 times slower on Solaris and Linux platforms}
7 Peak users per node is reduced based on platform rollover settings.
8 Change throughput title in Design 3D:3E to DPM/TPM {CPT Displays Per Minute (DPM) is the same as test Transactions Per Minute (TPM)}
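Item 2's conditional format is easy to mirror outside the spreadsheet; a small sketch with the threshold from the note above (function name ours):

```python
def nic_status(traffic_mbps, bandwidth_mbps):
    """Flag a NIC RED when traffic exceeds 50% of its bandwidth,
    mirroring the Design tab's conditional format."""
    return "RED" if traffic_mbps > 0.5 * bandwidth_mbps else "OK"

# A 1000 Mbps NIC carrying 600 Mbps is flagged; 400 Mbps is not.
assert nic_status(600, 1000) == "RED"
assert nic_status(400, 1000) == "OK"
```

The strict "greater than" comparison matches the wording of the note; traffic at exactly 50 percent is not flagged.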

Workflow Tab

9 Include Standard ESRI Workflows for ADF and REST clients with mixed dynamic and map cache data sources {workflows with mixed data sources are more popular}

Hardware Tab

10 Arrange platform list by year on the Hardware tab {simplify platform selection lookup}
11 Include new 2009 Server platforms on Hardware tab {significant Intel performance improvements}
12 Reorganize platform list starting with current year on top {simplify platform selection}
13 Arrange Intel and AMD platforms in separate platform categories {simplify platform selection}
14 Provide separate desktop and server lookup lists {simplify platform selection}
15 Include area to define favorite platform candidates (exposed at top of server lookup list and bottom of desktop lookup list) {simplify platform selection}

Favorites Tab

16 Include platform specifications on Favorites tab {expand information available on Favorites tab}

Relative Performance Sizing (calculated from Svc Time)

Model Number                                      SRint2006  Core  Per Core  Svc Time  DPH     Relative Capacity
Intel Core 2 Duo T7700 2 core (1 chip) 2400 MHz   27.0       2     13.50     1.500     4,800
Arc08 4 core (2 chip) Baseline                    70.0       4     17.50     1.157     12,444  2.592593
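The table above scales the baseline service time by per-core SPECint_Rate2006, then converts service time to displays per hour. Reproducing its arithmetic (helper names are ours):

```python
def scaled_service_time(base_svc, base_spec_per_core, target_spec_per_core):
    """Service time scales inversely with per-core SPECint_Rate2006:
    a faster core finishes the same display sooner."""
    return base_svc * base_spec_per_core / target_spec_per_core

def displays_per_hour(svc_time, cores):
    """Peak DPH with every core busy: 3600 s/hour divided by the
    per-display service time, times the core count."""
    return 3600.0 / svc_time * cores

# T7700 row: 1.500 s at 13.50 SPEC/core, 2 cores -> 4,800 DPH.
# Scaling to the Arc08 baseline (17.50 SPEC/core, 4 cores)
# reproduces the table's 1.157 s and roughly 12,444 DPH,
# i.e. about 2.59x the T7700's capacity.
arc08_svc = scaled_service_time(1.500, 13.50, 17.50)
arc08_dph = displays_per_hour(arc08_svc, 4)
```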

Columns: (1) Hardware Platform, (2) # of Cores, (3) Cores per Chip, (4) SPECint_Rate2000, (5) Per Core, (6) SPECint_Rate2006, (7) Per Core

==== Desktop Candidates ====
==== 2009 Desktop Platforms ====
Intel Core i7-920 4 core (1 chip) 2667 MHz  4  4  214.2  53.6  102.0  25.5
Intel Core i7-940 4 core (1 chip) 2933 MHz  4  4  226.8  56.7  108.0  27.0
Intel Core i7-965 Extreme 4 core (1 chip) 3200 MHz  4  4  245.7  61.4  117.0  29.3
Intel Core i7-965 Extreme 4 core (1 chip) 3733 MHz  4  4  256.2  64.1  122.0  30.5
Xeon W5580 8 core (2 chip) 3200 MHz  8  4  495.6  62.0  236.0  29.5
==== 2008 Desktop Platforms ====
Intel Core 2 Duo 2 core (1 chip) 2333 MHz  2  2  50.2  25.1  23.9  12.0
Intel Core 2 Duo T7700 2 core (1 chip) 2400 MHz  2  2  56.7  28.4  27.0  13.5
Intel Core 2 Duo 2 core (1 chip) 2600 MHz  2  2  66.2  33.1  31.5  15.8
Intel Core 2 Duo 2 core (1 chip) 3000 MHz  2  2  70.8  35.4  33.7  16.9
Intel Core 2 Duo 2 core (1 chip) 3166 MHz  2  2  80.4  40.2  38.3  19.2
Intel Core 2 Quad Q9650 4 core (1 chip) 3000 MHz  4  4  167.2  41.8  79.6  19.9
Intel Core 2 Extreme QX9770 4 core (1 chip) 3000 MHz  4  4  167.2  41.8  79.6  19.9
==== Favorite Platform Candidates ====
Intel Core i7-940 4 core (1 chip) 2933 MHz  4  4  226.8  56.7  108.0  27.0
Xeon X5570 8 core (2 chip) 2933 MHz  8  4  487.2  60.9  232.0  29.0
==== Intel Windows Server Candidates ====
==== 2009 Server Platforms ====
Xeon X5520 4 core (1 chip) 2267 MHz  4  4  196.4  49.1  93.5  23.4
Xeon X5520 8 core (2 chip) 2267 MHz  8  4  392.7  49.1  187.0  23.4
Xeon X5530 4 core (1 chip) 2400 MHz  4  4  203.7  50.9  97.0  24.3
Xeon X5530 8 core (2 chip) 2400 MHz  8  4  407.4  50.9  194.0  24.3
Xeon X5540 4 core (1 chip) 2533 MHz  4  4  211.1  52.8  100.5  25.1
Xeon X5540 8 core (2 chip) 2533 MHz  8  4  422.1  52.8  201.0  25.1
Xeon X5550 4 core (1 chip) 2667 MHz  4  4  237.3  59.3  113.0  28.3
Xeon X5550 8 core (2 chip) 2667 MHz  8  4  474.6  59.3  226.0  28.3
Xeon X5560 4 core (1 chip) 2800 MHz  4  4  239.4  59.9  114.0  28.5
Xeon X5560 8 core (2 chip) 2800 MHz  8  4  478.8  59.9  228.0  28.5
Xeon X5570 4 core (1 chip) 2933 MHz  4  4  243.6  60.9  116.0  29.0
Xeon X5570 8 core (2 chip) 2933 MHz  8  4  487.2  60.9  232.0  29.0
==== 2008 Server Platforms ====
Xeon X5270 2 core (1 chip) 3500(6) MHz  2  2  94.9  47.5  45.2  22.6
Xeon X5270 4 core (2 chip) 3500(6) MHz  4  2  179.1  44.8  85.3  21.3
Xeon E5440 8 core (2 chip) 2833(12) MHz  8  4  224.7  28.1  107.0  13.4
Xeon X5450 4 core (1 chip) 3000(12) MHz  4  4  122.4  30.6  58.3  14.6
Xeon X5450 8 core (2 chip) 3000(12) MHz  8  4  226.8  28.4  108.0  13.5
Xeon X5460 4 core (1 chip) 3166(12) MHz  4  4  130.4  32.6  62.1  15.5
Xeon X5460 8 core (2 chip) 3166(12) MHz  8  4  239.4  29.9  114.0  14.3
Xeon X5470 4 core (1 chip) 3333(12) MHz  4  4  171.4  42.8  81.6  20.4
Xeon X5470 8 core (2 chip) 3333(12) MHz  8  4  291.9  36.5  139.0  17.4

SPEC Web Site


==== 2007 Server Platforms ====
Xeon X5260 2 core (1 chip) 3333(6) MHz  2  2  77.9  39.0  37.1  18.6
Xeon X5260 4 core (2 chip) 3333(6) MHz  4  2  147.2  36.8  70.1  17.5
Xeon E5310 4 core (1 chip) 1600(8) MHz  4  4  66.5  16.6  27.7  6.9
Xeon E5310 8 core (2 chip) 1600(8) MHz  8  4  133.0  16.6  55.4  6.9
Xeon E5320 4 core (1 chip) 1866(8) MHz  4  4  74.0  18.5  30.2  7.6
Xeon E5320 8 core (2 chip) 1866(8) MHz  8  4  148.0  18.5  60.4  7.6
Xeon E5335 4 core (1 chip) 2000(8) MHz  4  4  82.5  20.6  36.0  9.0
Xeon E5335 8 core (2 chip) 2000(8) MHz  8  4  165.0  20.6  69.0  8.6
Xeon E5345 4 core (1 chip) 2333(8) MHz  4  4  92.5  23.1  37.5  9.4
Xeon E5345 8 core (2 chip) 2333(8) MHz  8  4  185.0  23.1  75.0  9.4
Xeon X5355 2 core (0.5 chip) 2666(8) MHz  2  4  46.6  23.3  22.2  11.1
Xeon X5355 4 core (1 chip) 2666(8) MHz  4  4  109.0  27.3  44.4  11.1
Xeon X5355 6 core (1.5 chip) 2666(8) MHz  6  4  200.0  33.3  66.6  11.1
Xeon X5355 8 core (2 chip) 2666(8) MHz  8  4  200.0  25.0  80.9  10.1
Xeon X5365 4 core (1 chip) 3000(8) MHz  4  4  121.0  30.2  57.6  14.4
Xeon X5365 8 core (2 chip) 3000(8) MHz  8  4  206.0  25.8  98.1  12.3
Xeon X7350 16 core (4 chip) 2933(8) MHz  16  4  373.8  23.4  178.0  11.1
==== 2006 Server Platforms ====
Xeon 5110 2 core (1 chip) 1600(4) MHz  2  2  35.7  17.8  17.0  8.5
Xeon 5110 4 core (2 chip) 1600(4) MHz  4  2  71.3  17.8  32.9  8.2
Xeon 5120 2 core (1 chip) 1800(2) MHz  2  2  37.5  18.8  19.8  9.9
Xeon 5120 4 core (2 chip) 1866(4) MHz  4  2  81.6  20.4  36.3  9.1
Xeon 5130 2 core (1 chip) 2000(4) MHz  2  2  44.3  22.2  21.1  10.5
Xeon 5130 4 core (2 chip) 2000(4) MHz  4  2  88.6  22.2  41.0  10.3
Xeon 3050 2 core (1 chip) 2133(4) MHz  2  2  49.1  24.6  23.4  11.7
Xeon 5140 2 core (1 chip) 2333(4) MHz  2  2  50.0  25.0  23.8  11.9
Xeon 5140 4 core (2 chip) 2333(4) MHz  4  2  100.0  25.0  45.2  11.3
Xeon E7210 2 core (1 chip) 2400(4) MHz  2  2  54.2  27.1  27.1  13.6
Xeon 5150 2 core (1 chip) 2666(4) MHz  2  2  55.0  27.5  26.2  13.1
Xeon 5150 4 core (2 chip) 2666(4) MHz  4  2  110.0  27.5  52.4  13.1
Xeon 3070 2 core (1 chip) 2667(4) MHz  2  2  59.3  29.7  29.1  14.6
Xeon E7220 2 core (1 chip) 2933(8) MHz  2  2  63.5  31.8  29.7  14.9
Xeon 5160 2 core (1 chip) 3000(4) MHz  2  2  60.0  30.0  28.6  14.3
Xeon 5160 4 core (2 chip) 3000(4) MHz  4  2  120.0  30.0  53.7  13.4
==== 2005 Server Platforms ====
Xeon 2 core (1 chip) 3200 MHz  2  2  36.7  18.4  18.9  9.5
Xeon 4 core (2 chip) 3200 MHz  4  2  73.2  18.3  34.9  8.7
Xeon 2 core (1 chip) 3400 MHz  2  2  38.7  19.4  19.8  9.9
Xeon 2 core (1 chip) 3600 MHz  2  2  40.7  20.4  20.6  10.3
Xeon 2 core (1 chip) 3730 MHz  2  2  43.7  21.9  20.8  10.4
Xeon 4 core (2 chip) 3730 MHz  4  2  81.0  20.3  38.6  9.6
==== 2004 Server Platforms ====
Xeon MP 2 core (2 chip) 3000 MHz  2  1  35.1  17.6  16.7  8.4
Xeon MP 4 core (4 chip) 3000 MHz  4  1  61.0  15.3  29.0  7.3
Xeon MP 8 core (8 chip) 3000 MHz  8  1  107.0  13.4  51.0  6.4
Xeon MP 4 core (4 chip) 3330 MHz  4  1  72.2  18.1  34.4  8.6
Xeon 1 core (1 chip) 3400 MHz  1  1  20.7  20.7  9.9  9.9
Xeon 2 core (2 chip) 3400 MHz  2  1  38.9  19.5  18.5  9.3
Xeon 1 core (1 chip) 3600 MHz  1  1  21.8  21.8  10.4  10.4


Xeon 2 core (2 chip) 3600 MHz  2  1  40.6  20.3  19.3  9.7
Xeon 1 core (1 chip) 3800 MHz  1  1  23.1  23.1  11.0  11.0
Xeon 2 core (2 chip) 3800 MHz  2  1  42.9  21.5  20.4  10.2
==== 2003 Server Platforms ====
Xeon MP 1 core (1 chip) 2000 MHz  1  1  9.8  9.8  4.7  4.7
Xeon MP 2 core (2 chip) 2000 MHz  2  1  18.3  9.2  8.7  4.4
Xeon MP 4 core (4 chip) 2000 MHz  4  1  34.7  8.7  16.5  4.1
Xeon MP 6 core (6 chip) 2000 MHz  6  1  42.2  7.0  20.1  3.3
Xeon MP 8 core (8 chip) 2000 MHz  8  1  49.6  6.2  23.6  3.0
Xeon 2 core (1 chip) 2667(1) MHz  2  2  26.8  13.4  12.8  6.4
Xeon 1 core (1 chip) 2800 MHz  1  1  17.7  17.7  8.4  8.4
PentiumD 2 core (1 chip) 2800 MHz  2  2  32.5  16.3  17.1  8.6
Xeon 2 core (2 chip) 2800 MHz  2  1  31.4  15.7  15.0  7.5
Xeon 2 core (1 chip) 2800 MHz  2  2  32.5  16.3  17.1  8.6
Xeon 4 core (2 chip) 2800 MHz  4  2  59.2  14.8  28.2  7.0
PentiumD 2 core (1 chip) 3000 MHz  2  2  34.6  17.3  18.0  9.0
Xeon 1 core (1 chip) 3200 MHz  1  1  19.7  19.7  9.4  9.4
Xeon 2 core (2 chip) 3200 MHz  2  1  37.1  18.6  17.7  8.8
==== AMD Windows Server Candidates ====
==== 2009 Server Platforms ====
AMD 2380 4 core (1 chip) 2500 MHz  8  2  210.0  26.3  105.0  13.1
AMD 2382 4 core (1 chip) 2600 MHz  8  2  220.0  27.5  110.0  13.8
AMD 2384 4 core (1 chip) 2700 MHz  4  2  114.2  28.6  57.1  14.3
AMD 2384 8 core (2 chip) 2700 MHz  8  2  224.0  28.0  112.0  14.0
AMD 2386 8 core (2 chip) 2800 MHz  8  2  232.0  29.0  116.0  14.5
AMD 2389 8 core (2 chip) 2900 MHz  8  2  234.0  29.3  117.0  14.6
==== 2008 Server Platforms ====
AMD 2222 2 core (1 chip) 3000 MHz  2  2  55.4  27.7  27.7  13.9
AMD 2222 4 core (2 chip) 3000 MHz  4  2  110.0  27.5  55.0  13.8
AMD 2384 4 core (1 chip) 2700 MHz  4  2  114.2  28.6  57.1  14.3
AMD 2384 8 core (2 chip) 2700 MHz  8  2  224.0  28.0  112.0  14.0
==== 2007 Server Platforms ====
AMD 8356 8 core (2 chips) 2300 MHz  8  4  186.5  23.3  88.8  11.1
AMD 8356 16 core (4 chips) 2300 MHz  16  4  336.0  21.0  160  10
AMD 2224 2 core (1 chip) 3200 MHz  2  2  56.7  28.4  27.0  13.5
AMD 2224 4 core (2 chip) 3200 MHz  4  2  118.0  29.5  56.2  14.1
==== 2006 Server Platforms ====
AMD 2 core (1 chip) 2800 MHz  2  2  45.2  22.6  21.5  10.8
AMD 4 core (2 chip) 2800 MHz  4  2  90.3  22.6  43.0  10.8
AMD 1 core (1 chip) 3000 MHz  1  1  23.5  23.5  11.2  11.2
AMD 2 core (2 chip) 3000 MHz  2  1  46.9  23.5  22.3  11.2
AMD 8216 8 core (4 chips) 2400 MHz  8  2  168.2  21.0  80.1  10
AMD 8218 8 core (4 chips) 2600 MHz  8  2  181.2  22.7  86.3  10.8
AMD 8220 8 core (4 chips) 2800 MHz  8  2  186.9  23.4  89  11.1
AMD 8220SE 8 core (4 chips) 2800 MHz  8  2  183.1  22.9  87.2  10.9
AMD 8222SE 4 core (2 chips) 3000 MHz  4  2  105.8  26.5  50.4  12.6
AMD 8222SE 8 core (4 chips) 3000 MHz  8  2  201.8  25.2  96.1  12
==== 2005 Server Platforms ====
AMD 2 core (1 chip) 2400 MHz  2  2  37.2  18.6  17.7  8.9
AMD 4 core (2 chip) 2400 MHz  4  2  75.1  18.8  35.8  8.9


AMD 8 core (4 chip) 2400 MHz  8  2  144.0  18.0  68.6  8.6
AMD 1 core (1 chip) 2600 MHz  1  1  20.1  20.1  9.5  9.5
AMD 2 core (2 chip) 2600 MHz  2  1  40.1  20.1  19.1  9.5
AMD 4 core (4 chip) 2600 MHz  4  1  77.5  19.4  36.9  9.2
AMD 1 core (1 chip) 2800 MHz  1  1  22.3  22.3  10.6  10.6
AMD 2 core (2 chip) 2800 MHz  2  1  44.5  22.3  21.2  10.6
AMD 4 core (4 chip) 2800 MHz  4  1  84.7  21.2  40.3  10.1
AMD 2 core (1 chip) 2600 MHz  2  2  40.7  20.4  19.4  9.7
AMD 4 core (2 chip) 2600 MHz  4  2  80.9  20.2  38.5  9.6
==== Performance Baselines ====
Arc08 4 core (2 chip) Baseline  4  2  147.0  36.8  70.0  17.5
Arc07 4 core (2 chip) Baseline  4  2  117.6  29.4  56.0  14.0
Arc06 2 core (2 chip) Baseline  2  1  44.0  22.0  21.0  10.5
Arc04/05 2 core (2 chip) Baseline  2  1  36.0  18.0  17.1  8.6
Arc03 2 core (2 chip) Baseline  2  1  18.0  9.0  8.6  4.3
==== Test Platforms ====
HP Workstation 4100 1 core (1 chip) 2800 MHz  1  1  12.3  12.3  5.9  5.9
HP DL380 Intel Xeon 2 core (2 chip) 3200 MHz  2  1  28.2  14.1  13.4  6.7
Dell Xeon 1 core (1 chip) 3400 MHz  1  1  20.7  20.7  9.9  9.9
Dell Xeon X5450 2 core (0.5 chip) 3000(12) MHz  2  4  57.2  28.6  27.3  13.6
Dell Xeon X5450 4 core (1 chip) 3000(12) MHz  4  4  114.5  28.6  54.5  13.6
Dell Xeon X5450 8 core (2 chip) 3000(12) MHz  8  4  228.9  28.6  109.0  13.6
==== Sun Candidates ====
Sun EM4000 8 core (4 chip) 2150 MHz  8  2  144.1  18.0  68.6  8.6
Sun EM5000 16 core (8 chip) 2150 MHz  16  2  281.4  17.6  134.0  8.4
Sun EM8000 32 core (16 chip) 2400 MHz  32  2  625.8  19.6  298.0  9.3
Sun EM9000 48 core (24 chip) 2400 MHz  48  2  893.6  18.6  425.5  8.9
Sun EM9000 64 core (32 chip) 2400 MHz  64  2  1,161.3  18.1  553.0  8.6
Sun EM9000 96 core (48 chip) 2400 MHz  96  2  1,747.2  18.2  832.0  8.7
Sun EM9000 124 core (64 chip) 2400 MHz  128  2  2,333.1  18.2  1,111.0  8.7
Sun Fire V490 2 core (1 chip) 2100 MHz  2  2  37.6  18.8  17.9  9.0
Sun Fire V490 4 core (2 chip) 2100 MHz  4  2  75.3  18.8  35.9  9.0
Sun Fire V890 8 core (4 chip) 2100 MHz  8  2  150.6  18.8  71.7  9.0
Sun Fire V890 16 core (8 chip) 2100 MHz  16  2  296.1  18.5  141.0  8.8
Sun Fire V490 2 core (1 chip) 1800 MHz  2  2  26.0  13.0  12.4  6.2
Sun Fire V490 4 core (2 chip) 1800 MHz  4  2  52.0  13.0  24.8  6.2
Sun Fire V890 8 core (4 chip) 1800 MHz  8  2  104.0  13.0  49.5  6.2
Sun Fire V890 16 core (8 chip) 1800 MHz  16  2  200.0  12.5  95.2  6.0
Sun Fire V880 8 core (8 chip) 1050 MHz  8  1  49.1  6.1  23.4  2.9
Sun Fire 280R 2 core (2 chip) 1015 MHz  2  1  11.7  5.9  5.6  2.8
Sun Fire V890 8 core (4 chip) 1500 MHz  8  2  87.4  10.9  41.6  5.2
Sun Fire V480 4 core (4 chip) 1200 MHz  4  1  31.7  7.9  15.1  3.8
Sun Fire V480 8 core (8 chip) 1200 MHz  4  1  31.7  7.9  15.1  3.8
Sun Fire V480 16 core (16 chip) 1200 MHz  4  1  31.7  7.9  15.1  3.8
Sun Fire V440 2 core (2 chip) 1600 MHz  2  1  19.5  9.8  9.3  4.6
Sun Fire V440 4 core (4 chip) 1600 MHz  4  1  38.7  9.7  18.4  4.6
Sun Fire V490 2 core (1 chip) 1350 MHz  2  2  16.5  8.3  7.9  3.9
Sun Fire V490 4 core (2 chip) 1350 MHz  4  2  32.9  8.2  15.7  3.9
Sun Fire V890 8 core (4 chip) 1350 MHz  8  2  65.2  8.2  31.0  3.9
Sun Fire V890 16 core (8 chip) 1350 MHz  16  2  131.0  8.2  62.4  3.9


Platform | Cores | Cores/Chip | SRint2006 | /core | SRint2000 | /core
Sun Fire E4900 8 core (4 chip) 1500 MHz | 8 | 2 | 86.7 | 10.8 | 41.3 | 5.2
Sun Fire E4900 16 core (8 chip) 1500 MHz | 16 | 2 | 173.0 | 10.8 | 82.4 | 5.1
Sun Fire E4900 24 core (12 chip) 1500 MHz | 24 | 2 | 257.0 | 10.7 | 122.4 | 5.1
Sun Fire E4900 48 core (24 chip) 1500 MHz | 48 | 2 | 492.0 | 10.3 | 234.3 | 4.9
==== HP Candidates ======
Itanium 2 core (2 chip) 1600 MHz | 3 | 1 | 35.5 | 11.8 | 16.9 | 5.6
Itanium 4 core (4 chip) 1600 MHz | 4 | 1 | 72.5 | 18.1 | 34.5 | 8.6
Itanium 8 core (8 chip) 1600 MHz | 8 | 1 | 134.0 | 16.8 | 63.8 | 8.0
Itanium 8 core (4 chip) 1600 MHz | 8 | 1 | 183.1 | 22.9 | 87.2 | 10.9
Itanium 10 core (10 chip) 1600 MHz | 10 | 1 | 167.0 | 16.7 | 79.5 | 8.0
Itanium 12 core (12 chip) 1600 MHz | 12 | 1 | 200.0 | 16.7 | 95.2 | 7.9
Itanium 16 core (16 chip) 1600 MHz | 16 | 1 | 266.0 | 16.6 | 126.7 | 7.9
Itanium 24 core (24 chip) 1600 MHz | 24 | 1 | 410.0 | 17.1 | 195.2 | 8.1
Itanium 28 core (28 chip) 1600 MHz | 28 | 1 | 482.0 | 17.2 | 229.5 | 8.2
Itanium 32 core (32 chip) 1600 MHz | 32 | 1 | 554.0 | 17.3 | 263.8 | 8.2
PA-8800 4 core (4 chip) 875 MHz | 4 | 1 | 27.0 | 6.8 | 12.9 | 3.2
PA-8800 8 core (4 chip) 1000 MHz | 8 | 2 | 73.2 | 9.2 | 34.9 | 4.4
PA-8800 16 core (8 chip) 1000 MHz | 16 | 2 | 141.0 | 8.8 | 67.1 | 4.2
==== IBM Candidates ======
IBM p6 1 core (0.5 chip) 4700 MHz virtual | 1 | 2 | 55.9 | 55.9 | 26.6 | 26.6
IBM p6 2 core (1 chip) 4700 MHz | 2 | 2 | 111.7 | 55.9 | 53.2 | 26.6
IBM p6 4 core (2 chip) 4700 MHz | 4 | 2 | 222.6 | 55.7 | 106.0 | 26.5
IBM p6 8 core (4 chip) 4700 MHz | 8 | 2 | 432.6 | 54.1 | 206.0 | 25.8
IBM p6 16 core (8 chip) 4700 MHz | 16 | 2 | 861.0 | 53.8 | 410.0 | 25.6
IBM p550 4 core (2 chip) 2100 MHz | 4 | 2 | 90.0 | 22.5 | 42.9 | 10.7
IBM p570 4 core (2 chip) 1900 MHz | 4 | 2 | 76.3 | 19.1 | 36.3 | 9.1
IBM p570 8 core (4 chip) 1900 MHz | 8 | 2 | 148.9 | 18.6 | 70.9 | 8.9
IBM p570 16 core (8 chip) 1900 MHz | 16 | 2 | 294.0 | 18.4 | 140.0 | 8.8
IBM p570 32 core (16 chip) 1900 MHz | 32 | 2 | 554.0 | 17.3 | 263.8 | 8.2
IBM p570 40 core (20 chip) 1900 MHz | 40 | 2 | 702.3 | 17.6 | 334.4 | 8.4
IBM p570 48 core (24 chip) 1900 MHz | 48 | 2 | 850.5 | 17.7 | 405.0 | 8.4
IBM p570 56 core (28 chip) 1900 MHz | 56 | 2 | 998.8 | 17.8 | 475.6 | 8.5
IBM p570 64 core (32 chip) 1900 MHz | 64 | 2 | 1,147.0 | 17.9 | 546.2 | 8.5
IBM p575 8 core (8 chip) 2200 MHz | 8 | 1 | 200.0 | 25.0 | 95.2 | 11.9
IBM p595 64 core (32 chip) 2300 MHz | 64 | 2 | 1,513.0 | 23.6 | 720.5 | 11.3
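The per-core columns above are simply the SPECrate totals divided by the core count, and the ratio between the SRint2006 and SRint2000 columns gives the CPT2000-to-CPT2006 conversion factor (about 2.1 for most platforms) used elsewhere in the workbook. A minimal sketch of that arithmetic; the function names are illustrative, not from the workbook:

```python
def per_core(total: float, cores: int) -> float:
    """Per-core benchmark throughput, as in the '/core' columns."""
    return total / cores

def conversion_factor(srint2006: float, srint2000: float) -> float:
    """CPT2000-to-CPT2006 scaling factor (about 2.1 for most rows)."""
    return srint2006 / srint2000

# Sun Fire E4900 8 core row: SRint2006 = 86.7, SRint2000 = 41.3
print(round(per_core(86.7, 8), 1))              # 10.8
print(round(conversion_factor(86.7, 41.3), 1))  # 2.1
```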


Calculate from Capacity Translation (2.1)
Svc Time | DPM | DPH | CPT2000 | CPT2006
0.14 | 867 | 52,000 | 9.8 | 4.7
0.11 | 2,247 | 134,815 | 0.0 |

[Platform lookup table residue: the Total Processor Chips, Platform vendor (Intel, AMD, Sun SPARC, Itanium, PA-RISC, IBM pSeries), and CPT2000-to-CPT2006 conversion factor columns (typically 2.1, with measured values such as 2.40072 and #DIV/0! entries where no CPT2000 data exists) spilled here without their matching platform-name rows, which were lost in extraction.]


Test tab - Traffic Flow Analysis (Software: Server; minimum think time = 6 sec; DEFAULT)
  Peak Users ........ 100.0 (2.78 Users)
  Productivity ...... 6.00 DPM (0.17 DPM)
  Throughput ........ 16.70 DPM (600 TPH)

Software | Nodes | Test Platform | Utilization | Test
Network | | 1000.0 Mbps | 0.1% | 0.002
Client | 1 | Dell Xeon X5450 4 core (1 chip) 3000(12) MHz | | 0.000
WTS | 1 | Dell Xeon X5450 4 core (1 chip) 3000(12) MHz | | 0.000
WAS | 1 | Dell Xeon X5450 4 core (1 chip) 3000(12) MHz | | 0.000
ADF | 1 | Dell Xeon X5450 4 core (1 chip) 3000(12) MHz | | 0.000
SOC | 1 | Dell Xeon X5450 2 core (0.5 chip) 3000(12) MHz | 50.0% | 3.593
SDE | 1 | Dell Xeon X5450 4 core (1 chip) 3000(12) MHz | | 0.000
DB | 1 | Dell Xeon X5450 4 core (1 chip) 3000(12) MHz | | 0.000

Workflow tab - Software Service Times (Select Category: Workflow; Arc08 baseline)
Columns: Wkflow Name (Name Workflow as Required; Copy/Insert new workflows as needed) | Platform | Chatter | Client Traffic (Mbpd) | Design Model Metrics
Test Workflow | Desktop | 10
Test Workflow | Server | 10 | 2.000 Mbpd | 0.000

Cell C7 comment (dave): Network Bandwidth

Productivity
  Desktop ... 10
  Server .... 6
  WTS


Traffic Flow Analysis
  Client Traffic ..... 2.000 Mbpd (Qfac 1, NQfac 1)
  Response ........... 5.99 sec
  Max DPM/client ..... 0.334
  Think time ......... 353.3 sec (Blink 6.000)
  DB Traffic ......... 5.000 Mbpd
  System Capacity = 2,004 TPH (1,002 TPH)

Platform Service time (Arc08 | Capacity | Qtime | Cores | SRint2006):
  Network | 0.002 | 1000.0 Mbps | 0.000 | 10.000 | 0.000
  Client  | 0.000 | | 0.000 | Cores = 4 | 54.5 (13.6/Core)
  WTS     | 0.000 | | 0.000 | Cores = 4 | 54.5 (13.6/Core)
  WAS     | 0.000 | | 0.000 | Cores = 4 | 54.5 (13.6/Core)
  ADF     | 0.000 | | 0.000 | Cores = 4 | 54.5 (13.6/Core)
  SOC     | 2.797 | 33 DPM | 2.395 | Cores = 2 | 27.3 (13.6/Core)
  SDE     | 0.000 | | 0.000 | Cores = 4 | 54.5 (13.6/Core)
  DB      | 0.000 | | 0.000 | Cores = 4 | 54.5 (13.6/Core)

  Service time = function (throughput, utilization)
  Queue time = function (service time, utilization)
  Response time = function (service time, queue time)
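The queue-time relationships above follow the SP1101 formulas from the change log: platform tiers use sU/((1+kU(c-1))(1-U)) and the network uses sU/((1+nU)(1-U)), with response time the sum of service and queue time. A minimal sketch of those formulas; k and n are placeholder correction constants here, since the workbook does not publish their values:

```python
def platform_queue_time(s: float, u: float, cores: int, k: float) -> float:
    """SP1101 platform tier queue time: s*U / ((1 + k*U*(c-1)) * (1-U))."""
    return s * u / ((1 + k * u * (cores - 1)) * (1 - u))

def network_queue_time(s: float, u: float, n: float) -> float:
    """SP1101 network queue time: s*U / ((1 + n*U) * (1-U))."""
    return s * u / ((1 + n * u) * (1 - u))

def response_time(service: float, queue: float) -> float:
    """Response time = service time + queue time."""
    return service + queue

# Single-core tier at 50% utilization: queue time equals service time.
print(platform_queue_time(1.0, 0.5, cores=1, k=1.0))  # 1.0
```

Note that adding cores shrinks the queue term (the kU(c-1) factor in the denominator), which is why the multi-core tiers above show much smaller Qtime than a single-core tier would at the same utilization.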

Software Service Times (Arc08 baseline, SRint06/core = 17.5)
Design Model Metrics:
  Desktop WTS | WAS | ADF | SOC | SDE | DB Traffic (Mbpd) | DBMS Data | Min Server Think Time (sec)
  0.000 | 0.000 | 0.000 | 2.797 | 0.000 | 5.000 Mbpd | 0.000 | 6

Cell G1 comment: Client Mbpd - Use ESRI Standard values if no traffic measurements are available.

Lookup lists:
  Software / Blink: Desktop 10, Server 1, 0.1, 0.01
  Productivity function: DEFAULT, ADJUST, SAVE





Workflow tab - Software Component Service Times (Arc08 baseline, SRint06/core = 17.5)
Select Category: Workflow; Name Workflow as Required (Copy/Insert customer workflows below)
Columns: Workflow | Platform | Wkflow Chatter | Client Traffic (Mbpd) | Client | WTS/RDS | WAS | ADF | SOC | SDE | DB Traffic (Mbpd) | DBMS | Min Think Time (sec) | Total | Workflow Description
Customer Workflows =============
Standard ESRI Workflows =========

ArcGIS Desktop =========
1a_ArcGIS Desktop Light Dynamic | Desktop 200 | 5.000 0.400 0.048 5.000 0.048 3 0.496 | ArcGIS Desktop light workstation clients - based on historical capacity planning performance targets
1b_ArcGIS Desktop Medium Dynamic | Desktop 200 | 5.000 0.800 0.096 10.000 0.096 3 0.992 | ArcGIS Desktop medium workstation clients - based on performance feedback from ArcGIS Desktop users (higher quality map displays)
2a_ArcGIS WTS/Citrix (vector) Light Dynamic | WTS 10 | 0.280 0.100 0.400 0.048 5.000 0.048 3 0.596 | ArcGIS Desktop light WTS clients with vector data source - based on historical capacity planning performance targets
2b_ArcGIS WTS/Citrix (vector) Medium Dynamic | WTS 10 | 0.280 0.100 0.800 0.096 10.000 0.096 3 1.092 | ArcGIS Desktop medium WTS clients with vector data source - based on performance feedback from ArcGIS Desktop users (higher quality map displays)
2c_ArcGIS WTS/Citrix (w/image) | WTS 10 | 1.000 0.100 0.400 0.048 5.000 0.048 3 0.596 | ArcGIS Desktop light WTS clients with vector and image data source - based on historical capacity planning performance targets (light displays)
ArcGIS Server Applications =========
3a_AGS9.3 ADF Light Dynamic | Server 10 | 2.000 0.100 0.096 0.288 0.048 5.000 0.048 3 0.580 | ArcGIS Server Map Viewer with light dynamic data source - based on initial testing with ArcIMS Web mapping services
3b_AGS9.3 ADF Medium Dynamic | Server 10 | 2.000 0.100 0.192 0.576 0.096 5.000 0.096 3 1.060 | ArcGIS Server Map Viewer with medium dynamic data source - based on performance feedback from ArcGIS Server customers (higher quality maps)
3c_AGS9.3 ADF light dynamic/map cache mix | Server 10 | 1.000 0.100 0.043 0.144 0.024 2.000 0.024 3 0.335 | ArcGIS Server Map Viewer with 1-5 dynamic layers and map cache base layer mix - dynamic layers <50% of light dynamic data source.
3d_AGS9.3 ADF Light Fully Cache | Server 10 | 0.500 0.100 0.080 0.008 0.001 2.000 0.001 3 0.190 | ArcGIS Server Map Viewer light with fully cached data source
ArcGIS Server Services =========
3e_AGS9.3 ADF Light Map Service | Server 10 | 2.000 0.100 0.029 0.288 0.048 5.000 0.048 3 0.513 | ArcGIS Server Map light SOAP service with Web connection
3f_AGS9.3 REST Light Dynamic Service | Server 10 | 2.000 0.100 0.029 0.288 0.048 5.000 0.048 3 0.513 | ArcGIS Server Map light REST service with Web connection
3g_AGS9.3 REST light dynamic/map cache mix | Server 10 | 1.000 0.100 0.014 0.144 0.024 2.000 0.024 3 0.306 | ArcGIS Server Map REST client with 1-5 dynamic layers and map cache base layer mix - dynamic layers <50% of light dynamic data source.
Image Services =========
4a_AGS Image Extension Light | Server 10 | 2.000 0.100 0.196 5.000 3 0.296 | Preliminary ArcGIS Server Image Extension service with simple image processing
4b_AGS Image Extension Medium | Server 10 | 2.000 0.100 0.372 5.000 3 0.472 | Preliminary ArcGIS Server Image Extension service with medium image processing
5a_AGS Explorer 500 Client | Server 10 | 0.400 3 0.400 | ArcGIS Explorer Client (local map cache data source)
5b_AGS9.3 Globe Server | Server 10 | 2.000 0.001 0.008 0.000 2.000 0.000 3 0.009 | ArcGIS Server image streaming supporting desktop clients from Globe cache.
Mobile ADF Services =========
6a_AGS9.3 Mobile ADF Client | Server 10 | 0.400 3 0.400 | ArcGIS Mobile ADF Client
6b_AGS9.3 Mobile ADF Service | Server 10 | 0.050 0.080 0.080 0.012 0.700 0.012 3 0.184 | ArcGIS Mobile ADF integration services with light vector layer synchronization.
Legacy Software =========
10a_IMS ArcMap Server (Web App) | Server 10 | 1.000 0.100 0.080 0.240 0.048 5.000 0.048 3 0.516 | Legacy ArcIMS ArcMap Web application (MXD)
10b_IMS ArcMap Server (Web Service) | Server 10 | 1.000 0.100 0.024 0.240 0.048 5.000 0.048 3 0.460 | Legacy ArcIMS ArcMap Web service (MXD)
11a_IMS Image Server (Web App) | Server 10 | 1.000 0.100 0.040 0.120 0.048 5.000 0.048 3 0.356 | Legacy ArcIMS Image Web application (AXL)
11b_IMS Image Server (Web Service) | Server 10 | 1.000 0.100 0.012 0.120 0.048 5.000 0.048 3 0.328 | Legacy ArcIMS Image Web service (AXL)
Batch Process Workflows =====
Batch 1_AGS9.3 ADF Batch Process | Server 10 | 2.000 0.100 0.192 0.576 0.096 5.000 0.096 0 1.060 | ArcGIS Server Batch Process (used as target architecture for batch reconcile and post process)
Portland Workflows ==============
DeskAI_ArcGIS Desktop | Desktop 200 | 5.000 0.400 0.048 5.000 0.048 3 0.496 | ArcGIS Desktop standard workstation clients - based on historical capacity planning performance targets
DeskAE_ArcGIS Desktop | Desktop 200 | 5.000 0.400 0.048 5.000 0.048 3 0.496 | ArcGIS Desktop standard workstation clients - based on historical capacity planning performance targets
DeskAV_ArcGIS Desktop | Desktop 200 | 5.000 0.400 0.048 5.000 0.048 3 0.496 | ArcGIS Desktop standard workstation clients - based on historical capacity planning performance targets
RemoteAI_ArcGIS WTS/Citrix (vector) | WTS 10 | 0.280 0.100 0.400 0.048 5.000 0.048 3 0.596 | ArcGIS Desktop standard WTS clients with vector data source - based on historical capacity planning performance targets
RemoteAE_ArcGIS WTS/Citrix (vector) | WTS 10 | 0.280 0.100 0.400 0.048 5.000 0.048 3 0.596 | ArcGIS Desktop standard WTS clients with vector data source - based on historical capacity planning performance targets
RemoteAV_ArcGIS WTS/Citrix (vector) | WTS 10 | 0.280 0.100 0.400 0.048 5.000 0.048 3 0.596 | ArcGIS Desktop standard WTS clients with vector data source - based on historical capacity planning performance targets
WebMap_AGS9.3 ADF Light Dynamic | Server 10 | 2.000 0.100 0.096 0.288 0.048 5.000 0.048 3 0.580 | ArcGIS Server Map Viewer with light dynamic data source - based on initial testing with ArcIMS Web mapping services
Test Workflow ======
Target Workflow | Server 10 | 2.000 0.000 0.000 0.000 0.000 2.797 0.000 5.000 0.000 6 2.797 | Target workflow from test tab
Test Workflow | Server 10 | 2.000 0.000 0.000 0.000 0.000 2.797 0.000 5.000 0.000 6 2.797 | Workflow generated from test tab
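In every row above, the Total column is the sum of the software component service times (Client, WTS/RDS, WAS, ADF, SOC, SDE, DBMS); think time is tracked separately and does not count toward Total. A minimal sketch of that check; the function name is illustrative:

```python
def total_service_time(components: list[float]) -> float:
    """Total workflow service time = sum of the software component
    service times; think time is excluded."""
    return round(sum(components), 3)

# 1a_ArcGIS Desktop Light Dynamic: Client 0.400, SDE 0.048, DBMS 0.048
print(total_service_time([0.400, 0.048, 0.048]))  # 0.496

# 3b_AGS9.3 ADF Medium Dynamic: 0.100 + 0.192 + 0.576 + 0.096 + 0.096
print(total_service_time([0.100, 0.192, 0.576, 0.096, 0.096]))  # 1.06
```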



Workflow Tab - Workflow Favorites (Copy/Insert new workflows as needed)

ArcGIS Server Standard Workflows =============
  1a_ArcGIS Desktop Light Dynamic
  2a_ArcGIS WTS/Citrix (vector) Light Dynamic
  2c_ArcGIS WTS/Citrix (w/image)
ArcGIS Server Applications =============
  3a_AGS9.3 ADF Light Dynamic
  3b_AGS9.3 ADF Medium Dynamic
  3d_AGS9.3 ADF light Fully Cache
ArcGIS Server Mapping Services ===============
  3e_AGS9.3 ADF light Map Service
  3f_AGS9.3 REST Light Dynamic Service
ArcGIS Server Image Services ================
  4a_AGS Image Extension Light
  4b_AGS Image Extension Medium
ArcGIS Server Mobile =================
  6a_AGS9.3 Mobile ADF Client

** Hardware Platform Favorites **
CY2008 Desktop Platforms ===
  Xeon MP 2 core (2 chip) 2000 MHz
CY2003 Platform =====
  Xeon MP 2 core (2 chip) 2000 MHz
CY2004 - CY2005 Platform ===============
  Xeon 2 core (2 chip) 3200 MHz
CY2006 Platforms ===============
  Xeon 2 core (2 chip) 3800 MHz
CY2007 Dual Core Platforms ===============
  Xeon E5440 8 core (2 chip) 2833(12) MHz
  Xeon X5450 4 core (1 chip) 3000(12) MHz
  Xeon 5160 2 core (1 chip) 3000(4) MHz
  Xeon 5160 4 core (2 chip) 3000(4) MHz
CY2008 Dual Core Platforms =================
  Xeon X5260 2 core (1 chip) 3333(6) MHz
  Xeon X5260 4 core (2 chip) 3333(6) MHz
CY2008 Quad Core Platforms =================
  Xeon X5460 4 core (1 chip) 3166(12) MHz
  Xeon X5460 8 core (2 chip) 3166(12) MHz
  Xeon X5470 4 core (1 chip) 3333(12) MHz
  Xeon X5470 8 core (2 chip) 3333(12) MHz
CY2009 Quad Core Platforms =================
  Xeon X5560 4 core (1 chip) 2800 MHz
  Xeon X5560 8 core (2 chip) 2800 MHz
  Xeon X5570 4 core (1 chip) 2933 MHz
  Xeon X5570 8 core (2 chip) 2933 MHz
CY2008 Test Platforms =================


  HP Workstation 4100 1 core (1 chip) 2800 MHz
  HP DL380 Intel Xeon 2 core (2 chip) 3200 MHz
  Dell Xeon 1 core (1 chip) 3400 MHz
  HP Workstation 4100 1 core (1 chip) 2800 MHz
  Dell Xeon X5450 2 core (0.5 chip) 3000(12) MHz
  Dell Xeon X5450 4 core (1 chip) 3000(12) MHz
  Dell Xeon X5450 8 core (2 chip) 3000(12) MHz


Lookups: Workflow Tab, Lab Tab


Workflow Description

ArcGIS Desktop light workstation clients - based on historical capacity planning performance targets

ArcGIS Desktop light WTS clients with vector data source - based on historical capacity planning performance targets

ArcGIS Desktop light WTS clients with vector and image data source - based on historical capacity planning performance targets (light displays)

ArcGIS Server Map Viewer with light dynamic data source - based on initial testing with ArcIMS Web mapping services

ArcGIS Server Map Viewer with medium dynamic data source - based on performance feedback from ArcGIS Server customers (higher quality maps)

ArcGIS Server Map Viewer light with fully cached data source

ArcGIS Server Map light SOAP service with Web connection

ArcGIS Server Map light REST service with Web connection

Preliminary ArcGIS Server Image Extension service with simple image processing

Preliminary ArcGIS Server Image Extension service with medium image processing

ArcGIS Mobile ADF Client

Platform Specifications: System, SPECrate_int2006 Performance

Platform Specifications: Intel, Throughput = 8.7, Baseline/Core = 4.4

Platform Specifications: Intel, Throughput = 8.7, Baseline/Core = 4.4

Platform Specifications: Intel, Throughput = 17.7, Baseline/Core = 8.8

Platform Specifications: Intel, Throughput = 20.4, Baseline/Core = 10.2

Platform Specifications: Intel, Throughput = 107, Baseline/Core = 13.4
Platform Specifications: Intel, Throughput = 58.3, Baseline/Core = 14.6
Platform Specifications: Intel, Throughput = 28.6, Baseline/Core = 14.3
Platform Specifications: Intel, Throughput = 53.7, Baseline/Core = 13.4

Platform Specifications: Intel, Throughput = 37.1, Baseline/Core = 18.6
Platform Specifications: Intel, Throughput = 70.1, Baseline/Core = 17.5

Platform Specifications: Intel, Throughput = 62.1, Baseline/Core = 15.5
Platform Specifications: Intel, Throughput = 114, Baseline/Core = 14.3
Platform Specifications: Intel, Throughput = 81.6, Baseline/Core = 20.4
Platform Specifications: Intel, Throughput = 139, Baseline/Core = 17.4

Platform Specifications: Intel, Throughput = 114, Baseline/Core = 28.5
Platform Specifications: Intel, Throughput = 228, Baseline/Core = 28.5
Platform Specifications: Intel, Throughput = 116, Baseline/Core = 29
Platform Specifications: Intel, Throughput = 232, Baseline/Core = 29


Platform Specifications: Intel, Throughput = 5.9, Baseline/Core = 5.9
Platform Specifications: Intel, Throughput = 13.4, Baseline/Core = 6.7
Platform Specifications: Intel, Throughput = 9.9, Baseline/Core = 9.9
Platform Specifications: Intel, Throughput = 5.9, Baseline/Core = 5.9
Platform Specifications: Intel, Throughput = 27.3, Baseline/Core = 13.6
Platform Specifications: Intel, Throughput = 54.5, Baseline/Core = 13.6
Platform Specifications: Intel, Throughput = 109, Baseline/Core = 13.6


ArcGIS Desktop light WTS clients with vector and image data source - based on historical capacity planning performance targets (light displays)

ArcGIS Server Map Viewer with medium dynamic data source - based on performance feedback from ArcGIS Server customers (higher quality maps)


Requirements Analysis (Live)
WEB TPH = 7,200; WEB Users = 20; Bandwidth 20 Mbps; RESET: DEFAULT; Blink 10; Qfac 1; NQfac 1 (F9 key)
Columns: <Number> Workflow | Peak Users | DPM/Client | DPM Total | Network Mbps | Data {TPH} | Traffic Mbpd | Latency msec | Minimum / Calc Think Time (sec) | Max DPM | NW Utilization | Adjusted platform and network service times: Client, Latency, NWQ Xport, WTSQ WTS, EASQ EAS, WASQ WAS, SCMQ SCM, ISQ IS, DBMSQ DBMS

LAN LAN_Local Clients | 20 Clients | LAN = 10.8 Mbps | 1000 Mbps | 1.1%
1.0.1 | 1a_ArcGIS Desktop Light Dynamic | 10 | 10.00 | 100 | 8.333 | DB {6,000} | 5.000 | 200 | 3 / 5.5 | 0.47 | 17.3 | 10.731 | 0.409 0.000 0.005 0.004 0.048
1.0.2 | 2a_ArcGIS WTS/Citrix (vector) Light Dynamic | 10 | 10.00 | 100 | 0.467 | DB {6,000} | 0.280 | 10 | 3 / 5.4 | 0.62 | 16.6 | 10.656 | 0.091 0.000 0.000 0.028 0.451 0.004 0.048
1.0.3 | 3b_AGS9.3 ADF Medium Dynamic | 10 | 6.00 | 60 | 2.000 | DB {3,600} | 2.000 | 10 | 3 / 8.6 | 1.36 | 13.8 | 6.7754 | 0.091 0.000 0.002 0.302 0.863 0.009 0.096
WAN WAN_Clients | 30 Clients | WAN = 11.3 Mbps | 1000 Mbps | 1.1%
R1 R1_Remote Site 1 | 30 Clients | Traffic = 11.3 Mbps | 45 Mbps | 25.2%
2.1.4 | 1a_ArcGIS Desktop Light Dynamic | 10 | 10.00 | 100 | 8.333 | DB {6,000} | 5.000 | 200 | 3 / 5.4 | 0.60 | 16.7 | 10.665 | 0.409 0.030 0.111 0.004 0.048
2.1.5 | 3e_AGS9.3 ADF Light Map Service | 10 | 6.00 | 60 | 2.000 | DB {3,600} | 2.000 | 10 | 3 / 9.3 | 0.69 | 16.3 | 7.0252 | 0.091 0.012 0.044 0.127 0.364 0.004 0.048
2.1.6 | 3g_AGS9.3 REST light dynamic/map cache mix | 10 | 6.00 | 60 | 1.000 | FG {3,600} | 1.000 | 10 | 3 / 9.6 | 0.40 | 17.7 | 7.1659 | 0.091 0.006 0.022 0.072 0.206
Internet Internet_Clients | 20 Clients | Internet = 4.0 Mbps | 1000 Mbps | 0.4%
R2 R2_Remote Site 2 | 20 Clients | Traffic = 4.0 Mbps | 18 Mbps | 22.2%
3.2.7 | 3f_AGS9.3 REST Light Dynamic Service | 10 | 6.00 | 60 | 2.000 | DB {3,600} | 2.000 | 10 | 3 / 9.2 | 0.77 | 15.9 | 6.9905 | 0.091 0.026 0.111 0.127 0.364 0.004 0.048
3.2.8 | 5a_AGS Explorer 500 Client | 10 | 6.00 | 60 | 0.000 | DB {3,600} | 0.000 | 10 | 3 / 9.6 | 0.37 | 17.8 | 7.1828 | 0.366
3.2.9 | 5b_AGS9.3 Globe Server | 10 | 6.00 | 60 | 2.000 | DB {3,600} | 2.000 | 10 | 3 / 9.9 | 0.15 | 19.1 | 7.3052 | 0.026 0.111 0.003 0.009
(empty site) | 0 Clients | Traffic = 0.0 Mbps | 1000 Mbps | 0.0%
90 Total Clients (70 / 20)
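The Xport and link-utilization numbers in the rows above are consistent with plain traffic-over-bandwidth arithmetic: per-display transport time is the display traffic (Mb per display) divided by the link rate in Mbps, and the %Cap figure is aggregate site traffic over link capacity. A sketch under that assumption; function names are illustrative:

```python
def transport_time(mb_per_display: float, link_mbps: float) -> float:
    """Seconds to move one display's traffic across the link (Xport)."""
    return mb_per_display / link_mbps

def link_utilization(total_mbps: float, link_mbps: float) -> float:
    """Share of link capacity consumed by aggregate traffic (%Cap)."""
    return total_mbps / link_mbps

# R1 remote site on a 45 Mbps WAN link:
print(round(transport_time(5.000, 45), 3))   # 0.111  (1a desktop display)
print(round(transport_time(2.000, 45), 3))   # 0.044  (3e map service display)
print(round(link_utilization(11.3, 45), 3))  # 0.251  (~25.2% shown above)
```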

Client Intel Core 2 Duo 2 core (1 chip) 3166 MHz Intel Cores = 2 38.3 19.2/Core 0.0 0.6 1.8 0.0 0.0 0.0 0.3

Standard AGS License 4 Core Default Availability = Minimum Mbps Traffic

10 Clients WTS: Windows Terminal Server Intel Arc08 = 0.448 sec SRint2006 100% Max 1000 0.5 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0%

10 DPM/client Xeon X5470 8 core (2 chip) 3333(12) MHz 40 GB RAM Cores = 8 Chips = 2 17.4/Core 139.0 Fix Nodes NIC Mbps 9.4% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0%

100 DPM CPU Utilization ...................................................... 9.4% CPU 106.4 / Node 0.451 sec 1,064 DPM 1 Node 1,064 DPM 1000 8.3 1 0 0 0 0 0 0 0 0 0 0

Open WTS: Windows Terminal Server 9.4% 10 Users 63,827 TPH 1 Node 63,827 TPH Minimum DBMS DBMS

Open EAS: Enterprise Application Server 0.0% 0 Users 0 TPH 0 Node 0 TPH Minimum Client Client

Open WAS: Web Application Server 0.0% 0 Users 0 TPH 0 Node 0 TPH Minimum Client Client

50 Clients SCM: SOC Machine Intel Arc08 = 0.362 sec SRint2006 80% Max 1000 9.0 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0%

6 DPM/client Xeon X5260 4 core (2 chip) 3333(6) MHz 8 GB RAM Cores = 4 Chips = 2 17.5/Core 70.1 Fix Nodes NIC Mbps 45.2% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0%

300 DPM CPU Utilization .................................................... 45.2% CPU 88.6 / Node 0.361 sec 664 DPM 1 Node 664 DPM 1000 23.0 1 0 0 0 0 0 0 0 0 0 0

Open SCM: SOC Machine 45.2% 50 Users 39,858 TPH 1 Node 39,858 TPH Minimum DBMS DBMS

Open IS: Image Extension 0.0% 0 Users 0 TPH 0 Node 0 TPH Minimum DBMS DBMS

60 Clients DBMS: Geodatabase Server Intel Arc08 = 0.054 sec SRint2006 80% Max 1000 40.0 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0%

8 DPM/client Xeon X5260 4 core (2 chip) 3333(6) MHz 20 GB RAM Cores = 4 Chips = 2 17.5/Core 70.1 Fix Nodes NIC Mbps 10.8% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0%

480 DPM CPU Utilization .................................................... 10.8% CPU 445.1 / Node 0.054 sec 4,451 DPM 1 Node 4,451 DPM 1000 1 0 0 0 0 0 0 0 0 0 0

Open DBMS: Geodatabase Server 10.8% 60 Users 267,048 TPH 1 Node 267,048 TPH Minimum 0

File Data Share 1000 8.0
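The tier blocks above all follow the same arithmetic: the workflow's Arc08 baseline service time is scaled by the ratio of the baseline SRint06/core rating (17.5) to the selected platform's per-core rating, per-node capacity in DPM is 60 seconds times cores divided by that adjusted service time, and CPU utilization is offered load over capacity. A sketch under those assumptions, which reproduces the displayed WTS tier numbers; function names are illustrative:

```python
def adjusted_service_time(baseline_sec: float,
                          baseline_srint_per_core: float,
                          platform_srint_per_core: float) -> float:
    """Scale the Arc08 baseline service time to the target platform."""
    return baseline_sec * baseline_srint_per_core / platform_srint_per_core

def node_capacity_dpm(cores: int, service_time_sec: float) -> float:
    """Displays per minute one node can sustain at 100% CPU."""
    return 60 * cores / service_time_sec

def cpu_utilization(load_dpm: float, capacity_dpm: float) -> float:
    """Fraction of node capacity consumed by the offered load."""
    return load_dpm / capacity_dpm

# WTS tier: 0.448 sec Arc08 baseline on a 17.4/core Xeon X5470 (8 cores),
# offered load 100 DPM.
st = adjusted_service_time(0.448, 17.5, 17.4)
print(round(st, 3))                               # 0.451
cap = node_capacity_dpm(8, 0.451)
print(round(cap))                                 # 1064
print(round(cpu_utilization(100, cap), 3))        # 0.094
```

Multiplying the per-node DPM capacity by 60 gives the TPH figures shown (1,064 DPM is about 63,827 TPH).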

Configarray - Performance Factors and Lookup Lists

Data Source | Application | Performance Factors (Application, SDE/DB, Traffic):
  DB_Ora | ST_Geo | 1 1 1
  DB_Ora | LOB | 1 1 1
  DB_Ora | SDO_Geo | 1 2 2
  DB_Ora | Long Raw | 1 1 1
  DB_SQL | SDE Binary | 1 1 1
  DB_SQL | Geography | 1 1 1 (Favorite)
  DB_SQL | Geometry | 1 1 1 (Standard)
  DB_PgreSQL | ST_Geo | 1 1 1
  DB_PgreSQL | PostGIS | 1 1 1
  DB_IBM | DB2 | 1 1 1
  DB_IBM | Informix | 1 1 1
  SF_Small | Shape File | 1.0 10.0
  SF_Large | Shape File | 4.0 20.0
  FG_Small | File GDB | 1.0 2.0
  FG_Large | File GDB | 2.0 3.0
  IM_Image | File | 1.0 1.0

Lookup lists (data validation):
  Blink: 10, 9
  SDE/DB Traffic: 1, 0.1, 0.01, 0.001, 0.0001
  Bandwidth (Mbps): 10000, 1000, 100, 10, 512, 256, 90, 45, 24, 18, 12, 6, 3, 1.5, 0.750, 0.256, 0.128, 0.056, 0.028
  Workflow lists: Standard Workflows, Favorite Workflows
  Productivity function: DEFAULT, ADJUST, SAVE
  Availability: Minimum, High Avail
  Live/Test: Live, Test

Platform Utilization Profile (source data):
  Tier:  WTS   EAS   WAS   SCM    IS    DBMS
  Util:  9.4%  0.0%  0.0%  45.2%  0.0%  10.8%
  Nodes: 1     0     0     1      0     1

Tier list:
  Default: Default tier
  Client: Client
  WTS: Windows Terminal Server
  AS1: Application Server Tier 1
  AS2: Application Server Tier 2
  AS3: Application Server Tier 3
  AS4: Application Server Tier 4
  EAS: Enterprise Application Server
  WAS: Web Application Server
  SCM: SOC Machine
  IS: Image Extension
  DBMS: Geodatabase Server

Chart: Workflow Performance Summary
  Stacked response-time components per workflow: Client, Latency, NWQ Xport, WTSQ WTS, EASQ EAS, WASQ WAS, SCMQ SCM, ISQ IS, DBMSQ DBMS
  Y-axis: Performance (sec), 0.0 to 3.0

Cell comments:
  F1 (Productivity): Live max productivity - 10 DPM WTS & Desktop, 6 DPM Server. Test productivity based on minimum think time.
  AQ2 (RESET): Do not leave in ADJUST mode.
  AR2 (Blink): Select from large to smaller values during max loads.
  E24, E52, E60: Cores per tier.
  F24, F52, F60: Chips per tier.
  D25, D53, D61 (Peak Users/Node): Reduced by rollover percentage.