
HP-UX 11i v3 Congestion Control Management for Storage

March 2010

Technical white paper

Table of contents

Abstract
Overview
New scsimgr Attributes
Introduction to Queue Depth
    LUN Based Congestion Management
    Target Port Based Congestion Management
    Dynamic Target Port Based Congestion Management
Test Cases
    LUN Based Queue Management (Current HP-UX Default)
    Target Port Based Queue Depth Enabled
    Dynamic Target Port Based Queue Depth Enabled (Clusters)
Additional Information
    Software Requirements
    Additional Graphs
    Configuring scsimgr Scope
    Disabling Congestion Control Management
    Default Target Port Queue Sizes
Glossary
For More Information
Call to Action


Abstract

Storage Area Networks (SANs) have been successfully deployed in enterprise data centers for a number of years. SANs allow servers and storage systems to share common interconnect fabrics.

You can conveniently add and remove servers or storage systems without affecting the rest of the fabric. This ease and transparency has a downside: it is easy to create a configuration that does not deliver optimal performance. The symptoms range from mildly increased access times to severe device or application failures.

One cause is overloading of the storage system. Typically, a storage system can handle several hundred to several thousand requests per second. If it cannot satisfy a request immediately, it holds the request until it can. Reasons for not being able to satisfy a request immediately include waiting for multiple writes to complete, waiting for a heavily loaded LUN, or waiting for a command to finish. If multiple servers access the same array controller, the problem is compounded.

To solve the problem of an overloaded storage system, vendors employ various algorithms and buffering schemes. However, because the storage system is not able to control the servers, these methods only work well for simple configurations.

This paper describes how to move the management of these issues to the server. Servers are in a better position to know, predict, and control the rate of requests to the storage system. This is especially true in clustered configurations, where the member servers can communicate with each other over a cluster interconnect to synchronize access to the storage systems. This scheme represents a paradigm shift in how storage system load is managed on today's servers. The overall benefit is simpler manageability, increased utilization, and reduced cost to customers.

Overview

Sharing a storage system between several servers through one target port can result in excessive congestion because the single target port becomes a bottleneck. The result is reduced performance and increased latency, because servers might have to retry requests that overflow the target port queue.

This paper describes a new queue depth management method called Congestion Control Management. This is a significant storage stack improvement compared to the original release of HP-UX 11i v3. The paper examines use cases in which the HP-UX 11i v3 storage stack can deliver these benefits in standalone, shared storage, and clustered environments using the HP Serviceguard clustering product.

For software requirements and dependencies, see Software Requirements.

New scsimgr Attributes

The scsimgr command has been enhanced with the following new attributes to enable the new features:

• tport_qdepth_enable Enables Congestion Control Management.

• tport_qd_dyn Enables dynamic target port queue depth resizing when Congestion Control Management is enabled.

• tport_max_qdepth Specifies the maximum target port queue depth on a server when Congestion Control Management is enabled.
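
The current values of these attributes can be inspected with the scsimgr get_attr keyword. The following is a sketch only; tgtpath stands for the target hardware path reported by ioscan, and the exact output format depends on the scsimgr revision:

# scsimgr get_attr -H tgtpath -a tport_qdepth_enable \
    -a tport_qd_dyn -a tport_max_qdepth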


Introduction to Queue Depth

The number of requests being held while waiting for the storage system is referred to as the queue depth. This depth is managed differently by different vendors and different models of storage systems. If the queue depth on the servers is too high, performance suffers because more requests are sent to the storage system than it can handle. For HP-UX, this results in a retry of each rejected request after a two-second delay. One retry can create another, causing a cascade that ends in congestion failure. In that case, all servers connected to the storage system experience major performance impacts. Poorly performing servers cause application productivity to decrease, resulting in lost revenue for the customer.

If the queue depth is set too low, the number of requests sent to the storage system is below what it can handle. Although this does not cause queue overflows, the storage system is underutilized. This can be very expensive because the customer might buy more hardware than is needed.
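
The per-LUN queue depth that a server is currently using can be inspected through the max_q_depth attribute described in the Glossary. This is a hedged sketch; /dev/rdisk/disk3 is a placeholder device file:

# scsimgr get_attr -D /dev/rdisk/disk3 -a max_q_depth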

LUN Based Congestion Management

The HP-UX 11i v3 server controls queuing on a per-LUN basis. It has no concept of queuing based on the storage system's port-level queuing, and the two models do not align. Figure 1 shows what happens when multiple LUNs send many requests to one port on the storage system, even when no individual LUN queue is exceeded.

Figure 1. LUN-based Queue Management Behavior

[Figure 1 shows Servers A and B, each with three per-LUN queues (LUNs A through F), sending requests across the SAN to a single storage-system port queue.]

In this case, there are 24 requests but the port queue depth is 10, so 14 requests are dropped (depicted by the trashcan). When the storage system port queue overflows, the requests are dropped and a SCSI status of S_QFULL is returned to the server. This causes HP-UX to resend the request and increment the queue_overflow statistic. The servers and storage system work much harder because the data must be processed many times, possibly up to the max_retries limit (default 45) set for the LUN. If that limit is reached, the LUN returns an EIO or EBUSY error to the application.
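
When diagnosing this condition, the per-LUN statistics and retry limits can be examined with scsimgr. This is a hedged sketch; /dev/rdisk/disk3 is a placeholder device file, and the statistic and attribute names are those referenced in this paper:

# scsimgr get_stat -D /dev/rdisk/disk3
# scsimgr get_attr -D /dev/rdisk/disk3 -a max_retries -a congest_max_retries

The get_stat output should include the queue_overflow count mentioned above.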

Currently, administrators have two ways to solve the problem:


• Overbuy the storage system, with more port queue space than the servers are anticipated to generate. In the previous example, the minimum port queue is 24. This might give the best performance and is the easiest to configure, but it can be very expensive.

• Decrease the size of the LUN queues in the server so that even if all of the LUNs have requests at the same time, the storage system's port queue does not overflow. In the previous example, the LUN queues must be set to one. While this might work, it underutilizes the storage system and might cause poor performance (see the example that follows).
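
As an illustration of the second approach (a sketch, not a recommendation), the per-LUN queue depth could be lowered with the max_q_depth attribute described in the Glossary; /dev/rdisk/disk3 is a placeholder device file:

# scsimgr save_attr -D /dev/rdisk/disk3 -a max_q_depth=1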

Target Port Based Congestion Management

Conceptually, target port congestion management attempts to manage the number of requests going to the storage system as efficiently as possible. The storage system has one or more connections to the server, either through a switch (Fibre Channel) or directly (SAS, Parallel SCSI, USB, or SATA). The connection on the server is referred to as an initiator port; the connection on the storage system is referred to as a target port (TPort). Most storage systems have buffer space (queue space) allocated to each TPort. There might be some buffering (queuing) for each LUN, but typically most of the queuing is managed on a per-port basis. Therefore, we want to manage the requests at the TPort.

The new scheme does not create a new queue in the server. Instead, the server has a reference to the storage system’s queue depth. This is configured by the administrator with the scsimgr command.

# scsimgr [set,save]_attr -H tgtpath \
    -a tport_max_qdepth=depth \
    -a tport_qd_dyn=0 -a tport_qdepth_enable=1

Figure 2 shows the relationship between the TPort queue depth configured in the server and the storage system's target port queue (TPort queue).

Figure 2. Server target port queue depth referring to the storage system TPort queue

[Figure 2 shows Servers A and B, each with three per-LUN queues and a TPort queue depth of 5, sending requests across the SAN to a storage-system TPort queue of 10.]

In this diagram, each server's TPort queue depth is 5 because the TPort queue on the storage system is 10 and there are two servers. The TPort queue depth is a counter that the server uses to track how many requests are outstanding at any time. Before sending another request, the server checks the TPort queue depth count: if space is available, the request is sent to the storage system; if not, the request is held on the LUN until space becomes available. With this configuration, the server cannot overflow the TPort.


Dynamic Target Port Based Congestion Management

Dynamic target port based congestion management adds the element of rebalancing requests between multiple HBAs, either in one system or between servers in a clustered environment. It works by having the server periodically scan all of its TPort queue depth counters, add them up, take an average, and compare it against the tport_max_qdepth value set by the administrator. The initial value of each TPort queue depth is tport_max_qdepth divided by the number of HBAs. If the average is higher than the current load on a TPort, that TPort's queue depth is lowered to match its load; if the average is lower than the current load, the depth is raised to match its load. The total across the HBAs never exceeds tport_max_qdepth. Figure 3 shows a clustered set of servers. In this example, Server A has a load of eight and Server B has a load of two. The TPort queue on the storage system is set to 10. Originally, both servers had their TPort queue depth set to five.

The same idea also works in a single server.
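
The arithmetic of the rebalancing can be illustrated with a simplified sketch (an illustration of the idea only, not the algorithm implemented in the SCSI stack). Using the values from Figure 3, a shared tport_max_qdepth of 10 is redistributed in proportion to the measured loads of 8 and 2:

LOAD_A=8
LOAD_B=2
TPORT_MAX_QDEPTH=10
TOTAL=$((LOAD_A + LOAD_B))
DEPTH_A=$((TPORT_MAX_QDEPTH * LOAD_A / TOTAL))   # becomes 8
DEPTH_B=$((TPORT_MAX_QDEPTH - DEPTH_A))          # becomes 2
echo "Server A depth: $DEPTH_A  Server B depth: $DEPTH_B"

The sum of the two depths never exceeds tport_max_qdepth, matching the behavior described above.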

Figure 3. Dynamic TPort Based Congestion Management

[Figure 3 shows Servers A and B connected by a cluster interconnect and sharing a storage-system TPort queue of 10; with loads of 8 and 2, Server A's TPort queue depth changes from 5 to 8 and Server B's from 5 to 2.]

The advantage of adding the dynamic element is that the servers try to balance themselves without the administrator's help. A heavily loaded server can borrow some of another server's queue depth. The storage system keeps running at its peak, and all of the servers get good response time and performance no matter where the load is.

Test Cases

LUN Based Queue Management (Current HP-UX Default)

Figure 4 shows several servers using one storage system. The servers are not sharing the LUNs.


Figure 4. Test system configuration

[Figure 4 shows four servers (A through D) connected through a SAN to a single TPort on an XP12000 storage array.]

Using the configuration in Figure 4, a read-only test was run with a disk I/O test program using the following parameters:

• scsimgr attribute max_retries set to 4096
• scsimgr attribute congest_max_retries set to 4096
• A read rate of 20, 40, and 80 requests per second per LUN
• 1 KB block size
• The total number of LUNs in the test was varied among 128, 256, 512, 1024, 2048, and 4096, spread evenly across the servers.

Note: The value of 4096 was chosen for max_retries and congest_max_retries to ensure that the test application did not see failures caused by exhausted retries.
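
For reference, such retry limits can be raised on a LUN with a command of the following form (a sketch only; the device special file is a placeholder, and these values were used solely to keep the test application from failing):

# scsimgr set_attr -D /dev/rdisk/disk3 -a max_retries=4096 \
    -a congest_max_retries=4096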

Figure 5 shows how the system behaves with the current software when managing the queue depth at the LUN. For a more detailed graph, see Figure 9 in Additional Graphs.


Figure 5. LUN-based results

[Chart: RSP Time, Peak RSP Time, and QFull count (logarithmic scale) versus the number of LUNs, from 128 to 4096, under LUN-based management.]

This graph shows the Queue Full (QFull) messages starting to climb as the number of LUNs increases. (The QFull count is the total count that occurred during the test.) As the QFull count increases, the response time (RSP Time) and peak response time (Peak RSP Time) start to climb. This is due to the retries that the SCSI stack performs to recover from the QFull message. The application measures how long it takes a request to be satisfied: a timer starts when a request is sent and stops when the response is read. As the SCSI stack does more retries, the time between the request and the response keeps growing. This results in a higher Peak RSP Time, and a higher Peak RSP Time means a slower application.

Target Port Based Queue Depth Enabled

Using the same configuration as in Figure 4, a read-only test was run with a disk I/O test program using the following parameters:

• scsimgr attribute max_retries set to 45 (default)
• scsimgr attribute congest_max_retries set to 90 (default)
• scsimgr attribute tport_max_qdepth set to 512 (the XP12000 default of 2048 divided by the 4 nodes)
• A read rate of 10, 20, 40, and 80 requests per LUN per second
• 1 KB I/O size
• The total number of LUNs in the test was varied among 128, 256, 512, 1024, 2048, and 4096.
• The storage system used one port to demonstrate TPort-based management.

By using the TPort-based Congestion Control Management feature provided in the HP-UX SCSI stack, the system monitors the queue depth of the TPort, which keeps the server from overflowing the TPort's queue. When LUN requests exceed the TPort queue depth, the requests are held in the server and sent as soon as there is space on the target.


Figure 6. Comparison of LUN-based and TPort-based management

[Chart: RSP Time and Peak RSP Time for LUN-based and TPort-based management (logarithmic scale) versus the number of LUNs, from 128 to 4096.]

In this case, the QFull condition disappeared. Also, the response time (RSP Time) was reduced by a factor of 14 and the peak response time (Peak RSP Time) by a factor of six. This is an improvement in both server response time and application completion time.

Configuration Guidelines

You must determine the target port queue depth for each system. Typically, this is the size of the array's TPort queue divided by the number of HBAs (see Default Target Port Queue Sizes for some well-known values). This is only a starting point; depending on how the servers are used, the configured depth might differ. If the settings are to survive a reboot, use the save_attr option of scsimgr; use the set_attr option for a temporary setting.

To set a particular server target port queue, use the following command:

# scsimgr [set,save]_attr -H tgtpath \
    -a tport_max_qdepth=depth \
    -a tport_qd_dyn=0 -a tport_qdepth_enable=1

To set all devices under a specific target class, use the following command:

# scsimgr [set,save]_attr -N scope-of-target \
    -a tport_max_qdepth=depth \
    -a tport_qd_dyn=0 -a tport_qdepth_enable=1

Example

For a storage system like the XP series, which has a default target port queue depth of 2048, and using four servers with one HBA or two servers with two HBAs connected to the storage system, the tport_max_qdepth is set to 512 as follows:

# scsimgr [set,save]_attr \
    -N "/escsi/esctl/0xc/HP /OPEN-XP12000 " \
    -a tport_max_qdepth=512 -a tport_qd_dyn=0 \
    -a tport_qdepth_enable=1

If the scope needed is not configured, see Configuring scsimgr Scope.
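
After the attributes have been set, the values in effect on a target path can be confirmed with get_attr (a hedged sketch; tgtpath is the target hardware path used in the earlier command):

# scsimgr get_attr -H tgtpath -a tport_max_qdepth \
    -a tport_qd_dyn -a tport_qdepth_enable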


Dynamic Target Port Based Queue Depth Enabled (Clusters)

Figure 7 shows an HP Serviceguard cluster system configuration.

Figure 7. HP Serviceguard cluster configuration

[Figure 7 shows four servers (A through D) in an HP Serviceguard cluster connected through a SAN to a single TPort on an XP12000 storage array.]

Using this configuration, a read-only test was run with a disk I/O test program using the following parameters:

• The HP Serviceguard cluster was set up so that all of the LUNs were shared between all of the servers.
• scsimgr attribute max_retries set to 45 (default)
• scsimgr attribute congest_max_retries set to 90 (default)
• scsimgr attribute tport_max_qdepth set to 512 (the XP12000 default of 2048 divided by the 4 nodes)
• A read rate of 10, 20, 40, and 80 requests per LUN per second
• 1 KB I/O size
• The total number of LUNs in the test was varied among 128, 256, 512, 1024, 2048, and 4096. The LUNs were spread unevenly across the servers to exercise the dynamic element.
• The storage system used one port to demonstrate TPort-based management.
• The LUN access was unbalanced, meaning that each server could carry a different load.

By using the dynamic TPort based Congestion Control Management feature provided in the HP-UX 11i v3 SCSI stack, the system monitors the queue depth of the TPort and dynamically rebalances the load between the servers in an HP Serviceguard environment. Figure 8 shows the results.


Figure 8. Comparison of TPort-based and dynamic TPort-based management

[Chart: RSP Time and Peak RSP Time for TPort-based and dynamic TPort-based management (logarithmic scale) versus the number of LUNs, from 128 to 4096.]

The servers communicate with each other every second to rebalance the queue depth among the available servers. Note how much more consistent the Peak RSP Time is compared to the previous test; this shows that the rebalancing is working well.

Configuration Guidelines

For a clustered system, you must determine the size of the TPort queue depth for maximum utilization (see Default Target Port Queue Sizes for some well-known values). Typically, this is the size of the TPort queue as configured on the array. If the settings are to survive a reboot, use the save_attr option of scsimgr. For a temporary setting, use the set_attr option.

To set a particular server target port queue, use the following command:

# scsimgr [set,save]_attr -H tgtpath \
    -a tport_max_qdepth=depth \
    -a tport_qd_dyn=1 -a tport_qdepth_enable=1

To set all devices under a specific target class, use the following command:

# scsimgr [set,save]_attr -N scope-of-target \
    -a tport_max_qdepth=depth \
    -a tport_qd_dyn=1 -a tport_qdepth_enable=1

Example

For a storage system like the XP series, which has a default target port queue depth of 2048, and using four servers with one HBA or two servers with two HBAs connected to the storage system, the tport_max_qdepth is set to 512 as follows:

# scsimgr [set,save]_attr \
    -N "/escsi/esctl/0xc/HP /OPEN-XP12000 " \
    -a tport_max_qdepth=512 -a tport_qd_dyn=1 \
    -a tport_qdepth_enable=1

If the scope needed is not configured, see Configuring scsimgr Scope.


Additional Information

Software Requirements

The HP-UX 11i v3 September 2009 Fusion release is required. If you are using HP Serviceguard, you must install the PHSS_39614 patch.
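
Whether the patch is already present can be checked with the SD-UX swlist command (a hedged example; swlist reports an error if the patch is not installed):

# swlist PHSS_39614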


Additional Graphs

Figure 9. LUN-based queuing results

[Figure 9 plotted the following values on a logarithmic scale.]

LUNs   Read Rate   io's/sec   MB/sec   RSP Time (ms)   Peak RSP Time (ms)   Q Full count
4096   80          45840      46.9     187654.932      6805671.5            9303633
4096   40          22920      23.5     29159.937       741010.8             1367128
4096   20          12647      13       3446.528        80059.2              25241
2048   80          45920      47       19181.83        868610.7             102194
2048   40          22920      23.5     16054.357       331567.8             515388
2048   20          12508      12.8     2825.535        113609.5             29818
1024   80          46000      47.1     20171.955       1087295.8            3357
1024   40          22920      23.5     44521.478       1023734.6            2574347
1024   20          12633      12.9     2885.911        111284.6             33136
512    80          32720      33.5     37363.991       882039.1             3028772
512    40          17794      18.2     11099.633       246612               267335
512    20          9339       9.6      2007.516        90010.9              9804
256    80          17682      18.1     7149.11         87161.9              22295
256    40          8820       9        2820.138        52386.9              0
256    20          5544       5.7      720.969         28579.3              0
128    80          9373       9.6      1438.684        58310.9              2634
128    40          7007       7.2      572.046         42975.1              0
128    20          5327       5.5      323.092         15911.1              0


Figure 10. Tport-based queuing results

[Figure 10 plotted the following values on a logarithmic scale.]

LUNs   Read Rate   io's/sec   MB/sec   RSP Time (ms)   Peak RSP Time (ms)   Q Full count
4096   80          45873      47       12634.049       1032904.7            0
4096   40          25054      25.7     8732.088        29315.1              0
4096   20          12291      12.6     4609.654        13224.8              0
2048   80          45900      47       13463.053       264850.2             0
2048   40          25142      25.7     7735.453        16880.8              0
2048   20          12268      12.6     4544.261        9175.6               0
1024   80          45965      47.1     17880.723       741205.2             0
1024   40          25125      25.7     7845.36         18895.8              0
1024   20          12290      12.6     4645.208        11108.7              0
512    80          32799      33.6     12700.464       86585                0
512    40          18898      19.4     6580.83         17031.6              0
512    20          8991       9.2      3510.292        10172.5              0
256    80          16320      16.7     5616.56         19075.9              0
256    40          9684       9.9      2582.625        15532.5              0
256    20          4936       5.1      1402.467        12344.4              0
128    80          8160       8.4      2576.669        18794.5              0
128    40          6490       6.6      764.628         14968.4              0
128    20          4602       4.7      379.297         6524.9               0


Figure 11. Dynamic target port based queuing results

[Figure 11 plotted the following values on a logarithmic scale.]

LUNs   Read Rate   io's/sec   MB/sec   RSP Time (ms)   Peak RSP Time (ms)   Q Full count
4096   80          46014      47.1     13081.048       905879.1             0
4096   40          25094      25.7     8073.399        31198.2              0
4096   20          12263      12.6     4861.888        22761.3              0
2048   80          45942      47       11817.306       796924.6             0
2048   40          24997      25.6     6979.882        31379.6              0
2048   20          12310      12.6     4721.66         22555.4              0
1024   80          45897      47       11216.174       486301               0
1024   40          25088      25.7     7073.654        30887.7              0
1024   20          12309      12.6     4829.691        29967.1              0
512    80          32816      33.6     13143.812       31353.3              0
512    40          17866      18.3     6706.355        31444.7              0
512    20          9056       9.3      3162.209        29748.7              0
256    80          16320      16.7     5429.122        31305.9              0
256    40          9638       9.9      2712.789        18948.1              0
256    20          4920       5        1414.284        10943.8              0
128    80          8160       8.4      2607.471        18414.9              0
128    40          6284       6.4      784.605         13633.3              0
128    20          4612       4.7      375.155         4996.9               0

Configuring scsimgr Scope

Before you can use a settable attribute scope with the scsimgr command, the scope might need to be created. To see whether it is already configured, enter the following command:

# scsimgr ddr_list

If the scope needed does not exist, use the ioscan command as follows:


# ioscan -fnkNC ctl

Class  I    H/W Path             Driver  S/W State  H/W Type  Description
==========================================================================
ctl    194  64000/0xfa00/0x3e    esctl   CLAIMED    DEVICE    HP HSV210
            /dev/pt/pt194
ctl    222  64000/0xfa00/0xd5    esctl   CLAIMED    DEVICE    HP OPEN-XP12000
            /dev/pt/pt222
ctl    223  64000/0xfa00/0xd6    esctl   CLAIMED    DEVICE    HP OPEN-XP12000
            /dev/pt/pt223
ctl    199  64000/0xfa00/0x3ca   esctl   CLAIMED    DEVICE    EMC SYMMETRIX
            /dev/pt/pt199

Then, to create the /escsi/esctl/0xc/* scope, follow these steps for each of the ctl class devices:

1. # scsimgr ddr_name -D /dev/pt/pt199 pid

   SETTABLE ATTRIBUTE SCOPE
   "/escsi/esctl/0xc/EMC /SYMMETRIX "

2. # scsimgr -f ddr_add -N "/escsi/esctl/0xc/EMC /SYMMETRIX "

   scsimgr: settable attribute scope '/escsi/esctl/0xc/EMC /SYMMETRIX ' added successfully

For more information, see scsimgr(1M).

Disabling Congestion Control Management

To disable Congestion Control Management in any of these configurations and have the setting survive a reboot, use the scsimgr save_attr option as follows:

# scsimgr save_attr -N scope-of-array \
    -a tport_qdepth_enable=0 -a tport_qd_dyn=0

For a temporary setting, use the set_attr option.
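
As a concrete sketch using the XP12000 scope from the earlier examples, the same command takes the following form:

# scsimgr save_attr -N "/escsi/esctl/0xc/HP /OPEN-XP12000 " \
    -a tport_qdepth_enable=0 -a tport_qd_dyn=0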

To reset one target, use the following command:

# scsimgr [set,save]_attr -H Hardware-path-of-target \
    -a tport_qdepth_enable=0 -a tport_qd_dyn=0

Default Target Port Queue Sizes

The default target port queue size is 1024. The following table lists the target port queue sizes for different hardware.

Vendor Model Target Port Queue Size

HP OPEN-XP24000 4096

HP OPEN-XP12000 2048

HP OPEN-XP10000 2048

HP OPEN-XP1024 1024

HP OPEN-XP512 512

HP HSV200 1536

HP HSV210 1536


HP HSV300 1536

HP MSA Controller 512

EMC Symmetrix 4096
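
As an illustration of using this table (a hedged sketch): for an EVA array with HSV210 controllers (target port queue size 1536) shared by three servers with one HBA each, each server's share is 1536 / 3 = 512, which could then be applied with a command of the form already shown:

# scsimgr save_attr -H tgtpath -a tport_max_qdepth=512 \
    -a tport_qd_dyn=0 -a tport_qdepth_enable=1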

Glossary

• Cluster: A group of systems working together using the same storage.
• congest_max_retries: Maximum number of I/O retries when congested. For more information, see scsimgr_esdisk(1M).
• Controller: A mass storage HBA in the server.
• ddr: Device Data Repository.
• HBA: Physical interface card that plugs into the server.
• HP Integrity VM: HP-UX implementation of virtual machines.
• initiator port: HBA connection on a server.
• I-T: A nexus between a SCSI initiator port and a SCSI target port.
• I-T-L: A nexus between a SCSI initiator port, a SCSI target port, and a logical unit.
• LUN: Logical Unit Number (physical or virtual disk).
• LUN-Path: Path through which a LUN is discovered.
• max_q_depth: Maximum queue depth per LUN in the server. For more information, see scsimgr_esdisk(1M).
• max_retries: Maximum number of I/O retries. For more information, see scsimgr_esdisk(1M).
• Nexus: A relationship between two SCSI devices and the SCSI initiator port and SCSI target port objects within those SCSI devices.
• Shared Storage: A collection of devices used by multiple systems but not necessarily sharing the data as in a cluster.
• Standalone: A server that does not share LUNs with another server.
• Target: Device that the initiator connects to in the storage system.
• Target-Path: Path through which a target is discovered.
• TPort: Target port on the disk array.


For More Information

To learn more about the Mass Storage Stack and HP-UX system administration, see the documents on the HP Business Support Center at:

http://www.hp.com/go/hpux-core-docs

Then, click on HP-UX 11i v3.

Call to Action

HP welcomes your input. Please give us comments about this white paper, or suggestions for the Mass Storage Stack or related documentation, through our technical documentation feedback website:

http://docs.hp.com/en/feedback.html


© Copyright 2010 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.


HP Part Number 5900-0596, March 2010