
GPFS Frequently Asked Questions and Answers

GPFS Overview

The IBM® General Parallel File System (GPFS™) is a high performance shared-disk file management solution that provides fast, reliable access to data from multiple nodes in a cluster environment. Applications can readily access files using standard file system interfaces, and the same file can be accessed concurrently from multiple nodes. GPFS is designed to provide high availability through advanced clustering technologies, dynamic file system management and data replication. GPFS can continue to provide data access even when the cluster experiences storage or node malfunctions. GPFS scalability and performance are designed to meet the needs of data intensive applications such as engineering design, digital media, data mining, relational databases, financial analytics, seismic data processing, scientific research and scalable file serving.

GPFS is supported on AIX®, Linux and Windows Server operating systems. It is supported on IBM POWER® and IBM System x® ServerProven® Intel or AMD Opteron based servers. For more information on the capabilities of GPFS and its applicability to your environment, see the GPFS: Concepts, Planning, and Installation Guide.

GPFS FAQ

The GPFS Frequently Asked Questions and Answers provides the most up-to-date information on topics including ordering GPFS, supported platforms, and supported configuration sizes and capacities. This FAQ is maintained on a regular basis and should be referenced before any system upgrades or major configuration changes to your GPFS cluster. If you have any comments, suggestions or questions regarding the information provided here, you can send email to [email protected].

Updates to this FAQ include:

Table 1. June 2011 updates

1.4 What is a GPFS Server?

1.8 How do I determine the number of licenses required in a virtualization environment?

1.9 Can I transfer my GPFS licenses?

1.14 Is GPFS available in IBM PartnerWorld?

2.3 What are the latest distributions and kernel levels that GPFS has been tested with?

2.4 What are the current restrictions on GPFS Linux kernel support?

2.5 Is GPFS on Linux supported in a virtualization environment?

2.7 What are the limitations of GPFS support for Windows?

4.1 What disk hardware has GPFS been tested with?

6.3 What configuration requirements exist for utilizing Remote Direct Memory Access (RDMA) offered by InfiniBand?

6.10 How do I determine the maximum size of the extended attributes allowed in my file system?

7.5 What are the current advisories for GPFS on Linux?

7.6 What are the current advisories for GPFS on Windows?

7.10 Where can I locate GPFS code to upgrade from my current level of GPFS?


Questions & Answers

1. General questions:
1.1 How do I order GPFS?
1.2 Where can I find ordering information for GPFS?
1.3 How is GPFS priced?
1.4 What is a GPFS Server?
1.5 What is a GPFS Client?
1.6 I am an existing customer, how does the new pricing affect my licenses and entitlements?
1.7 What are some examples of the new pricing structure?
1.8 How do I determine the number of licenses required in a virtualization environment?
1.9 Can I transfer my GPFS licenses?
1.10 Where can I find the documentation for GPFS?
1.11 What resources beyond the standard documentation can help me learn and use GPFS?
1.12 How can I ask a more specific question about GPFS?
1.13 Does GPFS participate in the IBM Academic Initiative Program?
1.14 Is GPFS available in IBM PartnerWorld?

2. Software questions:
2.1 What levels of the AIX O/S are supported by GPFS?
2.2 What Linux distributions are supported by GPFS?
2.3 What are the latest distributions and kernel levels that GPFS has been tested with?
2.4 What are the current restrictions on GPFS Linux kernel support?
2.5 Is GPFS on Linux supported in a virtualization environment?
2.6 What levels of the Windows O/S are supported by GPFS?
2.7 What are the limitations of GPFS support for Windows?
2.8 What are the requirements for the use of OpenSSH on Windows nodes?
2.9 Can different GPFS maintenance levels coexist?
2.10 Are there any requirements for Clustered NFS (CNFS) support in GPFS?
2.11 Does GPFS support NFS V4?
2.12 Are there any requirements for Persistent Reserve support in GPFS?
2.13 Are there any considerations when utilizing the Simple Network Management Protocol (SNMP)-based monitoring capability in GPFS?

3. Machine questions:
3.1 What are the minimum hardware requirements for a GPFS cluster?
3.2 Is GPFS for POWER supported on IBM System i® servers?
3.3 What machine models has GPFS for Linux been tested with?
3.4 On what servers is GPFS supported?
3.5 What interconnects are supported for GPFS daemon-to-daemon communication in my GPFS cluster?
3.6 Does GPFS support exploitation of the Virtual I/O Server (VIOS) features of POWER processors?

4. Disk questions:
4.1 What disk hardware has GPFS been tested with?


4.2 What Fibre Channel (FC) Switches are qualified for GPFS usage and is there a FC Switch support chart available?
4.3 Can I concurrently access disks from both AIX and Linux nodes in my GPFS cluster?
4.4 What disk failover models does GPFS support for the IBM TotalStorage DS4000® family of storage controllers with the Linux operating system?
4.5 What devices have been tested with SCSI-3 Persistent Reservations?

5. Scaling questions:
5.1 What are the GPFS cluster size limits?
5.2 What is the current limit on the number of nodes that may concurrently join a cluster?
5.3 What are the current file system size limits?
5.4 What is the current limit on the number of mounted file systems in a GPFS cluster?
5.5 What is the architectural limit of the number of files in a file system?
5.6 What are the limitations on GPFS disk size?
5.7 What is the limit on the maximum number of groups a user can be a member of when accessing a GPFS file system?

6. Configuration and tuning questions:
6.1 What specific configuration and performance tuning suggestions are there?
6.2 What configuration and performance tuning suggestions are there for GPFS when used primarily for Oracle databases?
6.3 What configuration requirements exist for utilizing Remote Direct Memory Access (RDMA) offered by InfiniBand?
6.4 What Linux configuration settings are required when NFS exporting a GPFS filesystem?
6.5 Sometimes GPFS appears to be handling a heavy I/O load, for no apparent reason. What could be causing this?
6.6 What considerations are there when using IBM Tivoli® Storage Manager with GPFS?
6.7 How do I get OpenSSL to work on AIX with GPFS?
6.8 What ciphers are supported for use by GPFS?
6.9 When I allow other clusters to mount my file systems, is there a way to restrict access permissions for the root user?
6.10 How do I determine the maximum size of the extended attributes allowed in my file system?

7. Service questions:
7.1 What support services are available for GPFS?
7.2 How do I download fixes for GPFS?
7.3 What are the current advisories for all platforms supported by GPFS?
7.4 What are the current advisories for GPFS on AIX?
7.5 What are the current advisories for GPFS on Linux?
7.6 What are the current advisories for GPFS on Windows?
7.7 What Linux kernel patches are provided for clustered file systems such as GPFS?
7.8 Where can I find the GPFS Software License Agreement?
7.9 Where can I find End of Market (EOM) and End of Service (EOS) information for GPFS?
7.10 Where can I locate GPFS code to upgrade from my current level of GPFS?
7.11 Are there any items that will no longer be supported in GPFS?


General questions

Q1.1: How do I order GPFS?
A1.1: To order GPFS:

* To order GPFS on POWER for AIX or Linux (5765-G66), find contact information for your country at http://www.ibm.com/planetwide/

v To order GPFS for Linux or Windows on x86 Architecture (5765-XA3) (Note: GPFS on x86Architecture is now available to order in the same IBM fulfillment system as GPFS onPOWER), find contact information for your country at http://www.ibm.com/planetwide/

* To order GPFS for Linux or Windows on x86 Architecture (5724-N94):
  – go to the Passport Advantage® site at http://www.ibm.com/software/lotus/passportadvantage/
  – use the IBM System x fulfillment and ordering system for GPFS V3.3 (5641-A07) or GPFS V3.2 (5641-N94)

Note: GPFS on x86 Architecture (5724-N94) is a renaming of the previously available GPFS for Linux and Windows Multiplatform offering.

Q1.2: Where can I find ordering information for GPFS?
A1.2: You can view ordering information for GPFS in:

* The Cluster Software Ordering Guide at http://www.ibm.com/systems/clusters/software/reports/order_guide.html

* The GPFS Announcement Letters Sales Manual at http://www.ibm.com/common/ssi/index.wss
  1. Select your language preference and click Continue.
  2. From the Type of content menu, choose Announcement letter and click on the right arrow.
  3. Choose the corresponding product number to enter in the product number field:
     – For General Parallel File System for POWER, enter 5765-G66
     – For General Parallel File System x86 Architecture, enter the appropriate order number; either 5724-N94 or 5765-XA3

Q1.3: How is GPFS priced?
A1.3: A new pricing, licensing, and entitlement structure for Version 3.2 and follow-on releases of GPFS has been announced:
* http://www.ibm.com/common/ssi/rep_ca/5/897/ENUS209-105/ENUS209-105.PDF
* http://www.ibm.com/common/ssi/rep_ca/6/897/ENUS209-106/ENUS209-106.PDF

GPFS has two types of licenses, a Server license and a Client license (licenses are priced per processor core). For each node in a GPFS cluster, the customer determines the appropriate number of GPFS Server licenses or GPFS Client licenses that correspond to the way GPFS is used on that node (a node is defined as one operating system instance on a single computer or running in a virtual partition). For further information, see the related questions below.

Q1.4: What is a GPFS Server?
A1.4: A GPFS Server license must be used in order to perform the following GPFS functions:

1. Management functions such as cluster configuration manager, quorum node, manager node, and Network Shared Disk (NSD) server.

2. Sharing data directly through any application, service protocol, or method, such as Network File System (NFS), Common Internet File System (CIFS), File Transfer Protocol (FTP), or Hypertext Transfer Protocol (HTTP).

Q1.5: What is a GPFS Client?
A1.5: You may use a GPFS Client in order to exchange data between nodes that locally mount the same GPFS file system.

Note: A GPFS Client may not be used for nodes to share GPFS data directly through any application, service, protocol, or method, such as NFS, CIFS, FTP, or HTTP. For this use, entitlement to a GPFS Server is required.

Q1.6: I am an existing customer, how does the new pricing affect my licenses and entitlements?
A1.6: Prior to renewal, the customer must identify the actual number of GPFS Client licenses and GPFS Server licenses required for their configuration, based on the usage defined in the questions What is a GPFS Client? and What is a GPFS Server? A customer with a total of 50 entitlements for GPFS will maintain 50 entitlements. Those entitlements will be split between GPFS Servers and GPFS Clients depending upon the required configuration. Existing customers renewing entitlements must contact their IBM representatives to migrate their current licenses to the GPFS Server and GPFS Client model. For existing x86 Architecture Passport Advantage customers, your entitlements will have been migrated to the new GPFS Server and GPFS Client model prior to the renewal date. However, you will need to review and adjust those entitlements at the time of your renewal.

Q1.7: What are some examples of the new pricing structure?
A1.7: GPFS is orderable through multiple methods at IBM. One of these uses PVUs and the other uses small, medium and large tiers. Your IBM sales representative can help you determine which method is appropriate for your situation.

Pricing examples include:

GPFS for POWER (5765-G66) and GPFS on x86 Architecture (5765-XA3)

Licenses continue to be priced per processor core.

Common small commercial Power Systems™ cluster where virtualization is used:
* You have a cluster that consists of four Power 570 systems. Each system has eight processor cores per physical system and is partitioned into two LPARs with four processor cores per LPAR, for a total of 8 LPARs running GPFS. All of the nodes access the disk through a SAN.
* Three of the LPARs are configured as quorum nodes. Since these nodes are running GPFS management tasks (i.e. quorum), they require a GPFS Server license. Three nodes with four CPUs each means you will need 12 Server licenses.
* Five of the LPARs are configured as non-quorum nodes. These nodes do not run GPFS management tasks, so five nodes with four CPUs each means you will need 20 Client licenses. (See the arithmetic sketch after the PVU example below.)

Table 2. x86 Architecture processor tier values

Processor Vendor   Processor Brand                           Processor Model Number                     Processor Tier
Intel              Xeon (Nehalem EX)                         7500 to 7599, 6500 to 6599                 ≥4 sockets per server = large; 2 sockets per server = medium
Intel              Xeon (Nehalem EP)                         3400 to 3599, 5500 to 5699                 medium
Intel              Xeon (pre-Nehalem)                        3000 to 3399, 5000 to 5499, 7000 to 7499   small
AMD                Opteron                                   all existing                               small
ANY                Any single-core (i.e. Xeon Single-Core)   all existing                               large

GPFS on x86 Architecture (5724-N94)

Licenses continue to be priced per 10 Processor Value Units (PVUs). For example, one AMD Opteron core requires 50 PVUs.

PROCESSOR VALUE UNIT

PVU is the unit of measure by which this program is licensed. PVU entitlements are based on processor families (vendors and brands). A Proof of Entitlement (PoE) must be obtained for the appropriate number of PVUs based on the level or tier of all processor cores activated and available for use by the Program on the server. Some programs allow licensing to less than the full capacity of the server's activated processor cores, sometimes referred to as sub-capacity licensing. For programs which offer sub-capacity licensing, if a server is partitioned utilizing eligible partitioning technologies, then a PoE must be obtained for the appropriate number of PVUs based on all activated processor cores available for use in each partition where the program runs or is managed by the program. Refer to the International Passport Advantage Agreement Attachment for Sub-Capacity Terms or the program's License Information to determine applicable sub-capacity terms. The PVU entitlements are specific to the program and may not be exchanged, interchanged, or aggregated with PVU entitlements of another program.

For a general overview of PVUs for processor families (vendors and brands), go to http://www.ibm.com/software/lotus/passportadvantage/pvu_licensing_for_customers.html

To calculate the exact PVU entitlements required for the program, go to https://www-112.ibm.com/software/howtobuy/passportadvantage/valueunitcalculator/vucalc.wss

Common System x HPC setup with no virtualization:
* You have four x3655 systems with eight cores each. In addition you have 32 x3455 systems, each with four processor cores. Each physical machine is a GPFS node (no virtualization).
* The four x3655 nodes are configured as NSD servers and quorum nodes. They are therefore serving data and providing GPFS management services, so they require a GPFS Server license. Four nodes each with eight AMD Opteron cores means you have a total of 32 cores. Each AMD Opteron core is worth 50 PVUs and each Server license is worth 10 PVUs, so you will need 160 GPFS Server licenses. (32 AMD Opteron cores * 50 PVUs) / 10 PVUs per Server license = 160 GPFS Server licenses.
* The 32 x3455 nodes are all configured as NSD clients. So you have 32 nodes each with four cores, for a total of 128 cores. Each AMD Opteron core is worth 50 PVUs and each Client license is worth 10 PVUs, so you will need 640 GPFS Client licenses. (128 AMD Opteron cores * 50 PVUs) / 10 PVUs per Client license = 640 Client licenses.
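Both pricing examples boil down to a line or two of arithmetic. A minimal shell sketch (the core counts and the 50-PVU Opteron rating are the values assumed in the examples above, not something GPFS reports):

  #!/bin/sh
  # Per-core licensing (5765-G66, 5765-XA3): one license per processor core.
  echo "Power example: $((3 * 4)) Server, $((5 * 4)) Client licenses"

  # PVU licensing (5724-N94): each license covers 10 PVUs,
  # and each AMD Opteron core is rated at 50 PVUs.
  server_cores=$((4 * 8))    # four x3655 NSD servers, eight cores each
  client_cores=$((32 * 4))   # 32 x3455 NSD clients, four cores each
  echo "x86 example: $((server_cores * 50 / 10)) Server, $((client_cores * 50 / 10)) Client licenses"

This prints 12/20 for the Power example and 160/640 for the System x example.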

For further information contact:
* [email protected]
* In the United States, please call 1-888-SHOP-IBM
* In all other locations, please contact your IBM Marketing Representative. For a directory of worldwide contacts, see www.ibm.com/planetwide/index.html

Q1.8: How do I determine the number of licenses required in a virtualization environment?
A1.8: The number of processors for which licenses are required is the smaller of the following:
* The total number of activated processors in the machine
* Or:
  1. When GPFS nodes are in partitions with dedicated processors, then licenses are required for the number of processors dedicated to those partitions.
  2. When GPFS nodes are LPARs that are members of a shared processing pool, then licenses are required for the smaller of:
     – the number of processors assigned to the pool, or
     – the sum of the virtual processors of each uncapped partition plus the processors in each capped partition

For Linux virtualized NSD clients, the number of licenses required is equal to the physical cores available to GPFS.

When the same processors are available to both GPFS Server nodes and GPFS Client nodes, GPFS Server licenses are required for those processors.

Any fractional part of a processor in the total calculation must be rounded up to a full processor.

Examples:
1. One GPFS node is in a partition with .5 of a dedicated processor → license(s) are required for 1 processor.
2. 10 GPFS nodes are in partitions on a machine with a total of 5 activated processors → licenses are required for 5 processors.
3. LPAR A is a GPFS node with an entitled capacity of, say, 1.5 CPUs, set to uncapped in a processor pool of 5 processors. LPAR A is used in a way that requires server licenses. LPAR B is a GPFS node on the same machine as LPAR A and is also part of the same shared processor pool as LPAR A. LPAR B is used in a way that does not require server licenses, so client licenses are sufficient. LPAR B has an entitled capacity of 2 CPUs, but since it too is uncapped, it can use up to 5 processors out of the pool. For this configuration, server licenses are required for 5 processors.
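The rounding and capping rules can be expressed compactly. A sketch in awk, where counted stands for the processor count you arrive at from the dedicated/shared-pool rules above and activated for the machine's activated processors (the two values reproduce examples 1 and 2):

  for counted in 0.5 10; do
      awk -v c="$counted" -v activated=5 'BEGIN {
          lic = int(c); if (c > lic) lic++       # fractional processors round up
          if (lic > activated) lic = activated   # never more than the activated processors
          printf "counted %-4s -> licenses for %d processor(s)\n", c, lic
      }'
  done

This prints 1 processor for example 1 and 5 processors for example 2.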

Q1.9: Can I transfer my GPFS licenses?
A1.9: GPFS licenses may be transferred between machines as per the International Agreement for Acquisition of Software Maintenance located at http://www-03.ibm.com/software/sla/sladb.nsf/sla/iaasm/

Q1.10: Where can I find the documentation for GPFS?
A1.10: The GPFS documentation is available in both PDF and HTML format on the Cluster Information Center at publib.boulder.ibm.com/infocenter/clresctr/vxrx/index.jsp?topic=/com.ibm.cluster.gpfs.doc/gpfsbooks.html

Q1.11: What resources beyond the standard documentation can help me learn about and use GPFS?
A1.11: For additional information regarding GPFS see:
* GPFS forums:
  – The GPFS technical discussion forum at www.ibm.com/developerworks/forums/dw_forum.jsp?forum=479&cat=13 will help answer your questions on installing and running GPFS.
  – For the latest announcements and news regarding GPFS, please refer to the GPFS Announce Forum at http://www.ibm.com/developerworks/forums/forum.jspa?forumID=1606
* GPFS Web pages:
  – The IBM Almaden Research GPFS page at www.almaden.ibm.com/StorageSystems/file_systems/GPFS/index.shtml
  – The GPFS page at http://www-03.ibm.com/systems/software/gpfs/index.html
  – The GPFS Support Portal at http://www-947.ibm.com/support/entry/portal/Overview/Software/Other_Software/General_Parallel_File_System
* The IBM Systems Magazine site at http://www.ibmsystemsmag.com/ (search on GPFS).
* The IBM Redbooks® and Redpapers site at www.redbooks.ibm.com (search on GPFS).

Q1.12: How can I ask a more specific question about GPFS?
A1.12: Depending upon the nature of your question, you may ask it in one of several ways.
* If you want to correspond with IBM regarding GPFS:
  – If your question concerns a potential software error in GPFS and you have an IBM software maintenance contract, please contact 1-800-IBM-SERV in the United States or your local IBM Service Center in other countries. IBM Scholars Program users should notify the GPFS development team of potential software bugs through [email protected].
  – If you have a question that can benefit other GPFS users, you may post it to the GPFS technical discussion forum at www.ibm.com/developerworks/forums/dw_forum.jsp?forum=479&cat=13
  – This FAQ is continually being enhanced. To contribute possible questions or answers, please send them to [email protected]
* If you want to interact with other GPFS users, the San Diego Supercomputer Center maintains a GPFS user mailing list. The list is [email protected] and those interested can subscribe at lists.sdsc.edu/mailman/listinfo/gpfs-general

If your question does not fall into the above categories, you can send a note directly to the GPFS development team at [email protected]. However, this mailing list is informally monitored as time permits and should not be used for priority messages to the GPFS team.

Q1.13: Does GPFS participate in the IBM Academic Initiative Program?
A1.13: GPFS no longer participates in the IBM Academic Initiative Program.

If you are currently using GPFS with an education license from the Academic Initiative, we will continue to support GPFS 3.2 on a best-can-do basis via email for the licenses you have. However, no additional or new licenses of GPFS will be available from the IBM Academic Initiative program. You should work with your IBM client representative on what educational discount may be available for GPFS. See www.ibm.com/planetwide/index.html

Q1.14: Is GPFS available in IBM PartnerWorld?
A1.14: Yes, GPFS for x86 and GPFS for Power are both available in IBM PartnerWorld. Search for "General Parallel File System" in the Software Access catalog at https://www-304.ibm.com/jct01004c/partnerworld/partnertools/eorderweb/ordersw.do


Software questions

Q2.1: What levels of the AIX O/S are supported by GPFS?
A2.1:

Table 3. GPFS for AIX

GPFS        AIX V7.1                      AIX V6.1   AIX V5.3   AIX V5.2
GPFS V3.4   X (GPFS 3.4.0-2, or later)    X          X          -
GPFS V3.3   X (GPFS 3.3.0-10, or later)   X          X          -
GPFS V3.2   X (GPFS 3.2.1-24, or later)   X          X          X

Notes:
1. The following additional filesets are required by GPFS:
   * xlC.aix50.rte (C Set ++ Runtime for AIX 5.0), version 8.0.0.0 or later
   * xlC.rte (C Set ++ Runtime), version 8.0.0.0 or later
   These can be downloaded from Fix Central at http://www.ibm.com/eserver/support/fixes/fixcentral
2. Enhancements to the support of Network File System (NFS) V4 in GPFS are only available on AIX V5.3 systems with the minimum technology level of 5300-04 applied, AIX V6.1, or AIX V7.1.
3. The version of OpenSSL shipped with some versions of AIX V7.1, AIX V6.1 and AIX V5.3 will not work with GPFS due to a change in how the library is built. To obtain the level of OpenSSL which will work with GPFS, see the question How do I get OpenSSL to work on AIX?
4. Service is required for GPFS to work with some levels of AIX; please see the question What are the current advisories for GPFS on AIX?

Q2.2: What Linux distributions are supported by GPFS?
A2.2: GPFS supports the following distributions:

Table 4. Linux distributions supported by GPFS

GPFS for Linux on x86 Architecture
       RHEL 6                         RHEL 5   RHEL 4                         SLES 11   SLES 10   SLES 9
V3.4   X (GPFS V3.4.0-2, or later)    X        X (GPFS V3.4.0-3, or later)    X         X         -
V3.3   X (GPFS V3.3.0-9, or later)    X        X                              X         X         X
V3.2   X (GPFS V3.2.1-24, or later)   X        X                              X         X         X

GPFS for Linux on POWER
       RHEL 6                         RHEL 5   RHEL 4                         SLES 11   SLES 10   SLES 9
V3.4   X (GPFS V3.4.0-2, or later)    X        X (GPFS V3.4.0-3, or later)    X         X         -
V3.3   X (GPFS V3.3.0-9, or later)    X        X                              X         X         X
V3.2   X (GPFS V3.2.1-24, or later)   X        X                              X         X         X

Please also see the questions:
* What are the latest kernel levels that GPFS has been tested with?
* What are the current restrictions on GPFS Linux kernel support?
* Is GPFS on Linux supported in a virtualization environment?
* What are the current advisories for all platforms supported by GPFS?
* What are the current advisories for GPFS on Linux?

Q2.3: What are the latest kernel levels that GPFS has been tested with?
A2.3: While GPFS runs with many different AIX fixes and Linux kernel levels, it is highly suggested that customers apply the latest fix levels and kernel service updates for their operating system. To download the latest GPFS service updates, go to the GPFS page on Fix Central.

Please also see the questions:
* What Linux distributions are supported by GPFS?
* What are the current restrictions on GPFS Linux kernel support?
* Is GPFS on Linux supported in a virtualization environment?
* What are the current advisories for all platforms supported by GPFS?
* What are the current advisories for GPFS on Linux?

Note: GPFS for Linux on Itanium Servers is available only through a special Programming Request for Price Quotation (PRPQ). The install image is not generally available code. It must be requested by an IBM client representative through the RPQ system and approved before order fulfillment. If interested in obtaining this PRPQ, reference PRPQ # P91232 or Product ID 5799-GPS.

Table 5. GPFS for Linux Red Hat support

RHEL Distribution   Latest Kernel Level Tested   Minimum GPFS Level
6.0                 2.6.32-71                    GPFS V3.4.0-2 / V3.3.0-9 / V3.2.1-24
5.6                 2.6.18-238                   GPFS V3.4.0-3 / V3.3.0-1 / V3.2.1-27
5.5                 2.6.18-194                   GPFS V3.4.0-1 / V3.3.0-5 / V3.2.1-20
5.4                 2.6.18-164                   GPFS V3.4.0-1 / V3.3.0-1 / V3.2.1-1
5.3                 2.6.18-128                   GPFS V3.4.0-1 / V3.3.0-1 / V3.2.1-1
5.2                 2.6.18-92.1.10               GPFS V3.4.0-1 / V3.3.0-1 / V3.2.1-1
4.8                 2.6.9-89                     GPFS V3.4.0-3 / V3.3.0-1 / V3.2.1-1
4.7                 2.6.9-78                     GPFS V3.4.0-3 / V3.3.0-1 / V3.2.1-1
4.6                 2.6.9-67.0.7                 GPFS V3.4.0-3 / V3.3.0-1 / V3.2.1-1


Table 6. GPFS for Linux SLES support

SLES Distribution   Latest Kernel Level Tested                 Minimum GPFS Level
SLES 11 SP1         2.6.32.12-0.7.1                            GPFS V3.4.0-1 / V3.3.0-7 / V3.2.1-24
SLES 11             2.6.27.19-5                                GPFS V3.4.0-1 / V3.3.0-1 / V3.2.1-13
SLES 10 SP4         2.6.16.60-0.84.1 (x86_64 and ppc64 only)   GPFS V3.4.0-1 / V3.3.0-1 / V3.2.1-27
SLES 10 SP3         2.6.16.60-0.59.1                           GPFS V3.4.0-1 / V3.3.0-5 / V3.2.1-18
SLES 10 SP2         2.6.16.60-0.27                             GPFS V3.4.0-1 / V3.3.0-1 / V3.2.1-1
SLES 10 SP1         2.6.16.53-0.8                              GPFS V3.4.0-1 / V3.3.0-1 / V3.2.1-1
SLES 10             2.6.16.21-0.25                             GPFS V3.4.0-1 / V3.3.0-1 / V3.2.1-1
SLES 9 SP4          2.6.5-7.312                                GPFS V3.3.0-1 / V3.2.1-1
SLES 9 SP3          2.6.5-7.286                                GPFS V3.3.0-1 / V3.2.1-1

Table 7. GPFS for Linux Itanium support

Distribution   Latest Kernel Level Tested   Minimum GPFS Level
RHEL 4.5       2.6.9-55.0.6                 GPFS V3.4.0-1 / V3.3.0-1 / V3.2.1-1
SLES 10 SP1    2.6.16.53-0.8                GPFS V3.4.0-1 / V3.3.0-1 / V3.2.1-1
SLES 9 SP3     2.6.5-7.286                  GPFS V3.4.0-1 / V3.3.0-1 / V3.2.1-1

Q2.4: What are the current restrictions on GPFS Linux kernel support?
A2.4: Current restrictions on GPFS Linux kernel support include:
* GPFS does not support any Linux environments with SELinux.
* GPFS has experienced memory leak issues with various levels of KSH. In order to address this issue, please ensure that you are at the minimum required level of KSH or later:
  – RHEL 5 should be at ksh-20100202-1.el5_6.3, or later
  – SLES 10 should be at ksh-93t-13.17.19 (shipped in SLES 10.4), or later
  – SLES 11 should be at ksh-93t-9.9.8 (shipped in SLES 11.1), or later
* For certain combinations of GPFS, Linux distribution type, and architecture, the gpfs.base RPM has a dependency that cannot be satisfied by any package included in the Linux distribution. In those cases, the --nodeps rpm command option has to be used during the gpfs.base install (see the sketch after this list):
  – GPFS 3.3 on RHEL 6 (due to the dependency on /usr/bin/ksh)
  – GPFS 3.2 i386 on SLES 10 SP4 (due to the dependency on libstdc++.so.5)

  Note: Core GPFS V3.3 code expects to find ksh under /bin/ksh, but some sample scripts shipped with GPFS V3.3 expect ksh under /usr/bin/ksh. These sample scripts will require a symbolic link from /usr/bin/ksh to /bin/ksh to be created.
* GPFS has the following restrictions on RHEL support:
  – GPFS does not currently support the Transparent Huge Page (THP) feature available in RHEL 6.0. This support should be disabled at boot time by appending transparent_hugepage=never to the kernel boot options (see the sketch after this list).
  – GPFS does not currently support the following kernels:
    - RHEL hugemem
    - RHEL largesmp
    - RHEL uniprocessor (UP)
  – GPFS V3.4.0-2, 3.3.0-9, 3.2.1-24, or later supports RHEL 6.0. When installing the GPFS 3.3 base RPMs on RHEL 6, a symbolic link from /usr/bin/ksh to /bin/ksh is required to satisfy the /usr/bin/ksh dependency.
  – GPFS V3.4.0-3 or later supports RHEL 4.
  – GPFS V3.3.0-5 or later supports RHEL 5.5.
  – GPFS V3.2.1-20 or later supports RHEL 5.5.
  – RHEL 5.0 and later on POWER requires GPFS V3.2.0.2 or later.
  – On RHEL 5.1, the automount option is slow; this issue should be addressed in the 2.6.18-53.1.4 kernel.
  – Red Hat kernel 2.6.18-164.11.1 or later requires a hot fix package for BZ567479. Please contact Red Hat support.
* GPFS has the following restrictions on SLES support:
  – GPFS V3.3.0-7 or later supports SLES 11 SP1.
  – GPFS V3.3.0-5 or later supports SLES 10 SP3.
  – GPFS V3.3 supports SLES 9 SP3 or later.
  – GPFS V3.2.1-24 or later supports SLES 11 SP1.
  – GPFS V3.2.1-13 or later supports SLES 11.
  – GPFS V3.2.1-10 or later supports the SLES 10 SP2 2.6.16.60-0.34-bigsmp i386 kernel.
  – GPFS V3.2.1-18 or later supports SLES 10 SP3.
  – GPFS does not support SLES 10 SP3 on POWER4 machines.
  – The GPFS 3.2 GPL build requires imake. If imake was not installed on the SLES 10 or SLES 11 system, install xorg-x11-devel-*.rpm.
  – There is required service for support of SLES 10. Please see the question What is the current service information for GPFS?
* GPFS for Linux on POWER does not support mounting of a file system with a 16KB block size when running on either RHEL 5 or SLES 11.
* There is service required for Linux kernels 2.6.30 or later, or on RHEL 5.4 (2.6.18-164.11.1.el5). Please see the question What are the current advisories for GPFS on Linux?
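Two of the workarounds above, sketched for a RHEL 6 node (a sketch only: the RPM file name is illustrative, and the boot-loader configuration file differs by distribution and loader):

  # Install gpfs.base when its /usr/bin/ksh dependency cannot be satisfied
  # by any package in the distribution (GPFS 3.3 on RHEL 6):
  rpm -ivh --nodeps gpfs.base-3.3.0-9.x86_64.rpm   # file name is illustrative

  # Create the symbolic link that the GPFS 3.3 sample scripts expect:
  ln -s /bin/ksh /usr/bin/ksh

  # Disable Transparent Huge Pages by appending the option to the kernel
  # boot line, e.g. in a legacy GRUB /boot/grub/grub.conf entry:
  #   kernel /vmlinuz-2.6.32-71.el6.x86_64 ro root=/dev/sda1 transparent_hugepage=never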

Please also see the questions:
* What Linux distributions are supported by GPFS?
* What are the latest kernel levels that GPFS has been tested with?
* Is GPFS on Linux supported in a virtualization environment?
* What are the current advisories for all platforms supported by GPFS?
* What are the current advisories for GPFS on Linux?

Q2.5: Is GPFS on Linux supported in a virtualization environment?
A2.5: You can install GPFS on a virtualization server or on a virtualized guest OS. When running GPFS on a guest OS, the guest must be an OS version that is supported by GPFS and must run as an NSD client. GPFS on Linux is supported in the following virtualization environments installed on the virtualization servers:
1. GPFS V3.2.1-3, V3.3.0-7, and V3.4, or later, support the RHEL Xen kernel for NSD clients only.
2. GPFS V3.2.1-21, V3.3.0-7, and V3.4, or later, support the SLES Xen kernel for NSD clients only.
3. GPFS has been tested with VMware ESX 4.1 for NSD clients only and is supported on all Linux distros that are supported by both VMware and GPFS.
4. GPFS has been tested with guests on RHEL 6.0 KVM hosts for NSD clients only and is supported on all Linux distros that are supported by both the RHEL 6.0 KVM host and GPFS.

Q2.6: What levels of the Windows O/S are supported by GPFS?
A2.6:

Table 8. Windows O/S support

GPFS            Windows Server 2003 R2 x64   Windows Server 2008 x64 (SP 2)   Windows Server 2008 R2
GPFS V3.4       -                            X                                X
GPFS V3.3       -                            X                                X
GPFS V3.2.1-5   X                            -                                -

Also see the questions:
1. What are the limitations of GPFS support for Windows?
2. What are the current advisories for all platforms supported by GPFS?
3. What are the current advisories for GPFS on Windows?


Q2.7: What are the limitations of GPFS support for Windows?
A2.7: Current limitations include:
* GPFS for Windows is not supported in any environment where Citrix Provisioning Services are deployed.
* Though GPFS on Windows can be exported for file sharing via the Common Internet File System (CIFS), its usage for scale-out high-performance file serving is not generally recommended. A GPFS CIFS server configuration that serves more than a handful of clients, or that involves meeting specific performance requirements, must first be approved by IBM. Please contact [email protected] with your request.
* GPFS for Windows does not support a file system feature called Directory Change Notification. This limitation can have adverse effects when GPFS files are exported using Windows file sharing. In detail, the issue relates to the SMB2 protocol used on Windows Vista and later operating systems. Because GPFS does not support Directory Change Notification, the SMB2 redirector cache on the client will not see cache invalidate operations if metadata is changed on the server or on another client. The SMB2 client will continue to see its cached version of the directory contents until the redirector cache expires. Hence, client systems may see an inconsistent view of the GPFS namespace. A workaround for this limitation is to disable the SMB2 protocol on the server. This will ensure that SMB2 is not used even if the client is SMB2 capable. To disable SMB2, follow the instructions under the "MORE INFORMATION" section at http://support.microsoft.com/kb/974103
* In GPFS homogeneous Windows clusters (GPFS V3.4 or later), the Windows nodes can perform most of the management and administrative operations. The exceptions include:
  – Certain GPFS commands to apply policy, administer quotas and ACLs.
  – Support for the native Windows Backup utility.
  Please refer to the GPFS Concepts, Planning and Installation Guide for a full list of limitations.
* The Tivoli Storage Manager (TSM) Backup Archive 6.2 client is only verified to work with GPFS V3.3. See the TSM Client Functional Compatibility Table at http://www-01.ibm.com/support/docview.wss?uid=swg21420322
* There is no migration path from Windows Server 2003 R2 (GPFS V3.2.1-5 or later) to Windows Server 2008 (GPFS V3.3). To move GPFS V3.2.1-5 or later Windows nodes to GPFS V3.3:
  1. Remove all the Windows nodes from your cluster.
  2. Uninstall GPFS 3.2.1-5 from your Windows nodes. This step is not necessary if you are reinstalling Windows Server 2008 from scratch (next step below) and not upgrading from Server 2003 R2.
  3. Install Windows Server 2008 and the required prerequisites on the nodes.
  4. Install GPFS 3.3 on the Windows Server 2008 nodes.
  5. Migrate your AIX and Linux nodes from GPFS 3.2.1-5 or later, to GPFS V3.3.
  6. Add the Windows nodes back to your cluster.
  Note: See the GPFS documentation at http://publib.boulder.ibm.com/infocenter/clresctr/vxrx/index.jsp?topic=/com.ibm.cluster.gpfs.doc/gpfsbooks.html for details on uninstalling, installing and migrating GPFS.
* Windows only supports the DEFAULT and AUTHONLY ciphers.
* A DMAPI-enabled file system may not be mounted on a Windows node.

Q2.8: What are the requirements for the use of OpenSSH on Windows nodes?
A2.8: GPFS uses the SUA Community version of OpenSSH to support its administrative functions when the cluster includes Windows nodes and UNIX nodes. Microsoft does not provide SSH support in the SUA Utilities and SDK, and the remote shell service included with SUA has limitations that make it unsuitable for GPFS. Interop Systems Inc. hosts the SUA Community Web site (http://www.interopsystems.com/community/), which includes a forum and other helpful resources related to SUA and Windows/UNIX interoperability. Interop Systems also provides SUA Add-on Bundles that include OpenSSH (http://www.suacommunity.com/tool_warehouse.aspx) and many other packages; however, IBM recommends installing only the SUA Community packages that your environment requires. The steps below outline a procedure for installing OpenSSH. This information could change at any time. Refer to the Interop Community Forums (http://www.suacommunity.com/forum/default.aspx) for the current and complete installation instructions:
1. Download the Bootstrap Installer (6.0/x64) from Package Install Instructions (http://www.suacommunity.com/pkg_install.htm) and install it on your Windows nodes.
2. From an SUA shell, run pkg_update -L openssh
3. Log on as root and run regpwd from an SUA shell.

Complete the procedure as noted in the GPFS Concepts, Planning, and Installation Guide under the heading "Installing and configuring OpenSSH".

Q2.9: Can different GPFS maintenance levels coexist?
A2.9: Certain levels of GPFS can coexist, that is, be active in the same cluster and simultaneously access the same file system. This allows for rolling upgrades of GPFS nodes within a cluster. Further, it allows the mounting of GPFS file systems from other GPFS clusters that may be running a different maintenance level of GPFS. The current maintenance level coexistence rules are:
* All GPFS V3.4 maintenance levels can coexist with each other and with GPFS V3.3 maintenance levels, unless otherwise stated in this FAQ.
* All GPFS V3.3 maintenance levels can coexist with each other and with GPFS V3.2 maintenance levels, unless otherwise stated in this FAQ.
* All GPFS V3.2 maintenance levels can coexist with each other, unless otherwise stated in this FAQ. See the Migration, coexistence and compatibility information in the GPFS V3.2 Concepts, Planning, and Installation Guide.
  – The default file system version was incremented in GPFS 3.2.1-5. File systems created using GPFS 3.2.1.5 code without using the --version option of the mmcrfs command will not be mountable by earlier code (see the sketch after this list).
  – GPFS V3.2 maintenance levels 3.2.1.2 and 3.2.1.3 have coexistence issues with other maintenance levels. Customers using a mixed maintenance level cluster that have some nodes running 3.2.1.2 or 3.2.1.3 and other nodes running other maintenance levels should uninstall the gpfs.msg.en_US rpm/fileset from the 3.2.1.2 and 3.2.1.3 nodes. This should prevent the wrong message format strings going across the mixed maintenance level nodes.
  – Attention: Do not use the mmrepquota command if there are nodes in the cluster running a mixture of 3.2.0.3 and other maintenance levels. A fix is provided in APAR #IZ16367.
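To illustrate the --version point, a file system can be created at an older format level so that back-level nodes can still mount it. A sketch only; the device name and NSD descriptor file are illustrative, and the exact mmcrfs syntax and accepted version strings should be checked against the documentation for your release:

  # Create file system gpfs1, pinning the on-disk format at the 3.2.1.0
  # level rather than the newer default introduced in GPFS 3.2.1-5:
  mmcrfs /gpfs1 gpfs1 -F /tmp/nsd.desc --version 3.2.1.0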

Q2.10: Are there any requirements for Clustered NFS (CNFS) support in GPFS?
A2.10: GPFS supports Clustered NFS (CNFS) on SLES 11, SLES 10, SLES 9, RHEL 5 and RHEL 4. However, there are limitations:
* NFS v3 exclusive byte-range locking works properly only on clients of:
  – x86-64 with SLES 10 SP2 or later, SLES 11, and RHEL 5.4
  – ppc64 with SLES 11 and RHEL 5.4
* Kernel patches are required for distributions prior to SLES 10 SP2 and RHEL 5.2:
  – If NLM locking is required, until the code is included in the kernel, a kernel patch for lockd must be applied. This patch is currently available at http://sourceforge.net/tracker/?atid=719124&group_id=130828&func=browse
    The required lockd patch is not supported on RHEL 4 ppc64.
  – For SUSE distributions:
    - portmap must be installed on CNFS nodes
    - use of the common NFS utilities (sm-notify in user space) is required. The specific patches required within util-linux are:
      * support statd notification by name (patch-10113): http://support.novell.com/techcenter/psdb/2c7941abcdf7a155ecb86b309245e468.html
      * specify a host name for the -v option (patch-10852): http://support.novell.com/techcenter/psdb/e6a5a6d9614d9475759cc0cd033571e8.html
      * allow selection of IP source address on command line (patch-9617): http://support.novell.com/techcenter/psdb/c11e14914101b2debe30f242448e1f5d.html
  – For Red Hat distributions, use of nfs-utils 1.0.7 is required for rpc.statd fixes. Please contact your Red Hat support representative. Go to https://www.redhat.com/

Table 9. CNFS requirements

Distribution            lockd patch required          sm-notify required              rpc.statd required
SLES 10 SP1 and prior   X                             X                               not required
SLES 9                  X                             X                               not required
RHEL 5.1 and prior      X (not available for ppc64)   included in base distribution   X
RHEL 4                  X (not available for ppc64)   included in base distribution   X

See also What Linux kernel patches are provided for clustered file systems such as GPFS?

Q2.11: Does GPFS support NFS V4?
A2.11: Enhancements to the support of Network File System (NFS) V4 in GPFS are available on:
* AIX V5.3 systems with the minimum technology level of 5300-04 applied, AIX V6.1, or AIX V7.1.
* GPFS V3.3 and V3.4 support NFS V4 on the following Linux distributions:
  – RHEL 5.5
  – RHEL 6.0
  – SLES 11 SP1

Restrictions include:
* Delegations must be disabled if a GPFS file system is exported over Linux/NFSv4 on RHEL 5.2, by running echo 0 > /proc/sys/fs/leases-enable on the RHEL 5.2 node. Other nodes can continue to grant delegations (for NFSv4) and/or oplocks (for CIFS). On all platforms, only read delegations are supported; there is no impact of this on applications.
* GPFS CNFS does not support NFSv4.
* Windows-based NFSv4 clients are not supported with Linux/NFSv4 servers because of their use of share modes.
* If a file system is to be exported over NFSv4/Linux, then it must be configured to support POSIX ACLs (with the -k all or -k posix option); see the sketch after this list. This is because NFSv4/Linux servers will only handle ACLs properly if they are stored in GPFS as POSIX ACLs.
* SLES clients do not support NFSv4 ACLs.
* Concurrent AIX/NFSv4 servers, Samba servers and GPFS Windows nodes in the cluster are allowed. NFSv4 ACLs may be stored in GPFS file systems via Samba exports, NFSv4/AIX servers, GPFS Windows nodes, ACL commands of Linux NFSv3, and ACL commands of GPFS. However, clients of Linux v4 servers will not be able to see these ACLs, just the permission from the mode.
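A sketch of the two adjustments above (the device name fs1 is illustrative):

  # Store ACLs in POSIX form so a Linux/NFSv4 server can serve them correctly:
  mmchfs fs1 -k posix

  # On a RHEL 5.2 node that exports the file system over NFSv4,
  # disable delegations as described above:
  echo 0 > /proc/sys/fs/leases-enable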

Table 10. Readiness of NFSv4 support on different Linux distros with some patches

                     Red Hat 5.5 and 6.0   SLES 11 SP1
Byte-range locking   Yes                   Yes
Read Delegation      Yes                   Yes
ACLs                 Yes                   Yes as a server; No as a client

For more information on the support of NFS V4, please see the GPFS documentation updates file at http://publib.boulder.ibm.com/infocenter/clresctr/vxrx/index.jsp?topic=/com.ibm.cluster.gpfs.doc/gpfsbooks.html


Q2.12: Are there any requirements for the use of the Persistent Reserve support in GPFS?
A2.12: GPFS support for Persistent Reserve on AIX requires:
* For GPFS V3.2 on AIX 5L™ V5.2, APAR IZ00673
* For GPFS V3.2, V3.3, or V3.4 on AIX 5L V5.3, APARs IZ01534, IZ04114, and IZ60972
* For GPFS V3.2, V3.3, or V3.4 on AIX V6.1, APAR IZ57224

Q2.13: Are there any considerations when utilizing the Simple Network Management Protocol (SNMP)-based monitoring capability in GPFS?
A2.13: Considerations for the use of the SNMP-based monitoring capability in GPFS V3.2, V3.3 and V3.4 include:
* The SNMP collector node must be a Linux node in your GPFS cluster. GPFS utilizes Net-SNMP, which is not supported by AIX.
* Support for ppc64 requires the use of Net-SNMP 5.4.1. Binaries for Net-SNMP 5.4.1 on ppc64 are not available; you will need to download the source and build the binary. Go to http://net-snmp.sourceforge.net/download.html
* If the monitored cluster is relatively large, you need to increase the communication time-out between the SNMP master agent and the GPFS SNMP subagent (a sketch follows this list). In this context, a cluster is considered to be large if the number of nodes is greater than 25, or the number of file systems is greater than 15, or the total number of disks in all file systems is greater than 50. For more information, see Configuring Net-SNMP in the GPFS: Advanced Administration Guide.
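On the collector node, the Net-SNMP side of that configuration looks roughly as follows (a sketch under the assumption that snmpd acts as the AgentX master; the 120-second value is illustrative, and the GPFS: Advanced Administration Guide is the authoritative reference):

  # /etc/snmp/snmpd.conf on the Linux collector node
  master agentx        # run snmpd as the AgentX master agent
  agentXTimeout 120    # raise the master/subagent time-out for large clusters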


Machine questions

Q3.1: What are the minimum hardware requirements for a GPFS cluster?
A3.1: The minimum hardware requirements are:

* GPFS for POWER:
  – GPFS for AIX on POWER is supported on the IBM POWER processors supported by your level of AIX, with a minimum of 1 GB of system memory.
  – GPFS for Linux on POWER is supported on IBM POWER3, or higher, processors, with a minimum of 1 GB of system memory.
* GPFS for Linux on x86 Architecture:
  – Intel Pentium 3 or newer processor, with 1 GB of memory
  – AMD Opteron™ processors, with 1 GB of memory
* GPFS for Windows on x86 Architecture:
  – Intel EM64T processors, with 1 GB of memory
  – AMD Opteron processors, with 1 GB of memory
  Note: Due to issues found during testing, GPFS for Windows is not supported on e325 servers.
* GPFS for Linux on Itanium Systems:
  – Intel Itanium 2 processor with 1 GB of memory
  Note: GPFS for Linux on Itanium Servers is available only through a special Programming Request for Price Quotation (PRPQ). The install image is not generally available code. It must be requested by an IBM client representative through the RPQ system and approved before order fulfillment. If interested in obtaining this PRPQ, reference PRPQ # P91232 or Product ID 5799-GPS.

Additionally, it is highly suggested that a sufficiently large amount of swap space is configured. While the actual configuration decisions should be made taking into account the memory requirements of other applications, it is suggested to configure at least as much swap space as there is physical memory on a given node (a quick check follows).
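A quick way to verify that suggestion on a Linux node (a sketch reading /proc/meminfo; values are in kB):

  awk '/^MemTotal:/  { mem  = $2 }
       /^SwapTotal:/ { swap = $2 }
       END { printf "RAM %d kB, swap %d kB: %s\n", mem, swap,
             (swap >= mem) ? "OK" : "consider adding swap" }' /proc/meminfo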

GPFS is supported on systems which are listed in, or compatible with, the IBM hardware specified in the Hardware requirements section of the Sales Manual for GPFS. If you are running GPFS on hardware that is not listed in the Hardware Requirements, should problems arise, and after investigation it is found that the problem may be related to incompatibilities of the hardware, we may require reproduction of the problem on a configuration conforming to the IBM hardware listed in the sales manual.

To access the Sales Manual for GPFS:
1. Go to http://www.ibm.com/common/ssi/index.wss
2. Select your language preference and click Continue.
3. From the Type of content menu, choose HW&SW Desc (Sales Manual, RPQ) and click on the right arrow.
4. To view a GPFS sales manual, choose the corresponding product number to enter in the product number field:
   * For General Parallel File System for POWER, enter 5765-G66
   * For General Parallel File System x86 Architecture, enter the appropriate order number; either 5724-N94 or 5765-XA3
5. Select Software product descriptions.
6. Click on Search.
7. See the Hardware Requirements section, which is part of the Technical Description section.


Q3.2: Is GPFS for POWER supported on IBM System i servers?
A3.2: GPFS for POWER extends all features, functions, and restrictions (such as operating system and scaling support) to IBM System i servers to match their IBM System p counterparts:

Table 11.

IBM System i IBM System p

i-595 p5-595

i-570 p5-570, p6-570

i-550 p5-550

i-520 p5-520

No service updates are required for this additional support.

Q3.3: What machine models has GPFS for Linux been tested with?
A3.3: GPFS has been tested with:

* IBM POWER7 750
* IBM POWER6®:
  – 570
  – 575
  – 595
* IBM eServer p5 (for both the p5-590 and the p5-595, see the question What is the current service information for GPFS?):
  – 510
  – 520
  – 550
  – 570
  – 575
  – 590
  – 595
* IBM eServer OpenPower servers:
  – 710
  – 720
* IBM POWER processor-based blade servers:
  – JS20
  – JS21
  – JS22
* IBM x86 xSeries machine models:
  – 330, 335, 336, 340, 342, 345, 346, 360, 365, 440, x3550, x3650, x3655
* IBM BladeCenter x86 blade servers:
  – HS20, HS21, HS40, LS20, LS21
* IBM BladeCenter Cell/B.E.™ blade servers:
  – QS21, QS22
* IBM AMD processor-based servers:
  – 325, 326
* IBM eServer pSeries® machine models that support Linux
* The IBM eServer Cluster 1300
* The IBM System Cluster 1350

For hardware and software certification, please see the IBM ServerProven site at http://www.ibm.com/servers/eserver/serverproven/compat/us/

Q3.4: On what servers is GPFS supported?
A3.4:
1. GPFS for AIX is supported:
   a. with levels of AIX as listed in the question What levels of the AIX O/S are supported by GPFS?
   b. on servers that meet the minimum hardware model requirements as listed in the question What are the minimum hardware requirements for a GPFS cluster?
2. GPFS for Linux on POWER is supported:
   a. with the distributions and kernel levels as listed in the question What are the latest distributions and kernel levels that GPFS has been tested with?
   b. on servers that meet the minimum hardware model requirements as listed in the question What are the minimum hardware requirements for a GPFS cluster?
3. GPFS for Linux on x86 Architecture is supported on all IBM ServerProven servers:
   a. with the distributions and kernel levels as listed in the question What are the latest distributions and kernel levels that GPFS has been tested with?
   b. that meet the minimum hardware model requirements as listed in the question What are the minimum hardware requirements for a GPFS cluster?
   Please see the IBM ServerProven site at http://www.ibm.com/servers/eserver/serverproven/compat/us/
4. GPFS for Windows on x86 Architecture is supported on all IBM ServerProven servers:
   a. with the levels of Windows Server as listed in the question What levels of the Windows O/S are supported by GPFS?
   b. on servers that meet the minimum hardware model requirements as listed in the question What are the minimum hardware requirements for a GPFS cluster?
   Please see the IBM ServerProven site at http://www.ibm.com/servers/eserver/serverproven/compat/us/


Q3.5: What interconnects are supported for GPFS daemon-to-daemon communication in a GPFS cluster?
A3.5: The interconnect for GPFS daemon-to-daemon communication depends upon the types of nodes in your cluster.

Note: The table below provides the list of communication interconnects which have been tested by IBM and are known to work with GPFS. Other interconnects may work with GPFS, but they have not been tested by IBM. The GPFS support team will help customers who are using interconnects that have not been tested to solve problems directly related to GPFS, but will not be responsible for solving problems deemed to be issues with the underlying communication interconnect's behavior, including any performance issues exhibited on untested interconnects.

Table 12. GPFS daemon-to-daemon communication interconnects

Nodes in your cluster   Supported interconnect   Supported environments
Linux/AIX/Windows       Ethernet                 All supported GPFS environments
                        10-Gigabit Ethernet      All supported GPFS environments
                        InfiniBand               All supported GPFS environments (IP only)
Linux                   Ethernet                 All supported GPFS environments
                        10-Gigabit Ethernet      All supported GPFS environments
                        InfiniBand               GPFS for Linux on x86 Architecture: IP and, optionally, VERBS RDMA (see the question What configuration requirements exist for utilizing Remote Direct Memory Access (RDMA) offered by InfiniBand?). GPFS for Linux on POWER: IP only.
Windows                 Ethernet                 All supported GPFS environments
                        10-Gigabit Ethernet      All supported GPFS environments
                        InfiniBand               All supported GPFS environments (IP only)
AIX                     Ethernet                 All supported GPFS environments
                        10-Gigabit Ethernet      All supported GPFS environments
                        InfiniBand               All supported GPFS environments (IP only)
                        eServer HPS              Homogeneous clusters of either AIX V5.2 (GPFS V3.2), AIX V5.3 (GPFS V3.2 or V3.3), or AIX V6.1 on POWER5 (GPFS V3.2 or V3.3). Note: GPFS V3.3 was the last release to support the High Performance Switch.


Q3.6: Does GPFS support exploitation of the Virtual I/O Server (VIOS) features of POWER processors?
A3.6: Yes, GPFS allows exploitation of POWER VIOS configurations. N_Port ID Virtualization (NPIV), Virtual SCSI (VSCSI), Live Partition Mobility (LPM) and Shared Ethernet Adapter (SEA) are supported in single and multiple Central Electronics Complex (CEC) configurations. This support is limited to GPFS nodes that are using the AIX V6.1 or V5.3 operating system, or a Linux distribution that is supported by both VIOS (see www14.software.ibm.com/webapp/set2/sas/f/vios/documentation/datasheet.html) and GPFS (see What levels of the AIX O/S are supported by GPFS? and What Linux distributions are supported by GPFS?).

Minimum required code levels for this support:
* VIOS Version 2.1.1.0. For GPFS use with lower levels of VIOS, please contact [email protected]
* AIX 5L V5.3 Service Pack 5300-05-01
* AIX V6.1
* SLES 10 for POWER
* RHEL 5 for POWER

There is no GPFS fix level requirement for this support, but it is recommended that you be at the latest GPFS level available. For information on the latest levels, go to the GPFS page on Fix Central.

For further information on POWER VIOS, go to www14.software.ibm.com/webapp/set2/sas/f/vios/documentation/datasheet.html

For VIOS documentation, go to techsupport.services.ibm.com/server/vios/documentation/home.html


Disk questions

Q4.1: What disk hardware has GPFS been tested with?
A4.1: This set of tables displays the set of disk hardware which has been tested by IBM and is known to work with GPFS. GPFS is not limited to using only this set of disk devices, as long as NSD disk leasing is used. Other disk devices may work with GPFS using NSD disk leasing, though they have not been tested by IBM. The GPFS support team will help customers who are using devices outside of this list of tested devices, using NSD disk leasing only, to solve problems directly related to GPFS, but will not be responsible for solving problems deemed to be issues with the underlying device's behavior, including any performance issues exhibited on untested hardware. Untested devices should not be used with GPFS assuming SCSI-3 PR as the fencing mechanism, since our experience has shown that devices cannot, in general, be assumed to support the SCSI-3 Persistent Reserve modes required by GPFS.

It is important to note that:v Each individual disk subsystem requires a specific set of device drivers for proper operation

while attached to a host running GPFS or IBM Recoverable Virtual Shared Disk. Theprerequisite levels of device drivers are not documented in this GPFS-specific FAQ. Refer to thedisk subsystem's web page to determine the currency of the device driver stack for the host'soperating system level and attachment configuration.For information on IBM disk storage subsystems and their related device drivers levels andOperating System support guidelines, go to www.ibm.com/servers/storage/support/disk/index.html

v Microcode levels should be at the latest levels available for your specific disk hardware.For the IBM System Storage®, go to www.ibm.com/servers/storage/support/allproducts/downloading.html

DS4000 customers: Please also see
v The IBM TotalStorage DS4000 Best Practices and Performance Tuning Guide at publib-b.boulder.ibm.com/abstracts/sg246363.html?Open
v For the latest firmware and device driver support for DS4100 and DS4100 Express® Midrange Disk System, go to http://www.ibm.com/systems/support/supportsite.wss/selectproduct?brandind=5000028&familyind=5329597&osind=0&oldbrand=5000028&oldfamily=5345919&oldtype=0&taskind=2&matrix=Y&psid=dm
v For the latest storage subsystem controller firmware support for DS4200, DS4700, DS4800, go to:
– https://www.ibm.com/systems/support/supportsite.wss/docdisplay?lndocid=MIGR-5075581&brandind=5000028
– https://www.ibm.com/systems/support/supportsite.wss/docdisplay?lndocid=MIGR-5073716&brandind=5000028

Table 13. Disk hardware tested with GPFS for AIX on POWER

GPFS for AIX on POWER:

IBM Storwize V7000
AIX 5.3 with GPFS V3.3 and V3.4
AIX 6.1 with GPFS V3.3 and V3.4


IBM XIV® 2810

Minimum Firmware Levels: 10.1, 10.2

This storage subsystem has been tested on:
v AIX 6.1
v AIX 5.3

For more information, directions and recommended settings for attachment, please refer to the latest Host Attach Guide for Linux located at the IBM XIV Storage System Information Center:

http://publib.boulder.ibm.com/infocenter/ibmxiv/r2/index.jsp

IBM System Storage DS3500
Fibre Channel attached storage only
AIX 5.3, 6.1 and 7.1 with GPFS V3.3 and V3.4

IBM System Storage DCS3700
AIX 5.3, 6.1 and 7.1 with GPFS V3.3 and V3.4

IBM System Storage DS6000™ using either Subsystem Device Driver (SDD) or Subsystem Device Driver Path Control Module (SDDPCM)

Configuration considerations: GPFS clusters up to 32 nodes are supported and require a firmware level of R9a.5b050318a or greater. See further requirements below.

IBM System Storage DS8000® using either SDD or SDDPCM

Configuration considerations: GPFS clusters up to 32 nodes are supported and require a firmware level of R10k.9b050406 or greater. See further requirements below.

DS6000 and DS8000 service requirements:
v AIX 5L V5.2 maintenance level 05 (5200-05) - APAR # IY68906, APAR # IY70905
v AIX 5L V5.3 maintenance level 02 (5300-02) - APAR # IY68966, APAR # IY71085
v GPFS for AIX 5L V2.3 - APAR # IY66584, APAR # IY70396, APAR # IY71901

For the Disk Leasing model, install the latest supported version of the SDD or SDDPCM filesets supported on your operating system.

For the Persistent Reserve model, install the latest supported version of the SDDPCM fileset supported for your operating system.

IBM TotalStorage DS4100 (Formerly FAStT 100) with DS4000 EXP100 Storage Expansion Unit with Serial Advanced Technology Attachment (SATA) drives

IBM TotalStorage FAStT500

IBM System Storage DS4200 Express all supported expansion drawer and disk types

IBM System Storage DS4300 (Formerly FAStT 600) with DS4000 EXP710 Fibre Channel (FC) Storage Expansion Unit, DS4000 EXP700 FC Storage Expansion Unit, or EXP100

IBM System Storage DS4300 Turbo with EXP710, EXP700, or EXP100

IBM System Storage DS4400 (Formerly FAStT 700) with EXP710 or EXP700

IBM System Storage DS4500 (Formerly FAStT 900) with EXP710, EXP700, or EXP100


IBM System Storage DS4700 Express all supported expansion drawer and disk types

IBM System Storage DS4800 with EXP710, EXP100 or EXP810

IBM System Storage DS5000 all supported expansion drawer and disk types including SSD
This includes models: DS5100, DS5300 and DS5020 Express.
on AIX V6.1 with a minimum level of TL2 with SP2 and APAR IZ49639
on AIX V5.3 with a minimum level of TL9 with SP2 and APAR IZ52471
Firmware levels: 7.60.28.00, 7.50.13.00, 7.36.17.00

IBM System Storage DS3400 (1726-HC4)

IBM TotalStorage ESS (2105-F20 or 2105-800 with SDD)

IBM TotalStorage ESS (2105-F20 or 2105-800 using AIX 5L Multi-Path I/O (MPIO) and SDDPCM)

IBM System Storage Storage Area Network (SAN) Volume Controller (SVC) V2.1 and V3.1

The following APAR numbers are suggested:
v IY64709 - Applies to all GPFS clusters
v IY64259 - Applies only when running GPFS in an AIX V5.2 or V5.3 environment with RVSD 4.1
v IY42355 - Applies only when running GPFS in a PSSP V3.5 environment
v SVC V2.1.0.1 is supported with AIX 5L V5.2 (Maintenance Level 05) and AIX 5L V5.3 (Maintenance Level 01).

See www.ibm.com/support/docview.wss?rs=591&uid=ssg1S1002471 for specific advice on SAN Volume Controller recommended software levels.

Hitachi Lightning 9900™ (9910, 9960, 9970 and 9980)
Hitachi Universal Storage Platform 100/600/1100

Notes:
1. In all cases Hitachi Dynamic Link Manager™ (HDLM) (multipath software) or MPIO (default PCM - failover only) is required.
2. AIX ODM objects supplied by Hitachi Data Systems (HDS) are required for all above devices.
3. Customers should consult with HDS to verify that their proposed combination of the above components is supported by HDS.

EMC Symmetrix DMX Storage Subsystems (FC attach only)

Selected models of the CX/CX-3 family including CX300, CX400, CX500, CX600, CX700 and CX3-20, CX3-40 and CX3-80

Device driver support for Symmetrix includes both MPIO and PowerPath.
Note: CX/CX-3 requires PowerPath.

Customers should consult with EMC to verify that their proposed combination of the above components is supported by EMC.

HP XP 128/1024, XP10000/12000
HP StorageWorks Enterprise Virtual Arrays (EVA) 4000/6000/8000 and 3000/5000 models that have been upgraded to active-active configurations
Note: HDLM multipath software is required


IBM DCS9550 (either FC or SATA drives)
FC attach only
minimum firmware 3.08b
must use IBM supplied ODM objects at level 1.7 or greater

For more information on the DCS9550 go to http://www.datadirectnet.com/dcs9550/

IBM DCS9900 (either FC or SATA drives)
FC attach only

For more information on the DCS9900 go to http://www.datadirectnet.com/dcs9900/

Table 14. Disk hardware tested with GPFS for Linux on x86 xSeries servers

GPFS for Linux on xSeries servers:

IBM Storwize V7000
RHEL 5.1 with GPFS V3.2
RHEL 5.2 with GPFS V3.2

IBM XIV 2810

Minimum Firmware Level: 10.0.1

This storage subsystem has been tested on:
v RHEL 5.1 and greater
v SLES 10.2

For more information, directions and recommended settings for attachment, please refer to the latest Host Attach Guide for Linux located at the IBM XIV Storage System Information Center:

http://publib.boulder.ibm.com/infocenter/ibmxiv/r2/index.jsp

IBM System Storage DS3500
RHEL 6.x, 5.x and 4.8 with GPFS V3.3 and V3.4
SLES 11, 10 with GPFS V3.3 and V3.4

IBM System Storage DCS3700
RHEL 6.0, 5.6 and 5.5 with GPFS V3.3 and V3.4
SLES 11.1, 10.4 and 10.3 with GPFS V3.3 and V3.4

IBM System Storage DS5000 all supported expansion drawer and disk types including SSD
This includes models: DS5100, DS5300 and DS5020 Express.
Firmware levels: 7.60.28.00, 7.50.13.00, 7.36.17.00

IBM TotalStorage FAStT 200 Storage Server

IBM TotalStorage FAStT 500

IBM TotalStorage DS4100 (Formerly FAStT 100) with EXP100

IBM System Storage DS4200 Express all supported expansion drawer and disk types

IBM System Storage DS4300 (Formerly FAStT 600) with EXP710, EXP700, or EXP100

IBM System Storage DS4300 Turbo with EXP710, EXP700, or EXP100


IBM System Storage DS4400 (Formerly FAStT 700) with EXP710 or EXP700

IBM System Storage DS4500 (Formerly FAStT 900) with EXP710, EXP700, or EXP100

IBM System Storage DS4700 Express all supported expansion drawer and disk types

IBM System Storage DS4800 with EXP710, EXP100 or EXP810

IBM System Storage DS3400 (1726-HC4)

IBM TotalStorage Enterprise Storage Server® (ESS) models 2105-F20 and 2105-800, with Subsystem Device Driver (SDD)

EMC Symmetrix Direct Matrix Architecture (DMX) Storage Subsystems 1000 with PowerPath v3.06 and v3.07

IBM System Storage Storage Area Network (SAN) Volume Controller (SVC) V2.1 and V3.1

See www.ibm.com/support/docview.wss?rs=591&uid=ssg1S1002471 for specific advice on SAN Volume Controller recommended software levels.

IBM DCS9550 (either FC or SATA drives)
FC attach only
minimum firmware 3.08b
QLogic drivers at 8.01.07 or newer and IBM SAN Surfer V5.0.0 or newer
http://support.qlogic.com/support/oem_detail_all.asp?oemid=376

For more information on the DCS9550 go to http://www.datadirectnet.com/dcs9550/

IBM DCS9900 (either FC or SATA drives)
FC attach only

For more information on the DCS9900 go to http://www.datadirectnet.com/dcs9900/

Table 15. Disk hardware tested with GPFS for Linux on POWER

GPFS for Linux on POWER:

IBM System Storage DS3500
RHEL 6.x, 5.x and 4.8 with GPFS V3.3 and V3.4
SLES 11, 10 with GPFS V3.3 and V3.4

IBM System Storage DCS3700
RHEL 6.0, 5.6 and 5.5 with GPFS V3.3 and V3.4
SLES 11.1, 10.4 and 10.3 with GPFS V3.3 and V3.4

IBM System Storage DS4200 Express all supported expansion drawer and disk types

IBM System Storage DS4300 (Formerly FAStT 600) all supported drawer and disk types

IBM System Storage DS4500 (Formerly FAStT 900) all supported expansion drawer and disk types

IBM System Storage DS4700 Express all supported expansion drawer and disk types

IBM System Storage DS4800 all supported expansion drawer and disk types


IBM System Storage DS5000 all supported expansion drawer and disk types including SSD
This includes models: DS5100, DS5300 and DS5020 Express.
Firmware levels: 7.60.28.00, 7.50.13.00, 7.36.17.00

IBM System Storage DS8000 using SDD

Table 16. Disk hardware tested with GPFS for Linux on AMD processor-based servers

GPFS for Linux on eServer AMD processor-based servers: No devices tested specially in this environment.

Q4.2: What Fibre Channel Switches are qualified for GPFS usage and is there a FC Switch support chart available?

A4.2: There are no special requirements for FC switches used by GPFS other than that the switch must be supported by AIX or Linux. For further information see www.storage.ibm.com/ibmsan/index.html

Q4.3: Can I concurrently access SAN-attached disks from both AIX and Linux nodes in my GPFS cluster?

A4.3: The architecture of GPFS allows both AIX and Linux hosts to concurrently access the same set of LUNs. However, before this is implemented in a GPFS cluster, you must ensure that the disk subsystem being used supports both AIX and Linux concurrently accessing LUNs. While the GPFS architecture allows this, the underlying disk subsystem may not, and in that case a configuration attempting it would not be supported.

Q4.4: What disk failover models does GPFS support for the IBM System Storage DS4000 family of storage controllers with the Linux operating system?

A4.4: GPFS has been tested with both the Host Bus Adapter Failover and Redundant Dual Active Controller (RDAC) device drivers.

To download the current device drivers for your disk subsystem, please go to http://www.ibm.com/servers/storage/support/

Q4.5: What devices have been tested with SCSI-3 Persistent Reservations?
A4.5: The following devices have been tested with SCSI-3 Persistent Reservations:
v DS5000 using SDDPCM or the default AIX PCM on AIX.
v DS8000 (all 2105 and 2107 models) using SDDPCM or the default AIX PCM on AIX.
Users of SDDPCM will need to contact SDDPCM support for temporary fixes:
– SDDPCM v2209 - temporary fix for SDDPCM v 220x
– SDDPCM v2404 - temporary fix for SDDPCM v 240x
v DS4000 subsystems using the IBM RDAC driver on AIX (devices.fcp.disk.array.rte).

The most recent versions of the device drivers are always recommended to avoid problems that have been addressed.

Note: For a device to properly offer SCSI-3 Persistent Reservation support for GPFS, it must support SCSI-3 PERSISTENT RESERVE IN with a service action of REPORT CAPABILITIES. The REPORT CAPABILITIES must indicate support for a reservation type of Write Exclusive All Registrants. Contact the disk vendor to determine these capabilities.
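For example, on Linux nodes with the sg3_utils package installed, the reservation capabilities of a disk can be queried with a command along these lines (a sketch; /dev/sdX is a placeholder for your device, and output formats vary by sg3_utils version):

# issue PERSISTENT RESERVE IN with the REPORT CAPABILITIES service action
sg_persist --in --report-capabilities /dev/sdX

In the output, look for an indication that the Write Exclusive All Registrants reservation type is supported; if it is absent, confirm the device's capabilities with the disk vendor.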


Scaling questions
Q5.1: What are the GPFS cluster size limits?
A5.1: The current maximum tested GPFS cluster size limits are:

Table 17. GPFS maximum tested cluster sizes

GPFS for Linux on x86 Architecture 3794 nodes

GPFS for AIX 1530 nodes

GPFS for Windows on x86 Architecture 64 Windows nodes

GPFS for Linux on x86 Architecture and GPFS for AIX 3906 (3794 Linux nodes and 112 AIX nodes)

Notes:1. Contact [email protected] if you intend to exceed

v Configurations with Linux nodes exceeding 512 nodesv Configurations with AIX nodes exceeding 128 nodesv Configurations with Windows nodes exceeding 64 nodes

Although GPFS is typically targeted for a cluster with multiple nodes, it can also provide high performance benefit for a single node, so there is no lower limit. However, there are two points to consider:
v GPFS is a well-proven, scalable cluster file system. For a given I/O configuration, typically multiple nodes are required to saturate the aggregate file system performance capability. If the aggregate performance of the I/O subsystem is the bottleneck, then GPFS can help achieve the aggregate performance even on a single node.
v GPFS is a highly available file system. Therefore, customers who are interested in single-node GPFS often end up deploying a multi-node GPFS cluster to ensure availability.1

Q5.2: What is the current limit on the number of nodes that may concurrently join a cluster?
A5.2: The total number of nodes that may concurrently join a cluster is limited to a maximum of 8192 nodes.

A node joins a given cluster if it is:
v A member of the local GPFS cluster (the mmlscluster command output displays the local cluster nodes).
v A node in a different GPFS cluster that is mounting a file system from the local cluster.

For example:
v GPFS clusterA has 2100 member nodes as listed in the mmlscluster command.
v 500 nodes from clusterB are mounting a file system owned by clusterA.

In this example clusterA therefore has 2600 concurrent nodes.

Q5.3: What are the current file system size limits?
A5.3: The current file system size limits are:

Table 18. Current file system size limits

GPFS 2.3 or later, file system architectural limit 2^99 bytes

GPFS 2.2 file system architectural limit 2^51 bytes (2 Petabytes)

Current tested limit approximately 4 PB

Q5.4: What is the current limit on the number of mounted file systems in a GPFS cluster?
A5.4: The current limit on the number of mounted file systems in a GPFS cluster is 256.

1. GPFS Sequential Input/Output Performance on IBM pSeries 690, by Gautam Shah and James Wang, available at http://www.redbooks.ibm.com/redpapers/pdfs/redp3945.pdf


Q5.5: What is the architectural limit of the number of files in a file system?
A5.5: The architectural limit of the number of files in a file system is determined by the file system format:
v For file systems created with GPFS V3.4 or later, the architectural limit is 2^64. The current tested limit is 4,000,000,000.
v For file systems created with GPFS V2.3 or later, the limit is 2,147,483,648.
v For file systems created prior to GPFS V2.3, the limit is 268,435,456.

Please note that the effective limit on the number of files in a file system is usually lower than the architectural limit, and could be adjusted using the mmchfs command (GPFS V3.4 and later use the --inode-limit option; GPFS V3.3 and lower use the -F option). An example follows.
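For example, assuming a hypothetical file system device named fs1, the maximum number of inodes could be raised as follows (the value 2000000 is illustrative; choose one based on your expected file count and available metadata space):

# GPFS V3.4 and later
mmchfs fs1 --inode-limit 2000000
# GPFS V3.3 and lower
mmchfs fs1 -F 2000000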

Q5.6: What are the limitations on GPFS disk size?
A5.6: The maximum disk size supported by GPFS depends on the file system format and the underlying device support. For file systems created prior to GPFS version 2.3, the maximum disk size is 1 TB due to internal GPFS file system format limitations. For file systems created with GPFS 2.3 or later, these limitations have been removed, and the maximum disk size is only limited by the OS kernel and device driver support:

Table 19. Maximum disk size supported

OS kernel Maximum supported GPFS disk size

AIX, 64-bit kernel >2TB, up to the device driver limit

AIX, 32-bit kernel 1TB

Linux 2.6 64-bit kernels >2TB, up to the device driver limit

Linux 32-bit kernels (built without CONFIG_LBD) 2TB

Notes:
1. The above limits are only applicable to nodes that access disk devices through a local block device interface, as opposed to the NSD protocol. For NSD clients, the maximum disk size is only limited by the NSD server large disk support capability, irrespective of the kernel running on an NSD client node.
2. The basic reason for the significance of the 2TB disk size barrier is that this is the maximum disk size that can be addressed using 32-bit sector numbers and a 512-byte sector size. A larger disk can be addressed either by using 64-bit sector numbers or by using a larger sector size. GPFS uses 64-bit sector numbers to implement large disk support. Disk sector sizes other than 512 bytes are unsupported.
3. GPFS for Windows versions prior to GPFS V3.4 can only operate as NSD clients, and as such do not support direct attached disks. For direct attached disks you must be at a minimum level of GPFS V3.4.


Q5.7: What is the limit on the maximum number of groups a user can be a member of when accessing a GPFS file system?

A5.7: Each user may be a member of one or more groups, and the list of group IDs (GIDs) that the current user belongs to is a part of the process environment. This list is used when performing access checking during I/O operations. Due to architectural constraints, GPFS code does not access the GID list directly from the process environment (kernel memory); instead it makes a copy of the list, and imposes a limit on the maximum number of GIDs that may be smaller than the corresponding limit in the host operating system. The maximum number of GIDs supported by GPFS depends on the platform and the version of GPFS code. Note that the GID list includes the user's primary group and supplemental groups.

Table 20. Maximum number of GIDs supported

Platform Maximum number of GIDs supported

AIX                                                                       128
Linux with 4K page size (all supported platforms except the two below)   1,020 (see note 1)
Linux with 16K page size (IA64 platform)                                  4,092 (see note 1)
Linux with 64K page size (PPC64/RHEL5 platform)                           16,380 (see note 1)
Windows                                                                   Windows OS limit (no limit in GPFS code)

1. The maximum number of GIDs supported on all Linux platforms in all versions of GPFS prior to 3.2.1.12 is 32. Starting with GPFS 3.2.1.12, the maximum number of GIDs supported has been increased, and the semantics of the GID limit check have been changed: the current code will fail any I/O request made by a process exceeding the limit (in prior versions, access would be allowed, but only the first 32 GIDs from the list would be used during access checks).


Configuration and tuning questions
Please also see the questions:
v What are the current advisories for all platforms supported by GPFS?
v What are the current advisories for GPFS on AIX?
v What are the current advisories for GPFS on Linux?
v What are the current advisories for GPFS on Windows?

Q6.1: What specific configuration and performance tuning suggestions are there?
A6.1: In addition to the configuration and performance tuning suggestions in the GPFS: Concepts, Planning, and Installation Guide for your version of GPFS:
v If your GPFS cluster is configured to use SSH/SCP, it is suggested that you increase the value of MaxStartups in sshd_config to at least 1024 (a combined example follows this list).
v You must ensure that when you are designating nodes for use by GPFS you specify a non-aliased interface. Utilization of aliased interfaces may produce undesired results. When creating or adding nodes to your cluster, the specified hostname or IP address must refer to the communications adapter over which the GPFS daemons communicate. When specifying servers for your NSDs, the output of the mmlscluster command lists the hostname and IP address combinations recognized by GPFS. Utilizing an aliased hostname not listed in the mmlscluster command output may produce undesired results.
v If your system consists of the eServer pSeries High Performance Switch, it is suggested that you configure GPFS over the ml0 IP network interface.
v On systems running with the Linux 2.6 kernel, it is recommended you adjust the vm.min_free_kbytes kernel tunable. This tunable controls the amount of free memory that the Linux kernel keeps available (i.e. not used in any kernel caches). When vm.min_free_kbytes is set to its default value, on some configurations it is possible to encounter memory exhaustion symptoms when free memory should in fact be available. Setting vm.min_free_kbytes to a higher value (the Linux sysctl utility could be used for this purpose), on the order of magnitude of 5-6% of the total amount of physical memory, should help to avoid such a situation (see the example following this list).
Also, please see the GPFS Redpapers:
– GPFS Sequential Input/Output Performance on IBM pSeries 690 at www.redbooks.ibm.com/redpapers/pdfs/redp3945.pdf
– Native GPFS Benchmarks in an Integrated p690/AIX and x335/Linux Environment at www.redbooks.ibm.com/redpapers/pdfs/redp3962.pdf
– Sequential I/O performance of GPFS on HS20 blades and IBM System Storage DS4800 at ftp://ftp.software.ibm.com/common/ssi/rep_wh/n/CLW03002USEN/CLW03002USEN.PDF
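As a minimal sketch of the first and last suggestions above (the values shown are illustrative starting points, not tuned recommendations; the memory figure assumes a node with 8 GB of RAM):

# /etc/ssh/sshd_config: allow enough concurrent unauthenticated connections
MaxStartups 1024

# Linux 2.6 kernel: keep roughly 5% of physical memory free
sysctl -w vm.min_free_kbytes=419430
# to persist across reboots, add to /etc/sysctl.conf:
#   vm.min_free_kbytes = 419430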

Q6.2: What configuration and performance tuning suggestions are there for GPFS when used primarily for Oracle databases?
A6.2:

Notes:
1. Only a subset of GPFS releases are certified for use in Oracle environments. For the latest status of GPFS certification:
v For AIX, go to http://www.oracle.com/technetwork/database/clustering/tech-generic-unix-new-166583.html
v For Linux, go to http://www.oracle.com/technetwork/database/clustering/tech-generic-linux-new-086754.html
2. For the list of virtualization and partitioning technologies supported by Oracle, go to http://www.oracle.com/technetwork/database/virtualizationmatrix-172995.html

In addition to the performance tuning suggestions in the GPFS: Concepts, Planning, and Installation Guide for your version of GPFS:
v When running Oracle RAC 10g, it is suggested you increase the value for OPROCD_DEFAULT_MARGIN to at least 500 to avoid possible random reboots of nodes.
In the control script for the Oracle CSS daemon, located in /etc/init.cssd, the value for OPROCD_DEFAULT_MARGIN is set to 500 (milliseconds) on all UNIX derivatives except for


AIX. For AIX this value is set to 100. From a GPFS perspective, even 500 milliseconds may be too low in situations where node failover may take up to a minute or two to resolve. However, if during node failure the surviving node is already doing direct IO to the oprocd control file, it should have the necessary tokens and indirect block cached and should therefore not have to wait during failover.

v Using the IBM General Parallel File System is attractive for RAC environments because executables, trace files and archive log files are accessible on all nodes. However, care must be taken to properly configure the system in order to prevent false node evictions, and to maintain the ability to perform rolling upgrades of the Oracle software. Without proper configuration, GPFS recovery from a node failure can interfere with cluster management operations, resulting in additional node failures.
If you are running GPFS and Oracle RAC 10gR2 and encounter false node evictions:
– Upgrade the CRS to 10.2.0.3 or newer.
– The Oracle 10g Clusterware (CRS) executables or logs (the CRS_HOME) should be placed on a local JFS2 filesystem. Using GPFS for the CRS_HOME can inhibit CRS functionality on the surviving nodes while GPFS is recovering from a failed node for the following reasons:
- In Oracle 10gR2, up to and including 10.2.0.3, critical CRS daemon executables are not pinned in memory. Oracle and IBM are working to improve this in future releases of 10gR2.
- Delays in updating the CRS log and authorization files while GPFS is recovering can interfere with CRS operations.
- Due to an Oracle 10g limitation, rolling upgrades of the CRS are not possible when the CRS_HOME is on a shared filesystem.
– CSS voting disks and the Oracle Clusterware Registry (OCR) should not be placed on GPFS, as the IO freeze during GPFS reconfiguration can lead to node eviction, and the inability of CRS to function. Place the OCR and Voting disk on shared raw devices (hdisks).
– Oracle Database 10g (RDBMS) executables are supported on GPFS for Oracle RAC 10g. However, the system should be configured to support multiple ORACLE_HOMEs so as to maintain the ability to perform rolling patch application. Rolling patch application is supported for the ORACLE_HOME starting in Oracle RAC 10.2.0.3.
– Oracle Database 10g data files, trace files, and archive log files are supported on GPFS.

See also:
v Deploying Oracle 10g RAC on AIX V5 with GPFS at http://www.redbooks.ibm.com/redbooks/pdfs/sg247541.pdf
v Deploying Oracle9i RAC on eServer Cluster 1600 with GPFS at http://www.redbooks.ibm.com/abstracts/sg246954.html?Open
v An Oracle 9i RAC Implementation over GPFS at http://www.redbooks.ibm.com/abstracts/tips0263.html?Open

Q6.3: Are there any considerations when utilizing the Remote Direct Memory Access (RDMA) offered by InfiniBand?

A6.3: GPFS for Linux on x86 Architecture supports InfiniBand RDMA in the following configurations (a configuration sketch follows the list below):

Notes:
1. Ensure you are at the latest firmware level for both your switch and adapter.
2. See the question What are the current advisories for GPFS on Linux?

v SLES 10 or SLES 11 and RHEL 5 or RHEL 6, x86_64
v OFED Infiniband Stack VERBS API – GEN 2
– OFED 1.2, 1.2.5, 1.3, 1.4, 1.5, and 1.5.2
– OFED 1.1 – Voltaire Gridstack only
v Mellanox based adapters
– RDMA over multiple HCAs/Ports/QPs
– For multiple ports - GPFS balances load across ports
v Single IB subnet
– QPs connected via GPFS RPC


v RDMA support for Mellanox memfree adapters requires GPFS V3.2.0.2, or later, to operate correctly
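As a configuration sketch (verbsRdma and verbsPorts are the relevant mmchconfig parameters; the port name mlx4_0/1 is a placeholder for your actual HCA device and port, and the exact syntax should be verified against the documentation for your GPFS level):

# enable the VERBS RDMA transport for GPFS daemon communication
mmchconfig verbsRdma=enable
# specify which HCA device(s) and port(s) GPFS should use
mmchconfig verbsPorts="mlx4_0/1"
# verify the resulting configuration
mmlsconfig | grep verbs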

Q6.4: What Linux configuration settings are required when NFS exporting a GPFS filesystem?
A6.4: If you are running at SLES 9 SP1, the kernel defines the sysctl variable fs.nfs.use_underlying_lock_ops, which determines whether the NFS lockd is to consult the file system when granting advisory byte-range locks. For distributed file systems like GPFS, this must be set to true (the default is false).

You can query the current setting by issuing the command:
sysctl fs.nfs.use_underlying_lock_ops

Alternatively, the record fs.nfs.use_underlying_lock_ops = 1 may be added to /etc/sysctl.conf. This record must be applied after initially booting the node, and after each reboot, by issuing the command:
sysctl -p

A combined example appears at the end of this answer.

As the fs.nfs.use_underlying_lock_ops variable is currently not available in SLES 9 SP2 or later, when NFS exporting a GPFS file system, ensure your NFS server nodes are at the SP1 level (until such time as the variable is made available in later service packs).

For additional considerations when NFS exporting your GPFS file system, see the:
v GPFS: Administration Guide chapter on Managing GPFS access control lists and NFS export
v GPFS: Concepts, Planning, and Installation Guide chapter Planning for GPFS on File system creation considerations.
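A minimal combined sketch for a SLES 9 SP1 NFS server node:

# query the current setting
sysctl fs.nfs.use_underlying_lock_ops
# enable it for the running kernel
sysctl -w fs.nfs.use_underlying_lock_ops=1
# persist it: add the line below to /etc/sysctl.conf, then run sysctl -p
#   fs.nfs.use_underlying_lock_ops = 1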

Q6.5: Sometimes GPFS appears to be handling a heavy I/O load, for no apparent reason. What could be causing this?
A6.5: On some Linux distributions the system is configured by default to run the file system indexing utility updatedb through the cron daemon on a periodic basis (usually daily). This utility traverses the file hierarchy and generates a rather extensive amount of I/O load. For this reason, it is configured by default to skip certain file system types and nonessential file systems. However, the default configuration does not prevent updatedb from traversing GPFS file systems.

In a cluster this results in multiple instances of updatedb traversing the same GPFS file system simultaneously. This causes general file system activity and lock contention in proportion to the number of nodes in the cluster. On smaller clusters, this may result in a relatively short-lived spike of activity, while on larger clusters, depending on the overall system throughput capability, the period of heavy load may last longer. Usually the file system manager node will be the busiest, and GPFS would appear sluggish on all nodes. Reconfiguring the system to either make updatedb skip all GPFS file systems or only index GPFS files on one node in the cluster is necessary to avoid this problem.
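As an illustrative sketch (the configuration file location and variable names vary by distribution; /etc/updatedb.conf with the PRUNEFS and PRUNEPATHS variables is common on Linux), GPFS file systems can be excluded from indexing as follows:

# /etc/updatedb.conf: add gpfs to the file system types updatedb skips
PRUNEFS="gpfs nfs proc sysfs"
# alternatively, skip the mount point itself (here, a hypothetical /gpfs)
PRUNEPATHS="/tmp /var/spool /gpfs"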

Q6.6: What considerations are there when using IBM Tivoli Storage Manager with GPFS?
A6.6: Considerations when using Tivoli Storage Manager (TSM) with GPFS include:
v When using TSM with GPFS, please verify the supported environments:
– IBM Tivoli Storage Manager Requirements for IBM AIX Client at http://www-01.ibm.com/support/docview.wss?uid=swg21243309#client_aixpart
– IBM Tivoli Storage Manager Linux x86 Client Requirements at http://www-01.ibm.com/support/docview.wss?uid=swg21243309#client_x86linux
– To search TSM support information, go to www.ibm.com/software/sysmgmt/products/support/IBMTivoliStorageManager.html and enter GPFS as the search term
v Quota limits are not enforced when files are recalled from the backup using TSM. This is because dsmrecall is invoked by the root user, who has no allocation restrictions according to the UNIX semantics.


Q6.7: How do I get OpenSSL to work on AIX with GPFS?

A6.7: To help enhance the security of mounts using Secure Sockets Layer (SSL), a working version of OpenSSL must be installed. This version must be compiled with support for the Secure Hash Algorithm (SHA).
v On AIX V5.2 or later, the supported versions of OpenSSL libraries are available at https://www14.software.ibm.com/webapp/iwm/web/preLogin.do?source=aixbp.
v The minimum supported versions of openssl.base are 0.9.8.411 and 0.9.8.601 (versions 0.9.8.40 and 0.9.8.41 are known not to work with GPFS). Additionally,
1. GPFS configuration needs to be changed to point at the right set of libraries:
– On 64-bit kernel:
mmchconfig openssllibname="/usr/lib/libssl.a(libssl64.so.0.9.8)" -N AffectedNodes
– On 32-bit kernel:
mmchconfig openssllibname="/usr/lib/libssl.a(libssl.so.0.9.8)" -N AffectedNodes

On AIX V5.1, OpenSSL 0.9.7d-2, or later, as distributed by IBM in the AIX Toolbox for Linux Applications, is supported. To download OpenSSL from the AIX Toolbox for Linux Applications:
1. Go to http://www-03.ibm.com/systems/p/os/aix/linux/toolbox/download.html
2. Under Sorted download, click on AIX Toolbox Cryptographic Content.
3. Either register for an IBM ID or sign in.
4. To view the license agreement, click on View license.
5. By clicking I agree you agree that you have had the opportunity to review the terms and conditions and that such terms and conditions govern this transaction.
6. Scroll down to OpenSSL -- SSL Cryptographic Libraries.
7. Ensure you download 0.9.7d-2 or later.

Q6.8: What ciphers are supported for use by GPFS?
A6.8: You can specify any of the RSA based ciphers that are supported by the OpenSSL version installed on the node. Refer to the ciphers(1) man page for a list of the valid cipher strings and their meaning. Use the openssl ciphers command to display the list of available ciphers:
openssl ciphers RSA

In addition, GPFS supports the keywords DEFAULT and AUTHONLY. When AUTHONLY is specified in place of a cipher list, GPFS checks network connection authorization. However, data sent over the connection is not protected. When DEFAULT is specified, GPFS does not authenticate or check authorization for network connections. GPFS on Windows only supports the keywords DEFAULT and AUTHONLY.

Note: When different versions of OpenSSL are used within a cluster or in a multi-cluster setup, ensure that the ciphers are supported by all versions.
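For example, the cipher used for cluster communication is set through the cipherList configuration attribute (a sketch; AES128-SHA is one RSA-based OpenSSL cipher name, and the authoritative syntax is in the mmchconfig and mmauth documentation for your GPFS level):

# authenticate connections without encrypting the data sent over them
mmchconfig cipherList=AUTHONLY
# or select a specific RSA-based cipher supported by the installed OpenSSL
mmchconfig cipherList=AES128-SHA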


Q6.9: When I allow other clusters to mount my file systems, is there a way to restrict access permissions for the root user?

A6.9: Yes. A root squash option is available when making a file system available for mounting by other clusters using the mmauth command. This option is similar to the NFS root squash option. When enabled, it causes GPFS to squash superuser authority on accesses to the affected file system on nodes in remote clusters.

This is accomplished by remapping the credentials, user ID (UID) and group ID (GID), of the root user to a UID and GID specified by the system administrator on the home cluster, for example, the UID and GID of the user nobody. In effect, root squashing makes the root user on remote nodes access the file system as a non-privileged user.

Although enabling root squash is similar in spirit to setting up UID remapping (see www.ibm.com/servers/eserver/clusters/whitepapers/uid_gpfs.html), there are two important differences:
1. While enabling UID remapping on remote nodes is an option available to the remote system administrator, root squashing need only be enabled on the local cluster, and it will be enforced on remote nodes.
2. While UID remapping requires having an external infrastructure for mapping between local names and globally unique names, no such infrastructure is necessary for enabling root squashing.

When both UID remapping and root squashing are enabled, root squashing overrides the normal UID remapping mechanism for the root user. See the mmauth command man page for further details.
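As a sketch of enabling root squash on the home cluster (remoteCluster, fs1 and the UID/GID value 99 are placeholders; verify the option syntax against the mmauth man page for your GPFS level):

# grant remoteCluster access to fs1, remapping root to UID 99 / GID 99
mmauth grant remoteCluster -f fs1 -a rw -r 99:99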

Q6.10: How do I determine the maximum size of the extended attributes allowed in my file system?

A6.10: As of GPFS 3.4, the space allowed for extended attributes for each file was increased and the performance of getting and setting the extended attributes was improved. To determine which version of extended attributes your file system uses, issue the mmlsfs --fastea command. If the new fast external extended attributes are enabled, yes will be displayed in the command output. In this case, the total space for user-specified extended attributes has a limit of 50K out of 64K and the size of each extended attribute has a limit of 16K; otherwise, the total space limit is 8K out of 16K and the size of each extended attribute has a limit of 1022 bytes.

For additional information, please see the GPFS V3.4 Advanced Administration Guide and Administration Guide.
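For example, where fs1 is a placeholder device name:

# check whether fast external extended attributes are enabled
mmlsfs fs1 --fastea

If the command reports yes, the larger limits described above (50K total, 16K per attribute) apply; otherwise the 8K/1022-byte limits apply.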


Service questions
Q7.1: What support services are available for GPFS?
A7.1: Support services for GPFS include:
v GPFS forums
– The GPFS technical discussion forum at www.ibm.com/developerworks/forums/dw_forum.jsp?forum=479&cat=13 will help answer your questions on installing and running GPFS.
– For the latest announcements and news regarding GPFS, please subscribe to the GPFS Announce Forum at http://www.ibm.com/developerworks/forums/forum.jspa?forumID=1606

v Service bulletins for pSeries, p5, and OpenPower servers at www14.software.ibm.com/webapp/set2/subscriptions/pqvcmjd
1. Sign in with your IBM ID.
2. Under the Bulletins tab:
– For the Select a heading option, choose Cluster on POWER.
– For the Select a topic option, choose General Parallel File System.
– For the Select a month option, select a particular month or choose All months.
v IBM Global Services - Support Line for Linux
A 24x7 enterprise-level remote support for problem resolution and defect support for major distributions of the Linux operating system. Go to www.ibm.com/services/us/index.wss/so/its/a1000030.
v IBM Systems and Technology Group Lab Services
IBM Systems and Technology Group (STG) Lab Services can help you optimize the utilization of your data center and system solutions. STG Lab Services has the knowledge and deep skills to support you through the entire information technology race. Focused on the delivery of new technologies and niche offerings, STG Lab Services collaborates with IBM Global Services and IBM Business Partners to provide complementary services that will help lead through the turns and curves to keep your business running at top speed. Go to http://www.ibm.com/systems/services/labservices/.
v GPFS software maintenance
GPFS defect resolution for current holders of IBM software maintenance contracts:
– In the United States, contact us toll free at 1-800-IBM-SERV (1-800-426-7378)
– In other countries, contact your local IBM Service Center

Contact [email protected] for all other services or consultation on what service is best for your situation.

Q7.2: How do I download fixes for GPFS?
A7.2: To download fixes for GPFS, go to the GPFS page on Fix Central.
Note: The fix download site has moved. Please update your bookmarks accordingly.

Q7.3: What are the current advisories for all platforms supported by GPFS?
A7.3: The current general advisories are:

v A fix introduced in GPFS 3.3.0-11 and in GPFS 3.4.0-3 changed the returned buffer size for file attributes to include additional available information, affecting the TSM incremental backup process due to the selection criteria used by TSM. As a result of this buffer size change, TSM incremental backup will treat all previously backed up files as modified, causing the dsmc incremental backup process to initiate new backups of all previously backed up files. If the file system being backed up is HSM managed, this new backup can result in recall of all files which have been previously backed up. This effect is limited to files backed up using TSM incremental backup; there are no known effects on files backed up using either GPFS mmbackup or the TSM selective backup process.
This issue is resolved in GPFS 3.3.0-12 (APAR IZ92779) and GPFS 3.4.0-4 (APAR IZ90535). Customers using the TSM Backup/Archive client to do incremental backup (via dsmc


incremental command) should not apply GPFS 3.3.0-11 or GPFS 3.4.0-3, but should wait to apply GPFS 3.3.0-12 or GPFS 3.4.0-4. Any customer using TSM incremental backup and needing fixes in GPFS 3.3.0-11 or 3.4.0-3 should apply an ifix containing the corresponding APAR before executing dsmc incremental backup using these PTF levels, to avoid the additional file backup overhead and (in the case of HSM-managed file systems) the potential for large scale recalls caused by the backup. Please contact IBM service to obtain the ifix, or to discuss your individual situation.

v When installing or migrating GPFS, the minimum levels of service you must have applied are:
– GPFS V3.4 you must apply APAR IZ78460 (GPFS V3.4.0-1)
– GPFS V3.3 you must apply APAR IZ45231 (GPFS V3.3.0-1)
– GPFS V3.2 you must apply APAR IY99639 (GPFS V3.2.0-1)
If you do not apply these levels of service and you attempt to start GPFS, you will receive an error message similar to:
mmstartup: Required service not applied. Install GPFS 3.2.0.1 or later
mmstartup: Command failed Examine previous error messages to determine cause

v GPFS 3.3.0.5 and GPFS 3.2.1.19 service level updates (go to Fix Central) contain the following fixes (among others):
– During internal testing, a rare but potentially serious problem has been discovered in GPFS. Under certain conditions, a read from a cached block in the GPFS pagepool may return incorrect data which is not detected by GPFS. The issue is corrected in GPFS 3.3.0.5 (APAR IZ70396) and GPFS 3.2.1.19 (APAR IZ72671). All prior versions of GPFS are affected.
The issue has been discovered during internal testing, where an MPI-IO application was employed to generate a synthetic workload. IBM is not aware of any occurrences of this issue in customer environments or under any other circumstances. Since the issue is specific to accessing cached data, it does not affect applications using DirectIO (the IO mechanism that bypasses file system cache, used primarily by databases, such as DB2® or Oracle).
This issue is limited to the following conditions:
1. The workload consists of a mixture of writes and reads, to file offsets that do not fall on the GPFS file system block boundaries;
2. The IO pattern is a mixture of sequential and random accesses to the same set of blocks, with the random accesses occurring on offsets not aligned on the file system block boundaries; and
3. The active set of data blocks is small enough to fit entirely in the GPFS pagepool.
The issue is caused by a race between an application IO thread doing a read from a partially filled block (such a block may be created by an earlier write to an odd offset within the block), and a GPFS prefetch thread trying to convert the same block into a fully filled one, by reading in the missing data, in anticipation of a future full-block read. Due to insufficient synchronization between the two threads, the application reader thread may read data that had been partially overwritten with the content found at a different offset within the same block. The issue is transient in nature: the next read from the same location will return correct data. The issue is limited to a single node; other nodes reading from the same file would be unaffected.

v For GPFS V3.3, use of multiple servers is restricted for file systems that were backed up using the mmbackup command with GPFS V3.2 or earlier, until a full backup is performed with the GPFS V3.3 version of the mmbackup command. After the full backup is performed, additional servers may be added.

Q7.4: What are the current advisories for GPFS on AIX?
A7.4: The current AIX-specific advisories are:
v GPFS V3.4.0-2, V3.3.0-10, 3.2.1-24, or later levels, support AIX V7.1.
v In order for tracing to function properly on a system running the levels of AIX listed below, appropriate service must be installed. If you are running GPFS without the appropriate service level installed and have AIX tracing enabled (such as by using the GPFS mmtracectl command), you will experience a GPFS memory fault (coredump) or node crash with kernel panic.


– AIX V7.1 with the 7100-00 Technology Level: you must either install the AIX 7100-00-02 Service Pack or open a PMR to obtain an iFix for APAR IZ84576 from IBM Service.
– AIX V6.1 with the 6100-06 Technology Level: you must either install the AIX 6100-06-02 Service Pack or open a PMR to obtain an iFix for APAR IZ84729 from IBM Service.

v GPFS V3.3 is the last release to support IBM Virtual Shared Disk, data shipping mode and 32-bit AIX kernels.

v For GPFS V3.2 or V3.3 use with AIX V6.1:
– GPFS is supported in an Ethernet/10-Gigabit Ethernet environment; see the question What interconnects are supported for GPFS daemon-to-daemon communication in my GPFS cluster?
– The versions of OpenSSL shipped as part of the AIX Expansion Pack, 0.9.8.4 and 0.9.8.41, ARE NOT compatible with GPFS due to the way the OpenSSL libraries are built. To obtain the level of OpenSSL which will work with GPFS, see the question How do I get OpenSSL to work on AIX?
– Role Based Access Control (RBAC) is not supported by GPFS and is disabled by default.
– Workload Partitions (WPARs) or storage protection keys are not exploited by GPFS.

v IBM testing has revealed that some customers with the General Parallel File System who install AIX 5L Version 5.2 with the 5200-04 Recommended Maintenance package (bos.mp64 at the 5.2.0.40 or 5.2.0.41 levels) and execute programs which reside in GPFS storage may experience a system wide hang due to a change in the AIX 5L loader. This hang is characterized by an inability to login to the system and an inability to complete some GPFS operations on other nodes. This problem is fixed with the AIX 5L APAR IY60609. It is suggested that all customers installing the bos.mp64 fileset at the 5.2.0.40 or 5.2.0.41 level, who run GPFS, immediately install this APAR.

v When running GPFS on either a p5-590 or a p5-595:
– The minimum GFW (system firmware) level required is SF222_081 (GA3 SP2), or later. For the latest firmware versions, see the IBM Technical Support at www14.software.ibm.com/webapp/set2/firmware/gjsn
– The supported Linux distribution is SUSE Linux ES 9.
– Scaling is limited to 16 total processors.

v IBM testing has revealed that some customers using the Gigabit Ethernet PCI-X adapters with the jumbo frames option enabled may be exposed to a potential data error. While receiving packet data, the Gigabit Ethernet PCI-X adapter may generate an erroneous DMA address when crossing a 64 KB boundary, causing a portion of the current packet and the previously received packet to be corrupted.
These Gigabit Ethernet PCI-X adapters and integrated Gigabit Ethernet PCI-X controllers could potentially experience this issue:
– Type 5700, Gigabit Ethernet-SX PCI-X adapter (Feature Code 5700)
– Type 5701, 10/100/1000 Base-TX Ethernet PCI-X Adapter (Feature code 5701)
– Type 5706, Dual Port 10/100/1000 Base-TX Ethernet PCI-X Adapter (Feature code 5706)
– Type 5707, Dual Port Gigabit Ethernet-SX PCI-X Adapter (Feature code 5707)
– Integrated 10/100/1000 Base-TX Ethernet PCI-X controller on machine type 7029-6C3 and 6E3 (p615)
– Integrated Dual Port 10/100/1000 Base-TX Ethernet PCI-X controller on machine type 9111-520 (p520)
– Integrated Dual Port 10/100/1000 Base-TX Ethernet PCI-X controller on machine type 9113-550 (p550)
– Integrated Dual Port 10/100/1000 Base-TX Ethernet PCI-X controller on machine type 9117-570 (p570)
This problem is fixed with:
– For AIX 5L 5.2, APAR IY64531
– For AIX 5L 5.3, APAR IY64393


Q7.5: What are the current advisories for GPFS on Linux?
A7.5: The current Linux-specific advisories are:
v Upgrading GPFS to a new major release on Linux:
When migrating to a new major release of GPFS (for example, GPFS 3.2 to GPFS 3.3), the supported migration path is to install the GPFS base images for the new release, then apply any required service updates. GPFS will not work correctly if you use the rpm -U command to upgrade directly to a service level of a new major release without installing the base images first. If this should happen, you must uninstall and then reinstall the gpfs.base package.
Note: Upgrading to the GPFS 3.2.1.0 level from a pre-3.2 level of GPFS does not work correctly, and the same workaround is required.
v On Linux kernels 2.6.30 or later, or on RHEL5.4 (2.6.18-164.11.1.el5), fasync_helper uses a new lock field created in the file structure. GPFS support of these kernel levels requires, at a minimum, installation of GPFS V3.4.0-1, GPFS 3.3.0-5, or GPFS 3.2.1-20. Please see Fix Central for the latest PTF.

RHEL specific
v When installing either the GPFS 3.3 base RPMs or a GPFS fix pack on RHEL 6, a symbolic link /usr/bin/ksh to /bin/ksh is required to satisfy the /usr/bin/ksh dependency.
v GPFS does not currently support the Transparent Huge Page (THP) feature available in RHEL 6.0. This support should be disabled at boot time by appending transparent_hugepage=never to the kernel boot options (see the sketch after this list).
v IBM testing has revealed intermittent problems when executing GPFS administration commands on Red Hat nodes running ksh-20060214-1.7. These problems are caused by ksh setting erroneous return codes which lead to premature termination of the GPFS commands. If you currently have this version of ksh, you are advised to upgrade to a newer version.
v If you get errors on RHEL5 when trying to run the GPFS self-extractor archive from the installation media, please run export _POSIX2_VERSION=199209 first.
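As a sketch of the THP workaround mentioned above (the boot loader file depends on your installation; /boot/grub/grub.conf with GRUB Legacy is typical for RHEL 6, and the kernel version and root device shown are placeholders):

# append transparent_hugepage=never to the kernel line, for example:
kernel /vmlinuz-2.6.32-71.el6.x86_64 ro root=/dev/VolGroup/lv_root transparent_hugepage=never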

SLES specific
v Due to changes in the system utilities shipped in SLES 11 SP1, GPFS self-installer images found on the GPFS installation CD will not run correctly on that platform. The electronic CD images for GPFS V3.3 and V3.4 have been updated to allow a successful installation. Please use the updated images found:
– GPFS for Linux on Power, please log into the Entitled Software page at: https://www-05.ibm.com/servers/eserver/ess/OpenServlet.wss
– GPFS for x86_64 and x86, please log into Passport Advantage at: http://www.ibm.com/software/lotus/passportadvantage/
Note: For new Linux on Power customers in the United States, after you have placed your order for the GPFS media, you will need to log onto the Entitled Software page and through your Software Maintenance Agreement (SWMA) download the updated GPFS LoP binaries.

v On SLES 11, GPFS automount may not perform as expected. It is suggested that you migrate to SLES 11 SP1, which is supported with GPFS V3.3.0-7 and GPFS V3.4.0-1, or later.
v Required service for support of SLES 10 includes:
1. The GPFS required level of Korn shell for SLES 10 support is version ksh-93r-12.16 or later, and is available in SLES 10 SP1 or later.
2. For SLES 10 on POWER, /etc/init.d/running-kernel shipped prior to the availability of the SLES 10 SP1 kernel source rpm contains a bug that results in the wrong set of files being copied to the kernel source tree. Ensure you upgrade to SP1 or SP2.

RDMA specific
v Customers who enable GPFS RDMA on Linux x86_64 with GPFS 3.4 may experience I/O failures with an error 733 or 735 reported in syslog. Customers should contact IBM Service for an efix for APAR IZ88828 until GPFS 3.4.0-3 is available.


v Currently with GPFS for Linux on x86 Architecture V3.2.1-7 and lower, with InfiniBand RDMA enabled, an issue exists which under certain conditions may cause data corruption. This is fixed in GPFS 3.2.1-8. Please apply 3.2.1-8 or turn RDMA off.

Q7.6: What are the current advisories for GPFS on Windows?
A7.6: The current Windows-specific advisories are:
v Required Windows hotfix updates for GPFS consist of:
– Windows Server 2008 R2 SP1 does not require any hotfixes.
– For Windows Server 2008 R2:
- KB article 978048 at http://support.microsoft.com/kb/978048
– For Windows Server 2008:
- There are currently no hotfix updates required for Windows Server 2008 SP2. All required updates are contained in SP2. Ensure you are running with that level.
– For Windows Server 2003 R2:
- KB article 956548 at http://support.microsoft.com/kb/956548/en-us; only the hotfix for Windows Server 2003 (Fix243497) is required.
- KB article 950098 at http://support.microsoft.com/kb/950098/en-us

Q7.7: What Linux kernel patches are provided for clustered file systems such as GPFS?
A7.7: The Linux kernel patches provided for clustered file systems are expected to correct problems that may be encountered when using GPFS with the Linux operating system. The supplied patches are currently being submitted to the Linux development community, but may not be available in particular kernels. It is therefore suggested that they be appropriately applied based on your kernel version and distribution.

A listing of the latest patches, along with a more complete description of these patches, can be found at the General Parallel File System project on SourceForge®.net at sourceforge.net/tracker/?atid=719124&group_id=130828&func=browse:
1. Click on the Summary description for the desired patch.
2. Scroll down to the Summary section on the patch page for a description of, and the status of, the patch.
3. To download a patch:
a. Scroll down to the Attached Files section.
b. Click on the Download link for your distribution and kernel level.

Q7.8: Where can I find the GPFS Software License Agreement?
A7.8: GPFS licensing information may be viewed at http://www.ibm.com/software/sla/sladb.nsf. To search for a specific program license agreement:
v For GPFS on POWER, enter 5765-G66
v For GPFS on x86 Architecture, enter the appropriate order number; either 5724-N94 or 5765-XA3

Q7.9: Where can I find End of Market (EOM) and End of Service (EOS) information for GPFS?
A7.9: GPFS follows the Standard IBM Support Lifecycle Policy as described at https://www.ibm.com/software/support/lifecycle/lc-policy.html, which includes:
v Provide a minimum of 3 years of product technical support beginning at the planned availability date of the version/release of the product.
v Ensure support is available for all IBM components of a product or bundle until the product or bundle is withdrawn from support. In addition, all components of a bundle have a common End of Service date.
v Publish a notice of support discontinuance ("End of Service") for a product at least 12 months prior to the effective date.
v Align the effective date of support discontinuance ("End of Service") to occur on common dates, either in the months of April or September.
v Make product support extensions available, where possible, that are designed to allow migration to the current release to be completed. For additional information on product technical support extensions beyond the three-year minimum period, contact your IBM representative. For details see the announcement letter USA Ann# 203-204, effective August 8, 2003.

Current applicable announcement letter information includes:


v http://www-01.ibm.com/common/ssi/rep_ca/3/897/ENUS910-243/ENUS910-243.PDF
v http://www-01.ibm.com/common/ssi/rep_ca/0/897/ENUS910-210/ENUS910-210.PDF

Announced product EOM and EOS dates are available from the:
v IBM Software Support Lifecycle page at http://www-01.ibm.com/software/support/lifecycle/index_g.html
v GPFS Sales Manual:
1. Go to http://www.ibm.com/common/ssi/index.wss
2. Select your language preference and click Continue.
3. From the Type of content menu, choose HW&SW Desc (Sales Manual,RPQ) and click on the right arrow.
4. To view a GPFS sales manual, choose the corresponding product number to enter in the product number field:
– For General Parallel File System for POWER, enter 5765-G66
– For General Parallel File System x86 Architecture, enter the appropriate order number; either 5724-N94 or 5765-XA3
v Cluster Software Ordering Guide at http://www.ibm.com/systems/clusters/software/reports/order_guide.html

Q7.10: Where can I locate GPFS code to upgrade from my current level of GPFS?
A7.10: If you have Software Maintenance Agreement (SWMA) for your products ordered through AAS/eConfig, or IBM Subscription and Support (S&S) for orders placed through Passport Advantage, you may log into the respective systems and upgrade your level of GPFS:
v For products ordered through AAS/eConfig, please log into the Entitled Software page at: https://www-05.ibm.com/servers/eserver/ess/OpenServlet.wss
v For products ordered through Passport Advantage, please log into the site at: http://www.ibm.com/software/lotus/passportadvantage/

Information regarding SWMA and S&S can be found in the Cluster Software Ordering Guide at http://www.ibm.com/systems/clusters/software/reports/order_guide.html

v For licensed products ordered via the AAS/WTAAS and eConfig systems (GPFS on POWER 5765-G66 and GPFS on x86 Architecture 5765-XA3): SWMA provides product support and entitlement with each license. The first year is included in the license with an additional two-year registration option. One-year and three-year Renewal and Maintenance-After-License license options are also available.
v For licensed products ordered via Passport Advantage and Passport Advantage Express (GPFS on x86 Architecture 5724-N94): S&S provides product support and entitlement with each product license. The first year is included in the license, with one-year renewal and reinstatement options available.
For complete information, please see the Ordering Guide.

You may also access the Announcement Letters and Sales Manuals for GPFS:
1. Go to http://www.ibm.com/common/ssi/index.wss
2. Select your language preference and click Continue.
3. From the Type of content menu, choose either Announcement letter or HW&SW Desc (Sales Manual,RPQ) and click on the right arrow.
4. To view GPFS information, choose the corresponding product number to enter in the product number field:
v For General Parallel File System for POWER, enter 5765-G66
v For General Parallel File System x86 Architecture, enter the appropriate order number; either 5724-N94 or 5765-XA3


Q7.11: Are there any items that will no longer be supported in GPFS?
A7.11: GPFS V3.4 is the last release to support:
v 32-bit kernels
v AIX V5.3

GPFS V3.3 is the last release to support:
v IBM Virtual Shared Disk
With GPFS V3.3 and later, new file systems must be created utilizing network shared disks only. IBM Virtual Shared Disks are not supported on new file systems. Please see the Concepts, Planning, and Installation Guide for details on creating Network Shared Disks.
v Data shipping mode
v 32-bit AIX kernels
v The High Performance Switch
v The GPFS GUI
If you are looking for a more integrated file serving solution based on GPFS that includes GUI-based management tools, you should consider the use of these IBM offerings:
– IBM Scale Out Network Attached Storage (http://www-03.ibm.com/systems/storage/network/sonas/)
– IBM Smart Business Storage Cloud (http://www-935.ibm.com/services/us/index.wss/offering/its/a1031610)


Notices

This information was developed for products and services offered in the U.S.A.

IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only IBM's product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any of IBM's intellectual property rights may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service.

IBM may have patents or pending patentapplications covering subject matter described inthis document. The furnishing of this documentdoes not grant you any license to these patents.You can send license inquiries, in writing, to:

IBM Director of LicensingIBM CorporationNorth Castle DriveArmonk, NY 10594-1785USA

For license inquiries regarding double-byte(DBCS) information, contact the IBM IntellectualProperty Department in your country or sendinquiries, in writing, to:

IBM World Trade Asia CorporationLicensing2-31 Roppongi 3-chome, Minato-kuTokyo 106-0032, Japan

The following paragraph does not apply to theUnited Kingdom or any other country where suchprovisions are inconsistent with local law:

INTERNATIONAL BUSINESS MACHINESCORPORATION PROVIDES THIS PUBLICATION“AS IS” WITHOUT WARRANTY OF ANY KIND,EITHER EXPRESS OR IMPLIED, INCLUDING,BUT NOT LIMITED TO, THE IMPLIEDWARRANTIES OF NON-INFRINGEMENT,MERCHANTABILITY OR FITNESS FOR APARTICULAR PURPOSE. Some states do not

allow disclaimer of express or implied warrantiesin certain transactions, therefore, this statementmay not apply to you.

This information could include technicalinaccuracies or typographical errors. Changes areperiodically made to the information herein; thesechanges will be incorporated in new editions ofthe publication. IBM may make improvementsand/or changes in the product(s) and/or theprogram(s) described in this publication at anytime without notice.

Any references in this information to non-IBMWeb sites are provided for convenience only anddo not in any manner serve as an endorsement ofthose Web sites. The materials at those Web sitesare not part of the materials for this IBM productand use of those Web sites is at your own risk.

IBM may use or distribute any of the informationyou supply in any way it believes appropriatewithout incurring any obligation to you.

Licensees of this program who wish to haveinformation about it for the purpose of enabling:(i) the exchange of information betweenindependently created programs and otherprograms (including this one) and (ii) the mutualuse of the information which has been exchanged,should contact:

IBM CorporationIntellectual Property Law2455 South Road,P386Poughkeepsie, NY 12601-5400USA

Such information may be available, subject toappropriate terms and conditions, including insome cases, payment of a fee.

The licensed program described in this documentand all licensed material available for it areprovided by IBM under terms of the IBMCustomer Agreement, IBM International ProgramLicense Agreement or any equivalent agreementbetween us.

Any performance data contained herein was determined in a controlled environment. Therefore, the results obtained in other operating environments may vary significantly. Some measurements may have been made on development-level systems and there is no guarantee that these measurements will be the same on generally available systems. Furthermore, some measurements may have been estimated through extrapolation. Actual results may vary. Users of this document should verify the applicable data for their specific environment.

This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental.

COPYRIGHT LICENSE:

This information contains sample application programs in source language, which illustrate programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs.

If you are viewing this information softcopy, the photographs and color illustrations may not appear.

Trademarks

IBM, the IBM logo, and ibm.com® are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both. If these and other IBM trademarked terms are marked on their first occurrence in this information with a trademark symbol (® or ™), these symbols indicate U.S. registered or common law trademarks owned by IBM at the time this information was published. Such trademarks may also be registered or common law trademarks in other countries. A current list of IBM trademarks is available on the Web at "Copyright and trademark information" at www.ibm.com/legal/copytrade.shtml.

Cell Broadband Engine is a trademark of Sony Computer Entertainment, Inc. in the United States, other countries, or both and is used under license therefrom.

Intel, Intel logo, Intel Inside, Intel Inside logo, Intel Centrino, Intel Centrino logo, Celeron, Intel Xeon, Intel SpeedStep, Itanium, and Pentium are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries.

Java and all Java-based trademarks and logos are trademarks or registered trademarks of Sun Microsystems, Inc. in the United States, other countries, or both.

Linux is a registered trademark of Linus Torvalds in the United States, other countries, or both.

Red Hat, the Red Hat "Shadow Man" logo, and all Red Hat-based trademarks and logos are trademarks or registered trademarks of Red Hat, Inc., in the United States and other countries.

UNIX is a registered trademark of the Open Group in the United States and other countries.

Microsoft, Windows, Windows NT, and the Windows logo are registered trademarks of Microsoft Corporation in the United States, other countries, or both.

Other company, product, and service names may be the trademarks or service marks of others.

June 2011

© Copyright IBM Corporation 2004, 2011.
US Government Users Restricted Rights – Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.