
IBM Systems Enterprise Architecture – GPFS Solution


Advanced GPFS Solution Design on Cross Platform Infrastructure

Residency Date: 2009/10/26 (5 days)
Modify Start: 2009/12/15
Last Update: 2010/04/29
Modified by [email protected], reviewed by [email protected]
IBM System x Technical Sales Team


This is one of the business partner skill-enhancement programs, a Residency held in Korea; 17 companies joined this residency. First, I want to say thank you for joining this program. This documentation was written by the partner engineers; I only translated it from Korean to English. Before the residency started, many teams helped prepare it, from the education department to the technical sales manager and the system admin team. It was not easy to prepare the demo systems for this program, but the support team set up all of them: the System p6 servers, the storage boxes, and the BladeCenter systems. I can assure you that every attendee gained a great deal of configuration experience and technical knowledge; this is a very helpful program for our business partners. Demo system description:

System p6 570 x 2
DS storage x 4
BladeCenter x 2
Blade servers x 12
SAN switches x 2
Network switches x 4

The Business Partner Residency Program is one of the education programs run in Korea. The venue is usually set up outside Seoul, in places such as YangPyung or ChungPyung. The attendees stay five days at a resort, starting with instruction focused on the topic. After the base education, they test the systems and write up the result documents. It is an irregular education program: the topic is chosen each year by team discussion and changing BP requirements, and one rule is that each topic is run only once.

The objective of this residency program, and its topic this time, is Advanced GPFS Solution Design on a cross-platform infrastructure. Recently, customers no longer configure a single platform for a GPFS solution; they want mixed configurations such as Linux, pLinux, AIX, and Windows. The cross-mount function mounts a remote cluster's file system for collaboration. The attendees wanted to know the configuration limits and consideration points for a mixed cluster, and how to configure the storage boxes for optimal performance.


Index

1. Preparing Hardware
2. Installation Redhat Enterprise Linux Server v5.4 x64 Version
3. Installation Windows 2008 R2 x64 Enterprise Version
4. Preparing VIOS Client
5. Configuration NIM Server
6. Make VIO Client Logical Volume
7. Installation AIX on Partition
8. Configuration Storage System (DS3400)
9. Configuration Storage System and Initialize Volume for Each OS (DS4300)
10. SAN Switch Configuration Guide
11. Pre-Installation GPFS - SSH Keygen
12. AIX, Linux GPFS Server Installation
13. Make Cluster and Configure GPFS Solution
14. pLinux GPFS Client Installation
15. Windows 2008 SP2 GPFS Client Installation
16. Rolling Upgrade to v3.3 from v3.2
17. Add / Remove NSD - GPFS Maintenance
18. Cross-over GPFS Mount
19. Failure Group and GPFS Replication
20. End of This BP Residency


1. Preparing Hardware

This is the node configuration assigned to each residency team; every team used the same hardware:
System p6 570 9117-MMA, 50% use (partitioned)
BladeCenter HS21, 3 nodes
SAN switch x 1
Network switch x 1
Storage DS3k or DS4k x 1

Before configuration and OS installation, check the list below. Every item is important for configuring GPFS, because it brings more stability and higher performance. I recommend using the latest version of system firmware and drivers.

Firmware (record the version of each):
- System BIOS (System p, x)
- Internal disk firmware
- Onboard network firmware
- AMM firmware for BCH
- Ethernet switch module for BCH
- External switch firmware
- SAN switch module for BCH
- External SAN switch firmware
- SAN switch ID (license)
- Storage controller firmware
- Disk firmware for storage

Software (record the version of each):
- AIX version and patch level
- Linux version and update level
- Onboard network driver for Linux
- Multipath driver for Linux (RDAC)
- HBA driver for Linux
- Windows version (64-bit only)
- Onboard network driver for Windows
- Multipath driver for Windows
- Storage Manager
- IBM GPFS software
- CIFS server


2. Installation Redhat Enterprise Linux Server v5.4 x64 Version

Boot the RHEL v5.4 media and skip the media test.

Choose the language and keyboard.

Partition layout: /boot 100 MB, swap 4095 MB, / 50 GB

Page 7: 04.AdvGPFS Cross Platform

IBM Systems Enterprise Architecture – GPFS Solution

7

Set up the boot loader; configure the IP address and hostname.

Set the time zone and the root password.

Choose the packages and start the package installation.

You must include the development packages on at least the first node, in order to build the GPFS portability layer. Usually the first step after installation is to build the portability layer on the first installed system; you can then use "make rpm" to package the portability layer as an RPM for the other nodes.
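A minimal sketch of that build, assuming the GPFS packages are already installed under /usr/lpp/mmfs (the standard targets appear verbatim in chapter 14):

cd /usr/lpp/mmfs/src
make Autoconfig   # probe the running kernel and generate def.mk
make World        # compile the portability layer modules
make rpm          # package the modules as an RPM for the remaining nodes (target availability depends on the GPFS level)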


Start Installation

Post-installation steps (see the sketch below):
1. Disable unused daemons.
2. Turn off SELinux and reboot the system.
3. Stop the iptables daemon, or configure the firewall to allow the ports the GPFS daemon needs (22 and 1191).
4. Update the network driver.
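A minimal sketch of steps 2 and 3 on RHEL 5 (verify the file paths on your system):

sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config   # step 2; takes effect after reboot
service iptables stop && chkconfig iptables off                # step 3, option A: stop the firewall
iptables -I INPUT -p tcp --dport 22 -j ACCEPT                  # step 3, option B: keep it, open SSH
iptables -I INPUT -p tcp --dport 1191 -j ACCEPT                #   and the GPFS daemon port
service iptables save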


3. Installation Windows 2008 R2 x64 Enterprise Version

Choose the language and location, then choose the Windows 2008 R2 Enterprise edition (full installation).

Start the installation; the system reboots.

Complete Installation


After Installation and Login Screen

At that time, the Team 2 and Team 3 members tried to install GPFS v3.3 on Windows 2008 R2. They could not get it configured on that operating system and finally reinstalled Windows 2008 SP2. Refer to the GPFS v3.3 documentation: the current GPFS v3.3 release supports Windows 2008 SP2 only. There are many differences between the 2008 and 2008 R2 kernels; the Windows 2008 R2 core is based on Windows 7.

According to the worldwide GPFS development team, GPFS v3.4 will support Windows 2008 R2; the announcement is planned for 2H 2011, and that version will also support Windows-based GPFS server nodes. The current version (v3.3) supports Windows on the GPFS client side only; in other words, you must configure a mixed cluster of Linux (or AIX) and Windows.


4. Preparing VIOS Client

Usually, before installing AIX on the system, you must configure the partition on the p570.

Create Logical partition

Input the partition ID and Name

Input the profile name


Choose the processor resource allocation type.

Input Processor resource information.

Choose the memory resource type.


Input Memory Size

Choose PCI Adapter (Network and HBA)

To create a virtual SCSI adapter, click the drop-down menu: Action > Create SCSI Adapter. Using the default SCSI ID is fine. The important thing here is assigning the adapter to the right virtual system or partition, and then choosing the adapter ID on the target partition. Virtual SCSI devices can be created on both the server and client partitions; you then decide the mapping IDs.


Virtual adapter configuration: the virtual SCSI adapters configured previously on the server and client are assigned to both of them.

The virtual SCSI adapter is applied.

Logical Host Ethernet Adapters are not used (just click Next).


Choose the same option and click Next.

You can see summary table about the partition

This completes creating a logical partition on the System p6 570.


5. Configuration NIM Server

After the logical partition is built, set up the NIM server and client.

Connect to the NIM server and edit /etc/hosts. This IP address and hostname will be used as the VIO client's information.

Run smitty nim and choose Perform NIM Administration Tasks.

Choose Manage Machines


Choose Define a Machine

Input NIM Client Hostname

Choose the Cable Type


Go back to the NIM main menu and choose Perform NIM Software Installation and Maintenance Tasks.

Choose Install and Update Software

Choose Install the Base Operating System on Standalone Clients


Select Target system

A system image was already prepared with a mksysb backup, so the client target system will be installed from the mksysb image.

Select mksysb Image version.


Select the installation SPOT.

The ACCEPT new license agreements field must be yes, and the Initiate reboot and installation now field must be no. If that field is set to yes, a reboot is initiated automatically to start the installation; here the client is network-booted manually instead (see chapter 7).


6. Make VIO Client Logical Volume

Before starting the installation, you need to create a logical volume and assign it as the target volume for the OS installation. When connecting to the VIO Server, do not use the root account; this is the recommendation.

Default account information: ID padmin, password padmin

The padmin account has limited rights, so use the oem_setup_env or license -accept command to change authority; after that there is no limit on using administrative commands.

Add a logical volume for the VIO client. You can run smitty lv and choose Add a Logical Volume; a command-line sketch follows.
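For reference, a hedged sketch of the same task from the padmin restricted shell; the LV name vioc1_rootvg_lv and adapter vhost0 are illustrative:

mklv -lv vioc1_rootvg_lv rootvg 50G              # create the client logical volume
lsdev -virtual                                   # find the vhost adapter defined in chapter 4
mkvdev -vdev vioc1_rootvg_lv -vadapter vhost0    # map the LV to the client's virtual SCSI adapter
lsmap -vadapter vhost0                           # verify the mapping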


Choose the volume group for the new LV (rootvg), then input the information for the LV.


You can check the assignment status of the logical volume via "lsvg -l rootvg".


7. Installation AIX on Partition

This step configures network boot. Enter the SMS menu and choose Setup Remote IPL.


Choose the Ethernet device, then choose the IP settings.


Choose BOOTP. This menu sets the IP address on the NIC adapter, which is used to load the mksysb image from the NIM server.


Complete the IP address setup, then exit this menu. This is the network boot screen of the client booting from the NIM server.


Choose 1, then choose 1 again.


Choose 2 to copy from the mksysb image. Do not change any other options when copying from the image.


This screen shows the installation progress. The AIX installation is complete.


8. Configuration Storage System (DS3400)

The first step is to download and install the latest version of Storage Manager.

Check the initialization status of the HBA card.


Create the host group and host; modify the host topology.


Check the defined host type; map the volumes.


Volume Mapping Status

DS3400 Firmware Update


Disk Firmware Update

Most storage systems follow a similar procedure for attaching servers to storage:
1. Complete the hardware configuration and cabling.
2. Configure the SAN switch (domain ID and timeout values).
3. Set up the volume configuration recommended for the GPFS file system.
4. Configure the host type and host group.
5. Map the volumes.
6. Update and install the HBA driver on each server system.
7. Check the volumes on each system.


9. Configuration Storage System and Initialize Volume for Each OS (DS4300)

Check the current firmware level of the installed storage box. As of 2009/10/26 the latest firmware level was 06.60.22.00.

Storage volume configuration and SAN switch firmware update.


p1:/#>lscfg -v -l fcs0 | grep Net
Network Address.............10000000C963E03A
p1:/#>lscfg -v -l fcs1 | grep Net
Network Address.............10000000C963E03B
p2:/#>lscfg -v -l fcs1 | grep Net
Network Address.............10000000C967B415
p2:/#>lscfg -v -l fcs0 | grep Net
Network Address.............10000000C967B416

Check the WWNs on the AIX servers for zoning configuration.

Check the WWNs on the Linux server by installing the QLogic HBA CLI command.
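If the QLogic CLI is not available, the WWPNs can usually be read from sysfs on a 2.6 kernel instead (assuming the HBA driver is loaded):

cat /sys/class/fc_host/host*/port_name    # one WWPN per HBA port, e.g. 0x10000000c963e03a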


Configure zoning on the SAN switch, then assign the volumes to each node.


Check the initialized volumes on the AIX server.


Install RDAC on the Linux server.
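A minimal sketch of the usual linuxrdac source build (the archive name below is hypothetical; use the one you downloaded, and make sure the kernel-devel package for the running kernel is installed):

tar -zxvf rdac-LINUX-xx.xx-source.tar.gz   # hypothetical file name
cd linuxrdac-*
make clean && make
make install    # builds the mpp initrd image and prints the boot loader entry to add

The boot loader step below points the kernel at that new mpp initrd image.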


Boot Loader Configuration


Check the initialized volumes on the Linux box.
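With RDAC loaded, the mapped LUNs can be verified, for example:

ls /proc/mpp            # one entry per attached storage array
fdisk -l                # the mapped LUNs appear as /dev/sdX devices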


10. SAN Switch Configuration Guide

This is the key point when configuring a multi-switch SAN infrastructure. The recommended configuration is SAN switches from the same vendor running the same Fabric OS; choose one vendor, such as Brocade. If you want to attach heterogeneous SAN switches, you must refer to the interoperability guide. Even with switches from the same vendor, you must check the domain IDs and timeout values.

Check list (a CLI sketch follows):
1. SAN switch domain ID
2. Timeout values of the SAN switch:
   A. R_A_TOV = 10 seconds (the setting is 10000)
   B. E_D_TOV = 2 seconds (the setting is 2000)
3. ISL license for the SAN switch
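On a Brocade-based switch these values can be checked and set from the Fabric OS CLI; a hedged sketch (configure is interactive and requires the switch to be disabled first):

switchshow      # current state and domain ID
switchdisable
configure       # Fabric parameters: Domain, R_A_TOV (10000), E_D_TOV (2000)
switchenable
fabricshow      # verify both switches join one fabric with unique domain IDs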


If both SAN switches have the same domain ID, first disable the external SAN switch, then apply the ISL license on the switch and change the domain ID, and finally re-enable the external SAN switch. Extended Fabric is then enabled (ISL license).


Before configuring zoning, delete all existing zone configuration, then configure the ISL on the IBM BladeCenter SAN Switch Module; starting from the factory default settings makes this easy. Connect to 192.168.70.129 via HTTP. Be careful: remove the external cable before the ISL configuration. If the cable is not removed, the domain IDs on the two SAN switches will conflict.


Click Admin, then check the domain ID.

Set to Disable.


Change the domain ID from 1 to 3 and click Apply.


Check the changed domain ID, then configure the zones on each management interface.


Configure Zone.


11. Pre-Installation GPFS - SSH Keygen

Configure HOST info

Run the following on all of the GPFS server and client nodes.

t1:/#>ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (//.ssh/id_rsa):
Created directory '//.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in //.ssh/id_rsa.
Your public key has been saved in //.ssh/id_rsa.pub.
The key fingerprint is:
bd:40:09:86:b7:89:a9:ae:40:a7:ed:51:3d:ae:18:7c root@T1
t1:/#>cd /.ssh
t1:/.ssh#>cp -rp id_rsa.pub authorized_keys
t1:/.ssh#>ls
authorized_keys  id_rsa  id_rsa.pub
t1:/.ssh#>cat authorized_keys
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA5nZUpuqDXCgQ5OEp1GzD5PTH0qjZufrLbUWPPMsfYVPBJsRxAyTQIDluaYQXVz+pCer4p87/HZNenqI9kgf9tJHC9RPhPLZxjyUauVgADvCmkzHm1TbKltwwnjawhZ1Oj8gY2FEhZPhSf7YEp5ysrNLQvR12li8VosDSSRuqNp3nBS5G5PYmMB0h0OGO48ZxB3Gf6R3QUZqaoX4SZl9SinG8lF5sze9x8t/l0GKBQ3RtcHBjx7iHdSrOaETEaFhco/1QLcjBPtSKK7jT4FDi7dD0XEHN4k0B5IdJYtx2Nl6Y6g1a5SpnTTm5n0QKe2buznMgD0TmML1PaaXnNDIUbw== root@t1
t2:/#>ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (//.ssh/id_rsa):
Created directory '//.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in //.ssh/id_rsa.
Your public key has been saved in //.ssh/id_rsa.pub.
The key fingerprint is:
19:33:82:5c:15:e5:60:fb:f2:8b:ce:50:5c:2d:03:6d root@T2
t2:/#>ls
id_rsa  id_rsa.pub
t2:/.ssh#>cp -rp id_rsa.pub authorized_keys
t2:/.ssh#>cat authorized_keys
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAIEAusPjMndj2JRzHaseb7/9/d8AdOsvtDBr8pZIQ/Aac48F/2iepmuogJjdxohbCYSSRjfTz35No+hNuLpYZpgvS/2+uco9dXnHZv7HJV+4rdwTREqJplLKZvPMrBNEkKLkHiP1NJ3hq5bHeMEDyCKt/LYGcwl/VN3+nGXcJ2b5lsE= root@T1


t2:/.ssh#>
t1:/.ssh#>scp id_rsa.pub t2_gpfs:/home
The authenticity of host 't2_gpfs (10.10.10.2)' can't be established.
RSA key fingerprint is 0b:01:ad:da:58:5d:eb:40:71:f9:40:c3:d1:a0:8e:14.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 't2_gpfs,10.10.10.2' (RSA) to the list of known hosts.
root@t2_gpfs's password:
id_rsa.pub 100% 389 0.4KB/s 00:00
t2:/.ssh#>scp id_rsa.pub t1_gpfs:/home
The authenticity of host 't1_gpfs (10.10.10.1)' can't be established.
RSA key fingerprint is 40:ff:29:0b:fb:b6:68:79:ee:5c:63:b5:ab:b9:f7:f2.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 't1_gpfs,10.10.10.1' (RSA) to the list of known hosts.
root@t1_gpfs's password:
id_rsa.pub 100% 389 0.4KB/s 00:00
t1:/home#>cat id_rsa.pub >> /.ssh/authorized_keys
t1:/home#>cat /.ssh/authorized_keys
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAww2LlIJZxfAgLiIm8dPq+glByIziJC8L3294c3lTgvPDswNPlzf4PBB8+cz/hGoehuQMBP4l8tYONFABOxsMLFkYpxjv9EKL9SQ4PTiqPV+FJwaaWEK9fg/FD+JXwL1KHetyaYHAmgFzJFrAF7XIO+1303sRkOSOzYUSWMgPG5X8cH22sSchUgwed6xsxBkcx3oknirJp24mvfRmG+WFQB84FN04e0dSdcrsU3BMOYq0QZCqGQsHdGOak70legxHI4njq7DPJFM9vTiYVRsl2ylPzi65a3bWwT3XjwyHA2s+QNVYBftVfCe5wfPHmsu/arS3zyimcM+nCYxpkUs69Q== root@t1
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAu3rX612XoGaOBdvD5TpgjpfXZCx6SiXA6A+5n/AAt3Av6ilVelZ40mMK07qg2/l+586yjrkAdyUKKJ+GstGovGWZHqKnLOSiSpmkYMRHplKArW4nyrK7MPMn6YL8WDz/lF8HNd157usesqzFA3R1IpiDKfTdd22z/4EQXJzljbkblZCZTJ/QrlfksXw2XrrmcPfl8g35od3Cid4rOm7UyWiIYHNMZGCxYHlFxdw9Z+o/I85Mu6mbZOlP8AGeoq4QmjvGFeOv/WM95nDymXebB3OT9XPgKV/8HFRaMXlh+9aBBsKctxYixswzjOuuMpZohMqwbp1yaFHScWYoxsa3rQ== root@t2
t2:/home#>cat id_rsa.pub >> /.ssh/authorized_keys
t2:/home#>cat /.ssh/authorized_keys
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAu3rX612XoGaOBdvD5TpgjpfXZCx6SiXA6A+5n/AAt3Av6ilVelZ40mMK07qg2/l+586yjrkAdyUKKJ+GstGovGWZHqKnLOSiSpmkYMRHplKArW4nyrK7MPMn6YL8WDz/lF8HNd157usesqzFA3R1IpiDKfTdd22z/4EQXJzljbkblZCZTJ/QrlfksXw2XrrmcPfl8g35od3Cid4rOm7UyWiIYHNMZGCxYHlFxdw9Z+o/I85Mu6mbZOlP8AGeoq4QmjvGFeOv/WM95nDymXebB3OT9XPgKV/8HFRaMXlh+9aBBsKctxYixswzjOuuMpZohMqwbp1yaFHScWYoxsa3rQ== root@t2
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAww2LlIJZxfAgLiIm8dPq+glByIziJC8L3294c3lTgvPDswNPlzf4PBB8+cz/hGoehuQMBP4l8tYONFABOxsMLFkYpxjv9EKL9SQ4PTiqPV+FJwaaWEK9fg/FD+JXwL1KHetyaYHAmgFzJFrAF7XIO+1303sRkOSOzYUSWMgPG5X8cH22sSchUgwed6xsxBkcx3oknirJp24mvfRmG+WFQB84FN04e0dSdcrsU3BMOYq0QZCqGQsHdGOak70legxHI4njq7DPJFM9vTiYVRsl2ylPzi65a3bWwT3XjwyHA2s+QNVYBftVfCe5wfPHmsu/arS3zyimcM+nCYxpkUs69Q== root@t1
t2:/home#>

Finally, each node's id_rsa.pub has been appended to authorized_keys on every node, so the file contains the RSA public keys of all nodes. The Windows GPFS client side will need the same operation.
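Once one node holds the complete authorized_keys, the exchange can be scripted; a minimal sketch, assuming root's home directory is / as in the transcript above:

for node in t1_gpfs t2_gpfs; do
  scp /.ssh/authorized_keys $node:/.ssh/authorized_keys   # push the merged key file to every node
done
ssh t2_gpfs date    # verify password-less login in each direction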


12. AIX, Linux GPFS Server Installation

Update the host information on the AIX server. Check the package list and run smitty; choose Install and Update from ALL Available Software.

Define the location of the installation files.


Press F4, choose the packages for installation, and press F7.


Change ACCEPT new license agreements to yes, and press Enter.

You must update to the latest GPFS level; if the update is not applied, the GPFS daemon will not start. The update follows the same procedure. Then update the user profile.


Check Installed Status

Update the host information on the Linux server, install the base packages, install the update packages, and check the installation status.


Update user profile.

Make and install the portability layer on the Linux systems. This step applies to Linux only: it builds the GPFS kernel module layer for the Linux kernel.
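The build targets used (shown in full in chapter 14) are:

cd /usr/lpp/mmfs/src
make Autoconfig      # probe the kernel and generate def.mk
make World           # compile tracedev, mmfslinux and mmfs26
make InstallImages   # install the modules under /lib/modules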


13. Make Cluster and Configure GPFS Solution

Edit the GPFS cluster node file team4_1:/tmp/gpfs#>vi gpfs.allnodes

gpfs_node1:quorum-manager

gpfs_node2:quorum-manager

gpfs_node3:

Make a Cluster team4_1:/tmp/gpfs#>mmcrcluster -n /tmp/gpfs/gpfs.allnodes -p gpfs_node1 -s gpfs_node2 -C AIX_gpfs -r /usr/bin/ssh -R /usr/bin/scp

Wed Oct 28 20:56:28 KORST 2009: 6027-1664 mmcrcluster: Processing node gpfs_node1

Wed Oct 28 20:56:29 KORST 2009: 6027-1664 mmcrcluster: Processing node gpfs_node2

Wed Oct 28 20:56:30 KORST 2009: 6027-1664 mmcrcluster: Processing node gpfs_node3

mmcrcluster: Command successfully completed

mmcrcluster: 6027-1371 Propagating the cluster configuration data to all

affected nodes. This is an asynchronous process.

Check Node List team4_1:/tmp/gpfs#>mmlsnode -a

GPFS nodeset Node list

------------- -------------------------------------------------------

AIX_gpfs gpfs_node1 gpfs_node2 gpfs_node3

Check Default Config team4_1:/tmp/gpfs#>mmlsconfig

Configuration data for cluster AIX_gpfs.gpfs_node1:

---------------------------------------------------

clusterName AIX_gpfs.gpfs_node1

clusterId 13979456008081650028

clusterType lc

autoload no

minReleaseLevel 3.2.1.5

dmapiFileHandleSize 32

File systems in cluster AIX_gpfs.gpfs_node1:

--------------------------------------------

(none)


Tiebreaker NSD Config file team4_1:/tmp/gpfs#>more disk.desc

hdisk2:gpfs_node1:gpfs_node2:dataAndMetadata:1:TB1

hdisk3:gpfs_node1:gpfs_node2:dataAndMetadata:1:TB2

hdisk4:gpfs_node1:gpfs_node2:dataAndMetadata:1:TB3

The recommendation is to alternate the primary and secondary NSD servers across disks for balance, as in this version:

team4_1:/tmp/gpfs#>more disk.desc

hdisk2:gpfs_node1:gpfs_node2:dataAndMetadata:1:TB1

hdisk3:gpfs_node2:gpfs_node1:dataAndMetadata:1:TB2

hdisk4:gpfs_node1:gpfs_node2:dataAndMetadata:1:TB3

Make a Tiebreaker NSD team4_1:/tmp/gpfs#>mmcrnsd -F /tmp/gpfs/disk.desc

mmcrnsd: Processing disk hdisk2

mmcrnsd: Processing disk hdisk3

mmcrnsd: Processing disk hdisk4

mmcrnsd: 6027-1371 Propagating the cluster configuration data to all

affected nodes. This is an asynchronous process.

Check the NSD config file after creating the tiebreaker NSDs team4_1:/tmp/gpfs#>more disk.desc

# hdisk2:gpfs_node1:gpfs_node2:dataAndMetadata:1:TB1

TB1:::dataAndMetadata:1::

# hdisk3:gpfs_node1:gpfs_node2:dataAndMetadata:1:TB2

TB2:::dataAndMetadata:1::

# hdisk4:gpfs_node1:gpfs_node2:dataAndMetadata:1:TB3

TB3:::dataAndMetadata:1::

Register tiebreaker nsd team4_1:/tmp/gpfs#>mmchconfig tiebreakerDisks="TB1;TB2;TB3"

Verifying GPFS is stopped on all nodes ...

mmchconfig: Command successfully completed

mmchconfig: 6027-1371 Propagating the cluster configuration data to all

affected nodes. This is an asynchronous process.

team4_1:/#>mmlsconfig

Configuration data for cluster AIX_gpfs.gpfs_node1:

-----------------------------------------------------

clusterName AIX_gpfs.gpfs_node1

clusterId 13979456008081616877


clusterType lc

autoload no

minReleaseLevel 3.2.1.5

dmapiFileHandleSize 32

tiebreakerDisks TB1;TB2;TB3

Filesystem NSD team4_1:/tmp/gpfs#>more disk2.desc

hdisk5:gpfs_node1:gpfs_node2:dataAndMetadata:1:nsd_01

hdisk6:gpfs_node1:gpfs_node2:dataAndMetadata:1:nsd_02

team4_1:/tmp/gpfs#>mmcrnsd -F /tmp/gpfs/disk2.desc

mmcrnsd: Processing disk hdisk5

mmcrnsd: Processing disk hdisk6

mmcrnsd: 6027-1371 Propagating the cluster configuration data to all

affected nodes. This is an asynchronous process.

team4_1:/tmp/gpfs#>more disk2.desc

# hdisk5:gpfs_node1:gpfs_node2:dataAndMetadata:1:nsd_01

nsd_01:::dataAndMetadata:1::

# hdisk6:gpfs_node1:gpfs_node2:dataAndMetadata:1:nsd_02

nsd_02:::dataAndMetadata:1::

team4_1:/tmp/gpfs#>more /tmp/gpfs/disk3.desc

hdisk7:gpfs_node1:gpfs_node2:dataAndMetadata:1:nsd_03

hdisk8:gpfs_node1:gpfs_node2:dataAndMetadata:1:nsd_04

team4_1:/tmp/gpfs#>mmcrnsd -F /tmp/gpfs/disk3.desc

mmcrnsd: Processing disk hdisk7

mmcrnsd: Processing disk hdisk8

mmcrnsd: 6027-1371 Propagating the cluster configuration data to all

affected nodes. This is an asynchronous process.

team4_1:/tmp/gpfs#>more /tmp/gpfs/disk3.desc

# hdisk7:gpfs_node1:gpfs_node2:dataAndMetadata:1:nsd_03

nsd_03:::dataAndMetadata:1::

# hdisk8:gpfs_node1:gpfs_node2:dataAndMetadata:1:nsd_04

nsd_04:::dataAndMetadata:1::

team4_1:/tmp/gpfs#>mmlsnsd

File system Disk name NSD servers

---------------------------------------------------------------------------

(free disk) TB1 gpfs_node1,gpfs_node2

(free disk) TB2 gpfs_node1,gpfs_node2

(free disk) TB3 gpfs_node1,gpfs_node2

(free disk) nsd_01 gpfs_node1,gpfs_node2


(free disk) nsd_02 gpfs_node1,gpfs_node2

(free disk) nsd_03 gpfs_node1,gpfs_node2

(free disk) nsd_04 gpfs_node1,gpfs_node2

Start GPFS Cluster Daemon. team4_1:/tmp/gpfs#>mmstartup -a

Wed Oct 28 21:10:54 KORST 2009: 6027-1642 mmstartup: Starting GPFS ...

team4_1:/tmp/gpfs#>mmgetstate -a

Node number Node name GPFS state

------------------------------------------

1 gpfs_node1 active

2 gpfs_node2 active

3 gpfs_node3 arbitrating

Make a File system. team4_1:/tmp/gpfs#>mmcrfs /gpfs01 /dev/gpfs01 -F /tmp/gpfs/disk2.desc -B 1024k -n 10 -N 5000

GPFS: 6027-531 The following disks of gpfs01 will be formatted on node team4_1:

nsd_01: size 52428800 KB

nsd_02: size 52428800 KB

GPFS: 6027-540 Formatting file system ...

GPFS: 6027-535 Disks up to size 535 GB can be added to storage pool 'system'.

Creating Inode File

Creating Allocation Maps

Clearing Inode Allocation Map

Clearing Block Allocation Map

Formatting Allocation Map for storage pool 'system'

GPFS: 6027-572 Completed creation of file system /dev/gpfs01.

mmcrfs: 6027-1371 Propagating the cluster configuration data to all

affected nodes. This is an asynchronous process.

team4_1:/tmp/gpfs#>mmcrfs /gpfs02 /dev/gpfs02 -F /tmp/gpfs/disk3.desc -B 256k -n 10 -N 5000

GPFS: 6027-531 The following disks of gpfs02 will be formatted on node team4_2:

nsd_03: size 52428800 KB

nsd_04: size 52428800 KB

GPFS: 6027-540 Formatting file system ...

GPFS: 6027-535 Disks up to size 710 GB can be added to storage pool 'system'.

Creating Inode File

Creating Allocation Maps

Clearing Inode Allocation Map


Clearing Block Allocation Map

Formatting Allocation Map for storage pool 'system'

GPFS: 6027-572 Completed creation of file system /dev/gpfs02.

mmcrfs: 6027-1371 Propagating the cluster configuration data to all

affected nodes. This is an asynchronous process.

Mount file system. team4_1:/tmp/gpfs#>mmmount /gpfs01

Wed Oct 28 21:33:01 KORST 2009: 6027-1623 mmmount: Mounting file systems ...

team4_1:/tmp/gpfs#>mmmount /gpfs02

Wed Oct 28 21:33:06 KORST 2009: 6027-1623 mmmount: Mounting file systems ...

team4_1:/gpfs02#>df -gt

Filesystem GB blocks Used Free %Used Mounted on

...

/dev/gpfs01 100.00 0.06 99.94 1% /gpfs01

/dev/gpfs02 100.00 0.07 99.93 1% /gpfs02

team4_2:/#>df -gt

Filesystem GB blocks Used Free %Used Mounted on

...

/dev/gpfs01 100.00 0.75 99.25 1% /gpfs01

/dev/gpfs02 100.00 0.19 99.81 1% /gpfs02

team4_3:/#>df -gt

Filesystem GB blocks Used Free %Used Mounted on

...

/dev/gpfs01 100.00 0.06 99.94 1% /gpfs01

/dev/gpfs02 100.00 0.07 99.93 1% /gpfs02
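The parameters chosen at creation time can be verified afterwards, for example:

mmlsfs gpfs01 -B    # block size (1024k for gpfs01, 256k for gpfs02)
mmlsdisk gpfs01     # NSDs backing the file system and their state
mmdf gpfs01         # capacity and usage per disk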


14. pLinux GPFS Client Installation

First, run ssh-keygen and sync the key file across all GPFS nodes. Then disable SELinux and the iptables service; one system reboot is needed afterwards. Check the network configuration: [root@plinux ~]# ifconfig

eth0 Link encap:Ethernet HWaddr 00:14:5E:5F:4F:B5

inet addr:185.100.100.147 Bcast:185.100.100.255 Mask:255.255.255.0

inet6 addr: fe80::214:5eff:fe5f:4fb5/64 Scope:Link

UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

RX packets:3809 errors:0 dropped:0 overruns:0 frame:0

TX packets:1548 errors:0 dropped:0 overruns:0 carrier:0

collisions:0 txqueuelen:1000

RX bytes:460021 (449.2 KiB) TX bytes:395167 (385.9 KiB)

eth1 Link encap:Ethernet HWaddr 00:14:5E:5F:4F:B6

inet addr:194.1.1.48 Bcast:194.1.1.255 Mask:255.255.255.0

inet6 addr: fe80::214:5eff:fe5f:4fb6/64 Scope:Link

UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

RX packets:16 errors:0 dropped:0 overruns:0 frame:0

TX packets:25 errors:0 dropped:0 overruns:0 carrier:0

collisions:0 txqueuelen:1000

RX bytes:16404 (16.0 KiB) TX bytes:10400 (10.1 KiB)

Edit host file [root@plinux ~]# vi /etc/hosts

# Do not remove the following line, or various programs

# that require network functionality will fail.

127.0.0.1 plinux localhost.localdomain localhost

::1 localhost6.localdomain6 localhost6

185.100.100.147 plinux

## GPFS Network ##

194.1.1.44 gpfs_node1

194.1.1.45 gpfs_node2

194.1.1.46 gpfs_node3

194.1.1.47 gpfs_node4

194.1.1.48 gpfs_node5


Update user profile [root@plinux ~]# tail /etc/bashrc

PATH=$PATH:$HOME/bin:/usr/lpp/mmfs/bin

MANPATH=$MANPATH:/usr/lpp/mmfs/messages

Install the packages [root@plinux gpfs]# ls

gpfs-3.3.0-1.sles.ppc64.update.tar.gz gpfs.docs-3.3.0-0.noarch.rpm gpfs.gui-3.3.0-0.sles9.ppc64.rpm

gpfs.base-3.3.0-0.sles.ppc64.rpm gpfs.gpl-3.3.0-0.noarch.rpm gpfs.msg.en_US-3.3.0-0.noarch.rpm

[root@plinux gpfs]# rpm -ivh gpfs.base-3.3.0-0.sles.ppc64.rpm

Preparing... ########################################### [100%]

1:gpfs.base ########################################### [100%]

[root@plinux gpfs]# rpm -ivh gpfs.docs-3.3.0-0.noarch.rpm

Preparing... ########################################### [100%]

1:gpfs.docs ########################################### [100%]

[root@plinux gpfs]# rpm -ivh gpfs.gpl-3.3.0-0.noarch.rpm

Preparing... ########################################### [100%]

1:gpfs.gpl ########################################### [100%]

[root@plinux gpfs]# rpm -ivh gpfs.msg.en_US-3.3.0-0.noarch.rpm

Preparing... ########################################### [100%]

1:gpfs.msg.en_US ########################################### [100%]

[root@plinux gpfs]# gzip -d gpfs-3.3.0-1.sles.ppc64.update.tar.gz

[root@plinux gpfs]# tar -xf gpfs-3.3.0-1.sles.ppc64.update.tar

[root@plinux gpfs]# rpm -qa|grep gpfs

gpfs.msg.en_US-3.3.0-1

gpfs.gpl-3.3.0-1

gpfs.docs-3.3.0-1

gpfs.base-3.3.0-1

Make portable layer [root@plinux src]# make Autoconfig

cd /usr/lpp/mmfs/src/config; ./configure --genenvonly; /usr/bin/cpp -P def.mk.proto > ./def.mk; exit $? || exit 1

[root@plinux src]# make World

Verifying that tools to build the portability layer exist....

cpp present

gcc present

g++ present

ld present


cd /usr/lpp/mmfs/src/config; /usr/bin/cpp -P def.mk.proto > ./def.mk; exit $? || exit 1

rm -rf /usr/lpp/mmfs/src/include /usr/lpp/mmfs/src/bin /usr/lpp/mmfs/src/lib

mkdir /usr/lpp/mmfs/src/include /usr/lpp/mmfs/src/bin /usr/lpp/mmfs/src/lib

rm -f //usr/lpp/mmfs/src/gpl-linux/gpl_kernel.tmp.ver

for i in ibm-kxi ibm-linux gpl-linux ; do \

(cd $i; echo "cleaning" "(`pwd`)"; \

/usr/bin/make DESTDIR=/usr/lpp/mmfs/src Clean; \

exit $?) || exit 1; \

done

cleaning (/usr/lpp/mmfs/src/ibm-kxi)

make[1]: Entering directory `/usr/lpp/mmfs/src/ibm-kxi'

rm -f ibm_kxi.trclst

rm -f install.he; \

for i in cxiTypes.h cxiSystem.h cxi2gpfs.h cxiVFSStats.h cxiCred.h cxiIOBuffer.h cxiSharedSeg.h cxiMode.h Trace.h

cxiMmap.h cxiAtomic.h cxiTSFattr.h cxiAclUser.h cxiLinkList.h cxiDmapi.h LockNames.h lxtrace.h DirIds.h; do \

(set -x; rm -f -r /usr/lpp/mmfs/src/include/cxi/$i) done

+ rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiTypes.h

+ rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiSystem.h

+ rm -f -r /usr/lpp/mmfs/src/include/cxi/cxi2gpfs.h

+ rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiVFSStats.h

+ rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiCred.h

+ rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiIOBuffer.h

+ rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiSharedSeg.h

+ rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiMode.h

+ rm -f -r /usr/lpp/mmfs/src/include/cxi/Trace.h

+ rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiMmap.h

+ rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiAtomic.h

+ rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiTSFattr.h

+ rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiAclUser.h

+ rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiLinkList.h

+ rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiDmapi.h

+ rm -f -r /usr/lpp/mmfs/src/include/cxi/LockNames.h

+ rm -f -r /usr/lpp/mmfs/src/include/cxi/lxtrace.h

+ rm -f -r /usr/lpp/mmfs/src/include/cxi/DirIds.h

make[1]: Leaving directory `/usr/lpp/mmfs/src/ibm-kxi'

cleaning (/usr/lpp/mmfs/src/ibm-linux)

make[1]: Entering directory `/usr/lpp/mmfs/src/ibm-linux'

rm -f install.he; \

for i in cxiTypes-plat.h cxiSystem-plat.h cxiIOBuffer-plat.h cxiSharedSeg-plat.h cxiMode-plat.h Trace-plat.h

cxiAtomic-plat.h cxiMmap-plat.h cxiVFSStats-plat.h cxiCred-plat.h cxiDmapi-plat.h; do \

(set -x; rm -rf /usr/lpp/mmfs/src/include/cxi/$i) done


+ rm -rf /usr/lpp/mmfs/src/include/cxi/cxiTypes-plat.h

+ rm -rf /usr/lpp/mmfs/src/include/cxi/cxiSystem-plat.h

+ rm -rf /usr/lpp/mmfs/src/include/cxi/cxiIOBuffer-plat.h

+ rm -rf /usr/lpp/mmfs/src/include/cxi/cxiSharedSeg-plat.h

+ rm -rf /usr/lpp/mmfs/src/include/cxi/cxiMode-plat.h

+ rm -rf /usr/lpp/mmfs/src/include/cxi/Trace-plat.h

+ rm -rf /usr/lpp/mmfs/src/include/cxi/cxiAtomic-plat.h

+ rm -rf /usr/lpp/mmfs/src/include/cxi/cxiMmap-plat.h

+ rm -rf /usr/lpp/mmfs/src/include/cxi/cxiVFSStats-plat.h

+ rm -rf /usr/lpp/mmfs/src/include/cxi/cxiCred-plat.h

+ rm -rf /usr/lpp/mmfs/src/include/cxi/cxiDmapi-plat.h

make[1]: Leaving directory `/usr/lpp/mmfs/src/ibm-linux'

cleaning (/usr/lpp/mmfs/src/gpl-linux)

make[1]: Entering directory `/usr/lpp/mmfs/src/gpl-linux'

Pre-kbuild step 1...

/usr/bin/make -C /lib/modules/2.6.18-128.el5/build M=/usr/lpp/mmfs/src/gpl-linux clean

make[2]: Entering directory `/usr/src/kernels/2.6.18-128.el5-ppc64'

make[2]: Leaving directory `/usr/src/kernels/2.6.18-128.el5-ppc64'

rm -f -f /lib/modules/`cat //usr/lpp/mmfs/src/gpl-linux/gpl_kernel.tmp.ver`/extra/tracedev.ko

rm -f -f /lib/modules/`cat //usr/lpp/mmfs/src/gpl-linux/gpl_kernel.tmp.ver`/extra/mmfslinux.ko

rm -f -f /lib/modules/`cat //usr/lpp/mmfs/src/gpl-linux/gpl_kernel.tmp.ver`/extra/mmfs26.ko

rm -f -f /usr/lpp/mmfs/src/../bin/lxtrace-`cat //usr/lpp/mmfs/src/gpl-linux/gpl_kernel.tmp.ver`

rm -f -f /usr/lpp/mmfs/src/../bin/kdump-`cat //usr/lpp/mmfs/src/gpl-linux/gpl_kernel.tmp.ver`

rm -f -f *.o *~ .depends .*.cmd *.ko *.a *.mod.c core *_shipped *map *mod.c.saved *.symvers *.ko.ver ./*.ver

install.he

rm -f -rf .tmp_versions kdump-kern-dwarfs.c

rm -f -f gpl-linux.trclst kdump lxtrace

rm -f -rf usr

make[1]: Leaving directory `/usr/lpp/mmfs/src/gpl-linux'

for i in ibm-kxi ibm-linux gpl-linux ; do \

(cd $i; echo "installing header files" "(`pwd`)"; \

/usr/bin/make DESTDIR=/usr/lpp/mmfs/src Headers; \

exit $?) || exit 1; \

done

installing header files (/usr/lpp/mmfs/src/ibm-kxi)

make[1]: Entering directory `/usr/lpp/mmfs/src/ibm-kxi'

Making directory /usr/lpp/mmfs/src/include/cxi

+ /usr/bin/install cxiTypes.h /usr/lpp/mmfs/src/include/cxi/cxiTypes.h

+ /usr/bin/install cxiSystem.h /usr/lpp/mmfs/src/include/cxi/cxiSystem.h

+ /usr/bin/install cxi2gpfs.h /usr/lpp/mmfs/src/include/cxi/cxi2gpfs.h

+ /usr/bin/install cxiVFSStats.h /usr/lpp/mmfs/src/include/cxi/cxiVFSStats.h


+ /usr/bin/install cxiCred.h /usr/lpp/mmfs/src/include/cxi/cxiCred.h

+ /usr/bin/install cxiIOBuffer.h /usr/lpp/mmfs/src/include/cxi/cxiIOBuffer.h

+ /usr/bin/install cxiSharedSeg.h /usr/lpp/mmfs/src/include/cxi/cxiSharedSeg.h

+ /usr/bin/install cxiMode.h /usr/lpp/mmfs/src/include/cxi/cxiMode.h

+ /usr/bin/install Trace.h /usr/lpp/mmfs/src/include/cxi/Trace.h

+ /usr/bin/install cxiMmap.h /usr/lpp/mmfs/src/include/cxi/cxiMmap.h

+ /usr/bin/install cxiAtomic.h /usr/lpp/mmfs/src/include/cxi/cxiAtomic.h

+ /usr/bin/install cxiTSFattr.h /usr/lpp/mmfs/src/include/cxi/cxiTSFattr.h

+ /usr/bin/install cxiAclUser.h /usr/lpp/mmfs/src/include/cxi/cxiAclUser.h

+ /usr/bin/install cxiLinkList.h /usr/lpp/mmfs/src/include/cxi/cxiLinkList.h

+ /usr/bin/install cxiDmapi.h /usr/lpp/mmfs/src/include/cxi/cxiDmapi.h

+ /usr/bin/install LockNames.h /usr/lpp/mmfs/src/include/cxi/LockNames.h

+ /usr/bin/install lxtrace.h /usr/lpp/mmfs/src/include/cxi/lxtrace.h

+ /usr/bin/install DirIds.h /usr/lpp/mmfs/src/include/cxi/DirIds.h

touch install.he

make[1]: Leaving directory `/usr/lpp/mmfs/src/ibm-kxi'

installing header files (/usr/lpp/mmfs/src/ibm-linux)

make[1]: Entering directory `/usr/lpp/mmfs/src/ibm-linux'

+ /usr/bin/install cxiTypes-plat.h /usr/lpp/mmfs/src/include/cxi/cxiTypes-plat.h

+ /usr/bin/install cxiSystem-plat.h /usr/lpp/mmfs/src/include/cxi/cxiSystem-plat.h

+ /usr/bin/install cxiIOBuffer-plat.h /usr/lpp/mmfs/src/include/cxi/cxiIOBuffer-plat.h

+ /usr/bin/install cxiSharedSeg-plat.h /usr/lpp/mmfs/src/include/cxi/cxiSharedSeg-plat.h

+ /usr/bin/install cxiMode-plat.h /usr/lpp/mmfs/src/include/cxi/cxiMode-plat.h

+ /usr/bin/install Trace-plat.h /usr/lpp/mmfs/src/include/cxi/Trace-plat.h

+ /usr/bin/install cxiAtomic-plat.h /usr/lpp/mmfs/src/include/cxi/cxiAtomic-plat.h

+ /usr/bin/install cxiMmap-plat.h /usr/lpp/mmfs/src/include/cxi/cxiMmap-plat.h

+ /usr/bin/install cxiVFSStats-plat.h /usr/lpp/mmfs/src/include/cxi/cxiVFSStats-plat.h

+ /usr/bin/install cxiCred-plat.h /usr/lpp/mmfs/src/include/cxi/cxiCred-plat.h

+ /usr/bin/install cxiDmapi-plat.h /usr/lpp/mmfs/src/include/cxi/cxiDmapi-plat.h

touch install.he

make[1]: Leaving directory `/usr/lpp/mmfs/src/ibm-linux'

installing header files (/usr/lpp/mmfs/src/gpl-linux)

make[1]: Entering directory `/usr/lpp/mmfs/src/gpl-linux'

Making directory /usr/lpp/mmfs/src/include/gpl-linux

+ /usr/bin/install Shark-gpl.h /usr/lpp/mmfs/src/include/gpl-linux/Shark-gpl.h

+ /usr/bin/install prelinux.h /usr/lpp/mmfs/src/include/gpl-linux/prelinux.h

+ /usr/bin/install postlinux.h /usr/lpp/mmfs/src/include/gpl-linux/postlinux.h

+ /usr/bin/install linux2gpfs.h /usr/lpp/mmfs/src/include/gpl-linux/linux2gpfs.h

+ /usr/bin/install verdep.h /usr/lpp/mmfs/src/include/gpl-linux/verdep.h

+ /usr/bin/install Logger-gpl.h /usr/lpp/mmfs/src/include/gpl-linux/Logger-gpl.h

+ /usr/bin/install arch-gpl.h /usr/lpp/mmfs/src/include/gpl-linux/arch-gpl.h


+ /usr/bin/install oplock.h /usr/lpp/mmfs/src/include/gpl-linux/oplock.h

touch install.he

make[1]: Leaving directory `/usr/lpp/mmfs/src/gpl-linux'

make[1]: Entering directory `/usr/lpp/mmfs/src/gpl-linux'

Pre-kbuild step 1...

Pre-kbuild step 2...

touch install.he

Invoking Kbuild...

make[2]: Entering directory `/usr/src/kernels/2.6.18-128.el5-ppc64'

LD /usr/lpp/mmfs/src/gpl-linux/built-in.o

CC [M] /usr/lpp/mmfs/src/gpl-linux/kdump-kern.o

CC [M] /usr/lpp/mmfs/src/gpl-linux/kdump-stub.o

CC [M] /usr/lpp/mmfs/src/gpl-linux/mmwrap.o

CC [M] /usr/lpp/mmfs/src/gpl-linux/mmfsmod.o

CC [M] /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.o

CC [M] /usr/lpp/mmfs/src/gpl-linux/ppc64/ss_ppc64.o

CC [M] /usr/lpp/mmfs/src/gpl-linux/tracelin.o

CC [M] /usr/lpp/mmfs/src/gpl-linux/tracedev-ksyms.o

CC [M] /usr/lpp/mmfs/src/gpl-linux/ktrccalls.o

CC [M] /usr/lpp/mmfs/src/gpl-linux/relaytrc.o

LD [M] /usr/lpp/mmfs/src/gpl-linux/tracedev.o

LD [M] /usr/lpp/mmfs/src/gpl-linux/mmfs26.o

LD [M] /usr/lpp/mmfs/src/gpl-linux/mmfslinux.o

LD [M] /usr/lpp/mmfs/src/gpl-linux/kdump-kern-dummy.o

CC [M] /usr/lpp/mmfs/src/gpl-linux/kdump-kern-dwarfs.o

HOSTCC /usr/lpp/mmfs/src/gpl-linux/lxtrace.o

HOSTCC /usr/lpp/mmfs/src/gpl-linux/lxtrace_rl.o

HOSTLD /usr/lpp/mmfs/src/gpl-linux/lxtrace

Building modules, stage 2.

MODPOST

WARNING: could not find /usr/lpp/mmfs/src/gpl-linux/.mmfs.o_shipped.cmd for /usr/lpp/mmfs/src/gpl-

linux/mmfs.o_shipped

WARNING: could not find /usr/lpp/mmfs/src/gpl-linux/.libgcc.a_shipped.cmd for /usr/lpp/mmfs/src/gpl-

linux/libgcc.a_shipped

CC /usr/lpp/mmfs/src/gpl-linux/kdump-kern-dummy.mod.o

LD [M] /usr/lpp/mmfs/src/gpl-linux/kdump-kern-dummy.ko

CC /usr/lpp/mmfs/src/gpl-linux/kdump-kern-dwarfs.mod.o

LD [M] /usr/lpp/mmfs/src/gpl-linux/kdump-kern-dwarfs.ko

CC /usr/lpp/mmfs/src/gpl-linux/mmfs26.mod.o

LD [M] /usr/lpp/mmfs/src/gpl-linux/mmfs26.ko

CC /usr/lpp/mmfs/src/gpl-linux/mmfslinux.mod.o


LD [M] /usr/lpp/mmfs/src/gpl-linux/mmfslinux.ko

CC /usr/lpp/mmfs/src/gpl-linux/tracedev.mod.o

LD [M] /usr/lpp/mmfs/src/gpl-linux/tracedev.ko

make[2]: Leaving directory `/usr/src/kernels/2.6.18-128.el5-ppc64'

cc kdump.o kdump-kern.o kdump-kern-dwarfs.o -o kdump -melf64ppc -m64 -lpthread

make[1]: Leaving directory `/usr/lpp/mmfs/src/gpl-linux'

for i in ibm-kxi ibm-linux gpl-linux; do \

(cd $i; echo "installing" "(`pwd`)"; \

/usr/bin/make DESTDIR=/usr/lpp/mmfs/src install; \

exit $?) || exit 1; \

done

installing (/usr/lpp/mmfs/src/ibm-kxi)

make[1]: Entering directory `/usr/lpp/mmfs/src/ibm-kxi'

touch install.he

make[1]: Leaving directory `/usr/lpp/mmfs/src/ibm-kxi'

installing (/usr/lpp/mmfs/src/ibm-linux)

make[1]: Entering directory `/usr/lpp/mmfs/src/ibm-linux'

touch install.he

make[1]: Leaving directory `/usr/lpp/mmfs/src/ibm-linux'

installing (/usr/lpp/mmfs/src/gpl-linux)

make[1]: Entering directory `/usr/lpp/mmfs/src/gpl-linux'

/usr/bin/install -c -m 0500 lxtrace /usr/lpp/mmfs/src/bin/lxtrace-`cat //usr/lpp/mmfs/src/gpl-linux/gpl_kernel.tmp.ver`

/usr/bin/install -c -m 0500 kdump /usr/lpp/mmfs/src/bin/kdump-`cat //usr/lpp/mmfs/src/gpl-linux/gpl_kernel.tmp.ver`

make[1]: Leaving directory `/usr/lpp/mmfs/src/gpl-linux'

[root@plinux src]#

[root@plinux src]# make World

Verifying that tools to build the portability layer exist....

cpp present

gcc present

g++ present

ld present

cd /usr/lpp/mmfs/src/config; /usr/bin/cpp -P def.mk.proto > ./def.mk; exit $? || exit 1

rm -rf /usr/lpp/mmfs/src/include /usr/lpp/mmfs/src/bin /usr/lpp/mmfs/src/lib

mkdir /usr/lpp/mmfs/src/include /usr/lpp/mmfs/src/bin /usr/lpp/mmfs/src/lib

rm -f //usr/lpp/mmfs/src/gpl-linux/gpl_kernel.tmp.ver

for i in ibm-kxi ibm-linux gpl-linux ; do \

(cd $i; echo "cleaning" "(`pwd`)"; \

/usr/bin/make DESTDIR=/usr/lpp/mmfs/src Clean; \

exit $?) || exit 1; \


done

cleaning (/usr/lpp/mmfs/src/ibm-kxi)

make[1]: Entering directory `/usr/lpp/mmfs/src/ibm-kxi'

rm -f ibm_kxi.trclst

rm -f install.he; \

for i in cxiTypes.h cxiSystem.h cxi2gpfs.h cxiVFSStats.h cxiCred.h cxiIOBuffer.h cxiSharedSeg.h cxiMode.h Trace.h

cxiMmap.h cxiAtomic.h cxiTSFattr.h cxiAclUser.h cxiLinkList.h cxiDmapi.h LockNames.h lxtrace.h DirIds.h; do \

(set -x; rm -f -r /usr/lpp/mmfs/src/include/cxi/$i) done

+ rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiTypes.h

+ rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiSystem.h

+ rm -f -r /usr/lpp/mmfs/src/include/cxi/cxi2gpfs.h

+ rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiVFSStats.h

+ rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiCred.h

+ rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiIOBuffer.h

+ rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiSharedSeg.h

+ rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiMode.h

+ rm -f -r /usr/lpp/mmfs/src/include/cxi/Trace.h

+ rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiMmap.h

+ rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiAtomic.h

+ rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiTSFattr.h

+ rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiAclUser.h

+ rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiLinkList.h

+ rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiDmapi.h

+ rm -f -r /usr/lpp/mmfs/src/include/cxi/LockNames.h

+ rm -f -r /usr/lpp/mmfs/src/include/cxi/lxtrace.h

+ rm -f -r /usr/lpp/mmfs/src/include/cxi/DirIds.h

make[1]: Leaving directory `/usr/lpp/mmfs/src/ibm-kxi'

cleaning (/usr/lpp/mmfs/src/ibm-linux)

make[1]: Entering directory `/usr/lpp/mmfs/src/ibm-linux'

rm -f install.he; \

for i in cxiTypes-plat.h cxiSystem-plat.h cxiIOBuffer-plat.h cxiSharedSeg-plat.h cxiMode-plat.h Trace-plat.h

cxiAtomic-plat.h cxiMmap-plat.h cxiVFSStats-plat.h cxiCred-plat.h cxiDmapi-plat.h; do \

(set -x; rm -rf /usr/lpp/mmfs/src/include/cxi/$i) done

+ rm -rf /usr/lpp/mmfs/src/include/cxi/cxiTypes-plat.h

+ rm -rf /usr/lpp/mmfs/src/include/cxi/cxiSystem-plat.h

+ rm -rf /usr/lpp/mmfs/src/include/cxi/cxiIOBuffer-plat.h

+ rm -rf /usr/lpp/mmfs/src/include/cxi/cxiSharedSeg-plat.h

+ rm -rf /usr/lpp/mmfs/src/include/cxi/cxiMode-plat.h

+ rm -rf /usr/lpp/mmfs/src/include/cxi/Trace-plat.h

+ rm -rf /usr/lpp/mmfs/src/include/cxi/cxiAtomic-plat.h

+ rm -rf /usr/lpp/mmfs/src/include/cxi/cxiMmap-plat.h


+ rm -rf /usr/lpp/mmfs/src/include/cxi/cxiVFSStats-plat.h

+ rm -rf /usr/lpp/mmfs/src/include/cxi/cxiCred-plat.h

+ rm -rf /usr/lpp/mmfs/src/include/cxi/cxiDmapi-plat.h

make[1]: Leaving directory `/usr/lpp/mmfs/src/ibm-linux'

cleaning (/usr/lpp/mmfs/src/gpl-linux)

make[1]: Entering directory `/usr/lpp/mmfs/src/gpl-linux'

Pre-kbuild step 1...

/usr/bin/make -C /lib/modules/2.6.18-128.el5/build M=/usr/lpp/mmfs/src/gpl-linux clean

make[2]: Entering directory `/usr/src/kernels/2.6.18-128.el5-ppc64'

CLEAN /usr/lpp/mmfs/src/gpl-linux

CLEAN /usr/lpp/mmfs/src/gpl-linux/.tmp_versions

make[2]: Leaving directory `/usr/src/kernels/2.6.18-128.el5-ppc64'

rm -f -f /lib/modules/`cat //usr/lpp/mmfs/src/gpl-linux/gpl_kernel.tmp.ver`/extra/tracedev.ko

rm -f -f /lib/modules/`cat //usr/lpp/mmfs/src/gpl-linux/gpl_kernel.tmp.ver`/extra/mmfslinux.ko

rm -f -f /lib/modules/`cat //usr/lpp/mmfs/src/gpl-linux/gpl_kernel.tmp.ver`/extra/mmfs26.ko

rm -f -f /usr/lpp/mmfs/src/../bin/lxtrace-`cat //usr/lpp/mmfs/src/gpl-linux/gpl_kernel.tmp.ver`

rm -f -f /usr/lpp/mmfs/src/../bin/kdump-`cat //usr/lpp/mmfs/src/gpl-linux/gpl_kernel.tmp.ver`

rm -f -f *.o *~ .depends .*.cmd *.ko *.a *.mod.c core *_shipped *map *mod.c.saved *.symvers *.ko.ver ./*.ver

install.he

rm -f -rf .tmp_versions kdump-kern-dwarfs.c

rm -f -f gpl-linux.trclst kdump lxtrace

rm -f -rf usr

make[1]: Leaving directory `/usr/lpp/mmfs/src/gpl-linux'

for i in ibm-kxi ibm-linux gpl-linux ; do \

(cd $i; echo "installing header files" "(`pwd`)"; \

/usr/bin/make DESTDIR=/usr/lpp/mmfs/src Headers; \

exit $?) || exit 1; \

done

installing header files (/usr/lpp/mmfs/src/ibm-kxi)

make[1]: Entering directory `/usr/lpp/mmfs/src/ibm-kxi'

Making directory /usr/lpp/mmfs/src/include/cxi

+ /usr/bin/install cxiTypes.h /usr/lpp/mmfs/src/include/cxi/cxiTypes.h

+ /usr/bin/install cxiSystem.h /usr/lpp/mmfs/src/include/cxi/cxiSystem.h

+ /usr/bin/install cxi2gpfs.h /usr/lpp/mmfs/src/include/cxi/cxi2gpfs.h

+ /usr/bin/install cxiVFSStats.h /usr/lpp/mmfs/src/include/cxi/cxiVFSStats.h

+ /usr/bin/install cxiCred.h /usr/lpp/mmfs/src/include/cxi/cxiCred.h

+ /usr/bin/install cxiIOBuffer.h /usr/lpp/mmfs/src/include/cxi/cxiIOBuffer.h

+ /usr/bin/install cxiSharedSeg.h /usr/lpp/mmfs/src/include/cxi/cxiSharedSeg.h

+ /usr/bin/install cxiMode.h /usr/lpp/mmfs/src/include/cxi/cxiMode.h

+ /usr/bin/install Trace.h /usr/lpp/mmfs/src/include/cxi/Trace.h

+ /usr/bin/install cxiMmap.h /usr/lpp/mmfs/src/include/cxi/cxiMmap.h


+ /usr/bin/install cxiAtomic.h /usr/lpp/mmfs/src/include/cxi/cxiAtomic.h

+ /usr/bin/install cxiTSFattr.h /usr/lpp/mmfs/src/include/cxi/cxiTSFattr.h

+ /usr/bin/install cxiAclUser.h /usr/lpp/mmfs/src/include/cxi/cxiAclUser.h

+ /usr/bin/install cxiLinkList.h /usr/lpp/mmfs/src/include/cxi/cxiLinkList.h

+ /usr/bin/install cxiDmapi.h /usr/lpp/mmfs/src/include/cxi/cxiDmapi.h

+ /usr/bin/install LockNames.h /usr/lpp/mmfs/src/include/cxi/LockNames.h

+ /usr/bin/install lxtrace.h /usr/lpp/mmfs/src/include/cxi/lxtrace.h

+ /usr/bin/install DirIds.h /usr/lpp/mmfs/src/include/cxi/DirIds.h

touch install.he

make[1]: Leaving directory `/usr/lpp/mmfs/src/ibm-kxi'

installing header files (/usr/lpp/mmfs/src/ibm-linux)

make[1]: Entering directory `/usr/lpp/mmfs/src/ibm-linux'

+ /usr/bin/install cxiTypes-plat.h /usr/lpp/mmfs/src/include/cxi/cxiTypes-plat.h

+ /usr/bin/install cxiSystem-plat.h /usr/lpp/mmfs/src/include/cxi/cxiSystem-plat.h

+ /usr/bin/install cxiIOBuffer-plat.h /usr/lpp/mmfs/src/include/cxi/cxiIOBuffer-plat.h

+ /usr/bin/install cxiSharedSeg-plat.h /usr/lpp/mmfs/src/include/cxi/cxiSharedSeg-plat.h

+ /usr/bin/install cxiMode-plat.h /usr/lpp/mmfs/src/include/cxi/cxiMode-plat.h

+ /usr/bin/install Trace-plat.h /usr/lpp/mmfs/src/include/cxi/Trace-plat.h

+ /usr/bin/install cxiAtomic-plat.h /usr/lpp/mmfs/src/include/cxi/cxiAtomic-plat.h

+ /usr/bin/install cxiMmap-plat.h /usr/lpp/mmfs/src/include/cxi/cxiMmap-plat.h

+ /usr/bin/install cxiVFSStats-plat.h /usr/lpp/mmfs/src/include/cxi/cxiVFSStats-plat.h

+ /usr/bin/install cxiCred-plat.h /usr/lpp/mmfs/src/include/cxi/cxiCred-plat.h

+ /usr/bin/install cxiDmapi-plat.h /usr/lpp/mmfs/src/include/cxi/cxiDmapi-plat.h

touch install.he

make[1]: Leaving directory `/usr/lpp/mmfs/src/ibm-linux'

installing header files (/usr/lpp/mmfs/src/gpl-linux)

make[1]: Entering directory `/usr/lpp/mmfs/src/gpl-linux'

Making directory /usr/lpp/mmfs/src/include/gpl-linux

+ /usr/bin/install Shark-gpl.h /usr/lpp/mmfs/src/include/gpl-linux/Shark-gpl.h

+ /usr/bin/install prelinux.h /usr/lpp/mmfs/src/include/gpl-linux/prelinux.h

+ /usr/bin/install postlinux.h /usr/lpp/mmfs/src/include/gpl-linux/postlinux.h

+ /usr/bin/install linux2gpfs.h /usr/lpp/mmfs/src/include/gpl-linux/linux2gpfs.h

+ /usr/bin/install verdep.h /usr/lpp/mmfs/src/include/gpl-linux/verdep.h

+ /usr/bin/install Logger-gpl.h /usr/lpp/mmfs/src/include/gpl-linux/Logger-gpl.h

+ /usr/bin/install arch-gpl.h /usr/lpp/mmfs/src/include/gpl-linux/arch-gpl.h

+ /usr/bin/install oplock.h /usr/lpp/mmfs/src/include/gpl-linux/oplock.h

touch install.he

make[1]: Leaving directory `/usr/lpp/mmfs/src/gpl-linux'

make[1]: Entering directory `/usr/lpp/mmfs/src/gpl-linux'

Pre-kbuild step 1...

Pre-kbuild step 2...


touch install.he

Invoking Kbuild...

make[2]: Entering directory `/usr/src/kernels/2.6.18-128.el5-ppc64'

LD /usr/lpp/mmfs/src/gpl-linux/built-in.o

CC [M] /usr/lpp/mmfs/src/gpl-linux/kdump-kern.o

CC [M] /usr/lpp/mmfs/src/gpl-linux/kdump-stub.o

CC [M] /usr/lpp/mmfs/src/gpl-linux/mmwrap.o

CC [M] /usr/lpp/mmfs/src/gpl-linux/mmfsmod.o

CC [M] /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.o

CC [M] /usr/lpp/mmfs/src/gpl-linux/ppc64/ss_ppc64.o

CC [M] /usr/lpp/mmfs/src/gpl-linux/tracelin.o

CC [M] /usr/lpp/mmfs/src/gpl-linux/tracedev-ksyms.o

CC [M] /usr/lpp/mmfs/src/gpl-linux/ktrccalls.o

CC [M] /usr/lpp/mmfs/src/gpl-linux/relaytrc.o

LD [M] /usr/lpp/mmfs/src/gpl-linux/tracedev.o

LD [M] /usr/lpp/mmfs/src/gpl-linux/mmfs26.o

LD [M] /usr/lpp/mmfs/src/gpl-linux/mmfslinux.o

LD [M] /usr/lpp/mmfs/src/gpl-linux/kdump-kern-dummy.o

CC [M] /usr/lpp/mmfs/src/gpl-linux/kdump-kern-dwarfs.o

HOSTCC /usr/lpp/mmfs/src/gpl-linux/lxtrace.o

HOSTCC /usr/lpp/mmfs/src/gpl-linux/lxtrace_rl.o

HOSTLD /usr/lpp/mmfs/src/gpl-linux/lxtrace

Building modules, stage 2.

MODPOST

WARNING: could not find /usr/lpp/mmfs/src/gpl-linux/.mmfs.o_shipped.cmd for /usr/lpp/mmfs/src/gpl-

linux/mmfs.o_shipped

WARNING: could not find /usr/lpp/mmfs/src/gpl-linux/.libgcc.a_shipped.cmd for /usr/lpp/mmfs/src/gpl-

linux/libgcc.a_shipped

CC /usr/lpp/mmfs/src/gpl-linux/kdump-kern-dummy.mod.o

LD [M] /usr/lpp/mmfs/src/gpl-linux/kdump-kern-dummy.ko

CC /usr/lpp/mmfs/src/gpl-linux/kdump-kern-dwarfs.mod.o

LD [M] /usr/lpp/mmfs/src/gpl-linux/kdump-kern-dwarfs.ko

CC /usr/lpp/mmfs/src/gpl-linux/mmfs26.mod.o

LD [M] /usr/lpp/mmfs/src/gpl-linux/mmfs26.ko

CC /usr/lpp/mmfs/src/gpl-linux/mmfslinux.mod.o

LD [M] /usr/lpp/mmfs/src/gpl-linux/mmfslinux.ko

CC /usr/lpp/mmfs/src/gpl-linux/tracedev.mod.o

LD [M] /usr/lpp/mmfs/src/gpl-linux/tracedev.ko

make[2]: Leaving directory `/usr/src/kernels/2.6.18-128.el5-ppc64'

cc kdump.o kdump-kern.o kdump-kern-dwarfs.o -o kdump -melf64ppc -m64 -lpthread

make[1]: Leaving directory `/usr/lpp/mmfs/src/gpl-linux'


for i in ibm-kxi ibm-linux gpl-linux; do \

(cd $i; echo "installing" "(`pwd`)"; \

/usr/bin/make DESTDIR=/usr/lpp/mmfs/src install; \

exit $?) || exit 1; \

done

installing (/usr/lpp/mmfs/src/ibm-kxi)

make[1]: Entering directory `/usr/lpp/mmfs/src/ibm-kxi'

touch install.he

make[1]: Leaving directory `/usr/lpp/mmfs/src/ibm-kxi'

installing (/usr/lpp/mmfs/src/ibm-linux)

make[1]: Entering directory `/usr/lpp/mmfs/src/ibm-linux'

touch install.he

make[1]: Leaving directory `/usr/lpp/mmfs/src/ibm-linux'

installing (/usr/lpp/mmfs/src/gpl-linux)

make[1]: Entering directory `/usr/lpp/mmfs/src/gpl-linux'

/usr/bin/install -c -m 0500 lxtrace /usr/lpp/mmfs/src/bin/lxtrace-`cat //usr/lpp/mmfs/src/gpl-linux/gpl_kernel.tmp.ver`

/usr/bin/install -c -m 0500 kdump /usr/lpp/mmfs/src/bin/kdump-`cat //usr/lpp/mmfs/src/gpl-linux/gpl_kernel.tmp.ver`

make[1]: Leaving directory `/usr/lpp/mmfs/src/gpl-linux'

[root@plinux src]# make InstallImages

(cd gpl-linux; /usr/bin/make InstallImages; \

exit $?) || exit 1

make[1]: Entering directory `/usr/lpp/mmfs/src/gpl-linux'

Pre-kbuild step 1...

make[2]: Entering directory `/usr/src/kernels/2.6.18-128.el5-ppc64'

INSTALL /usr/lpp/mmfs/src/gpl-linux/kdump-kern-dummy.ko

INSTALL /usr/lpp/mmfs/src/gpl-linux/kdump-kern-dwarfs.ko

INSTALL /usr/lpp/mmfs/src/gpl-linux/mmfs26.ko

INSTALL /usr/lpp/mmfs/src/gpl-linux/mmfslinux.ko

INSTALL /usr/lpp/mmfs/src/gpl-linux/tracedev.ko

DEPMOD 2.6.18-128.el5

make[2]: Leaving directory `/usr/src/kernels/2.6.18-128.el5-ppc64'

make[1]: Leaving directory `/usr/lpp/mmfs/src/gpl-linux'

[root@plinux src]#
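For reference, the complete portability layer build that produced the log above follows this sequence on the pLinux node (a minimal sketch; Autoconfig must already have been run before the World phase shown in the log):

cd /usr/lpp/mmfs/src
make Autoconfig        # generate the build configuration for the running kernel
make World             # compile the kernel modules (mmfs26, mmfslinux, tracedev)
make InstallImages     # install the modules for the 2.6.18-128.el5 kernel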

Add Nodes

team4_1:/tmp/gpfs#>mmaddnode -N gpfs_node5

Thu Oct 29 16:11:27 KORST 2009: 6027-1664 mmaddnode: Processing node gpfs_node5

mmaddnode: Command successfully completed

mmaddnode: 6027-1254 Warning: Not all nodes have proper GPFS license designations.

Use the mmchlicense command to designate licenses as needed.


mmaddnode: 6027-1371 Propagating the cluster configuration data to all

affected nodes. This is an asynchronous process.

team4_1:/tmp/gpfs#>mmlsnode -a

===============================================================================

| Warning: |

| This cluster contains nodes that do not have a proper GPFS license |

| designation. This violates the terms of the GPFS licensing agreement. |

| Use the mmchlicense command and assign the appropriate GPFS licenses |

| to each of the nodes in the cluster. For more information about GPFS |

| license designation, see the Concepts, Planning, and Installation Guide. |

===============================================================================

GPFS nodeset Node list

------------- -------------------------------------------------------

AIX_gpfs gpfs_node1 gpfs_node2 gpfs_node3 gpfs_node4 gpfs_node5

Accept License

team4_1:/tmp/gpfs#>mmchlicense client -N gpfs_node5

The following nodes will be designated as possessing GPFS client licenses:

gpfs_node5

Please confirm that you accept the terms of the GPFS client Licensing Agreement.

The full text can be found at www.ibm.com/software/sla

Enter "yes" or "no": yes

mmchlicense: Command successfully completed

mmchlicense: 6027-1371 Propagating the cluster configuration data to all

affected nodes. This is an asynchronous process.

Mount File System

[root@plinux .ssh]# mmstartup

Thu Oct 29 15:13:16 CST 2009: mmstartup: Starting GPFS ...

[root@plinux .ssh]# mmmount /gpfs01

Thu Oct 29 15:13:26 CST 2009: mmmount: Mounting file systems ...

[root@plinux .ssh]# df

Filesystem 1K-blocks Used Available Use% Mounted on

/dev/mapper/VolGroup00-LogVol00

47358856 4478464 40435900 10% /

/dev/sda2 101082 17572 78291 19% /boot

tmpfs 839872 0 839872 0% /dev/shm


/dev/gpfs02 104857600 440064 104417536 1% /gpfs02

/dev/gpfs_test 104857600 1013760 103843840 1% /gpfs_test

/dev/gpfs01 104857600 781312 104076288 1% /gpfs01

Check NSD Configuration

[root@plinux .ssh]# mmlsnsd -a

File system Disk name NSD servers

---------------------------------------------------------------------------

gpfs01 nsd_01 gpfs_node1,gpfs_node2

gpfs01 nsd_02 gpfs_node1,gpfs_node2

gpfs02 nsd_03 gpfs_node1,gpfs_node2

gpfs02 nsd_04 gpfs_node1,gpfs_node2

gpfs_test nsd_05 gpfs_node1,gpfs_node2

gpfs_test nsd_06 gpfs_node1,gpfs_node2

(free disk) TB1 gpfs_node1,gpfs_node2

(free disk) TB2 gpfs_node1,gpfs_node2

(free disk) TB3 gpfs_node1,gpfs_node2

Volume Status


15. Windows 2008 SP2 GPFS Client Installation

Check the OS version of the Windows server, then change the locale and keyboard to US English.



After the English (US) locale setup is complete, continue with user configuration.


Create a root account; the root user must be changed to the Administrator account type.


Configure the firewall: set it to disabled.

Disable UAC.


Reboot the system, then install the utility for Subsystem for UNIX-based Applications (SUA).


Choose the SUA package.



SUA Installation

Download the SUA package from: http://www.microsoft.com/downloads/details.aspx?familyid=93ff2201-325e-487f-a398-efde5758c47f&displaylang=en&Hash=IKXVxKqCKZcIPQFORRixLddbWfc2mSSt9JKcfApD6FwVpzi2%2f5oT4sIDTlhxY30lEcYD3MS9v1GgYwfy%2fUazew%3d%3d


Setup



Complete the SUA package installation.

Open a Korn shell in the Subsystem for UNIX-based Applications and log in as root with the Windows Administrator password (su -). Then install the additional packages for SUA.


Update Hosts File


Add node c5, the Windows GPFS cluster client node, to the hosts file (a sample follows below). Then install the GPFS v3.3 base package.
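A minimal hosts sketch, assuming the node names and 185.100.100.x addresses shown in the cluster listings later in this document (adjust to your environment):

# /etc/hosts on the UNIX nodes (and C:\Windows\System32\drivers\etc\hosts on c5)
185.100.100.201   c1
185.100.100.202   c2
185.100.100.203   c3
185.100.100.204   c4
185.100.100.206   c5     # Windows GPFS client node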


This installs the initial base package. After the base package installation completes, reboot the system; then, to move to the latest level, uninstall the base package and install the latest version.


Control Panel

Generate and share an SSH key from the Windows Korn shell. After creating the id_rsa.pub file, you must add it to authorized_keys on the AIX server; this file must be kept in sync across all GPFS cluster nodes.


Generate the SSH key, copy the key file to the other node, open the key file, and copy the key information (see the sketch below).
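A minimal sketch of the key exchange, run from the SUA Korn shell on the Windows node (host names and paths are illustrative):

# On the Windows node (c5), as root in the Korn shell
ssh-keygen -t rsa                         # accept defaults, empty passphrase
scp ~/.ssh/id_rsa.pub c1:/tmp/c5.pub      # copy the public key to an AIX node

# On the AIX node (c1)
cat /tmp/c5.pub >> ~/.ssh/authorized_keys # append the Windows node's key
# ...then copy authorized_keys to every other node in the cluster
ssh c5 date                               # verify passwordless remote shell works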


Paste the key information, test the remote shell, and accept the GPFS license:

- mmchlicense server --accept -N [Windows server hostname], run from another GPFS server node


Map the volume name to a Windows drive letter (a sketch follows below):

- mmchfs [device] -t [Windows drive letter]
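A minimal sketch, assuming a file system device named gpfs01 and the drive letter G (both illustrative; check the exact -t argument form against your GPFS level):

mmchfs gpfs01 -t G        # map the file system to drive letter G: on Windows nodes
mmmount gpfs01 -a         # mount on all nodes; appears as G: on the Windows client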


Check the drive letter of the mounted volume.

Mount all of the GPFS volumes.

Limitations of GPFS v3.3 support for Windows Server 2008:

GPFS v3.3 multicluster configurations that include Windows clients should not upgrade Windows machines to 3.3.0-3 or -4. You must install 3.3.0-5 when upgrading beyond 3.3.0-2, due to an issue with OpenSSL introduced in 3.3.0-3. Download the update package from: http://www14.software.ibm.com/webapp/set2/sas/f/gpfs/home.html

Windows nodes do not support directly accessing disks or operating as an NSD server. This function is covered in the GPFS documentation for planning purposes only. The FAQ will be updated with the tested disk device support information when it is generally available.

Support for Windows Server 2008 R2 is not yet available; the FAQ will be updated when that support arrives (planned for GPFS v3.4).

There is no migration path from Windows Server 2003 R2 (GPFS V3.2.1-5 or later) to Windows Server 2008 SP2 (GPFS V3.3).


To move Windows nodes running GPFS V3.2.1.5 or later to GPFS V3.3:

1. Remove all the Windows nodes from your cluster.
2. Uninstall GPFS 3.2.1.5 from your Windows nodes. This step is not necessary if you are reinstalling Windows Server 2008 from scratch (next step) rather than upgrading from Server 2003 R2.
3. Install Windows Server 2008 and the required prerequisites on the nodes.
4. Install GPFS 3.3 on the Windows Server 2008 nodes.
5. Migrate your AIX and Linux nodes from GPFS 3.2.1-5 or later to GPFS V3.3.
6. Add the Windows nodes back to your cluster.

User exits defined by the mmaddcallback command and the three specialized user exits provided by GPFS are not currently supported on Windows nodes.

The following GPFS commands are not supported on Windows:

mmapplypolicy, mmbackup, mmcheckquota, mmdefedquota, mmdelacl, mmeditacl, mmedquota, mmgetacl, mmlsquota, mmpmon, mmputacl, mmrepquota

The Tivoli® Storage Manager (TSM) Backup Archive client for Windows does not support unique features of GPFS file systems. TSM backup and archiving operations are supported on AIX and Linux nodes in a cluster that contains Windows. For information on TSM backup archive client support for GPFS, see the TSM documentation.

The GPFS Application Programming Interfaces (APIs) are not supported on Windows.

The native Windows backup utility is not supported.

Symbolic links that are created on UNIX-based nodes are specially handled by GPFS Windows nodes; they appear as regular files with a size of 0 and their contents cannot be accessed or modified.

GPFS on Windows nodes attempts to preserve data integrity between memory-mapped I/O and other forms of I/O on the same computation node. However, if the same file is memory mapped on more than one Windows node, data coherency is not guaranteed between the memory-mapped sections on these multiple nodes. In other words, GPFS on Windows does not provide distributed shared memory semantics. Therefore, applications that require data coherency between memory-mapped files on more than one node might not function as expected.


16. Rolling Upgrade to v3.3 from v3.2

Check the GPFS cluster configuration status.

Unmount the file system.

Shut down the GPFS daemon.

Upgrade the base package.


Check the installation status.

Install the update package.

Compile the portability layer.


Install GPFS Module

These steps are the same on every node, so there is no need to shut down the entire GPFS cluster file service while the GPFS daemon is upgraded; see the per-node sketch below. Rolling upgrade supports operation with mixed GPFS versions from v3.x onward, which is very useful for keeping customer services live. After each node's GPFS daemon has been upgraded separately, the file system version must then be changed to the latest level.
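A per-node sketch of the rolling upgrade on a Linux node, assuming GPFS 3.3 update RPMs (the package file names are illustrative):

# On one node at a time, while the rest of the cluster keeps serving I/O:
mmumount all                   # unmount GPFS file systems on this node only
mmshutdown                     # stop the GPFS daemon on this node only
rpm -Uvh gpfs.base-3.3.0-5.*.rpm gpfs.gpl-3.3.0-5.*.rpm gpfs.docs-3.3.0-5.*.rpm
cd /usr/lpp/mmfs/src && make Autoconfig && make World && make InstallImages
mmstartup                      # rejoin the cluster at the new code level
mmmount all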

Check GPFS Configuration

This warning means the license acceptance information needs to be updated; you should run the commands below.

# mmchlicense client --accept -N w1
# mmchlicense server --accept -N l1,l2

Check File System Version


Update the file system format, then check the updated file system version (see the sketch below).
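A minimal sketch, assuming a file system device named gpfs01 (illustrative):

mmchfs gpfs01 -V full     # raise the file system format to the new level (one-way change)
mmlsfs gpfs01 -V          # confirm the updated file system format version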


17. Add / Remove NSD – GPFS Maintenance

Add NSD

p1:/#>lspv
hdisk0          00ca904f908ab237    rootvg      active
hdisk1          none                gpfs1nsd
hdisk2          none                gpfs2nsd
hdisk3          none                gpfs3nsd
hdisk4          none                gpfs4nsd
hdisk5          none                gpfs5nsd
hdisk6          none                None
hdisk7          none                gpfs6nsd
hdisk8          none                gpfs7nsd
hdisk9          none                gpfs8nsd
p1:/#>

Check Disk for Make NSD

p1:/#>mmlsnsd

 File system   Disk name    NSD servers
---------------------------------------------------------------------------
 TEAM02_AIX    gpfs1nsd     p1,p2
 TEAM02_AIX    gpfs2nsd     p1,p2
 TEAM02_AIX    gpfs3nsd     p1,p2
 TEAM02_AIX    gpfs4nsd     p1,p2
 TEAM02_AIX    gpfs5nsd     p1,p2
 (free disk)   gpfs6nsd     p1,p2
 (free disk)   gpfs7nsd     p1,p2
 (free disk)   gpfs8nsd     p1,p2

Check NSD Configuration

p2:/#>mmlsdisk TEAM02_AIX -m

 Disk name     IO performed on node     Device             Availability
 ------------  -----------------------  -----------------  ------------
 gpfs1nsd      localhost                /dev/hdisk1        up
 gpfs2nsd      localhost                /dev/hdisk2        up
 gpfs3nsd      localhost                /dev/hdisk3        up
 gpfs4nsd      localhost                /dev/hdisk4        up
 gpfs5nsd      localhost                /dev/hdisk5        up

Check NSD for the TEAM02_AIX volume:

p1:/TEAM02_AIX#>mmdf TEAM02_AIX
disk                disk size  failure holds    holds           free KB             free KB
name                    in KB    group metadata data     in full blocks        in fragments
--------------- ------------- -------- -------- ----- -------------------- -------------------
Disks in storage pool: system (Maximum disk size allowed is 281 GB)
gpfs1nsd             52428800        1 yes      yes       48869632 ( 93%)          1376 ( 0%)
gpfs2nsd             52428800        1 yes      yes       48869888 ( 93%)          1072 ( 0%)
gpfs3nsd             52428800        1 yes      yes       48869376 ( 93%)          4256 ( 0%)
gpfs4nsd             52428800        1 yes      yes       48869376 ( 93%)          4952 ( 0%)
gpfs5nsd             52428800        1 yes      yes       48869632 ( 93%)          4840 ( 0%)
                -------------                         -------------------- -------------------
(pool total)        262144000                             244347904 ( 93%)         16496 ( 0%)
                =============                         ==================== ===================
(total)             262144000                             244347904 ( 93%)         16496 ( 0%)


Inode Information
-----------------
Number of used inodes:            4069
Number of free inodes:          254491
Number of allocated inodes:     258560
Maximum number of inodes:       258560
p1:/TEAM02_AIX#>

Check File System Usage

p1:/tmp/gpfs#>cat disk.desc
hdisk6:p1,p2::dataAndMetadata:1:
p1:/tmp/gpfs#>mmcrnsd -F /tmp/gpfs/disk.desc
mmcrnsd: Processing disk hdisk6
mmcrnsd: 6027-1371 Propagating the cluster configuration data to all
  affected nodes. This is an asynchronous process.

Make NSD

p1:/tmp/gpfs#>mmlsnsd

 File system   Disk name    NSD servers
---------------------------------------------------------------------------
 TEAM02_AIX    gpfs1nsd     p1,p2
 TEAM02_AIX    gpfs2nsd     p1,p2
 TEAM02_AIX    gpfs3nsd     p1,p2
 TEAM02_AIX    gpfs4nsd     p1,p2
 TEAM02_AIX    gpfs5nsd     p1,p2
 (free disk)   gpfs6nsd     p1,p2
 (free disk)   gpfs7nsd     p1,p2
 (free disk)   gpfs8nsd     p1,p2
 (free disk)   gpfs9nsd     p1,p2

Check the added NSD.

p1:/tmp/gpfs#>mmadddisk TEAM02_AIX -F /tmp/gpfs/disk.desc
GPFS: 6027-531 The following disks of TEAM02_AIX will be formatted on node p2:
    gpfs9nsd: size 52428800 KB
Extending Allocation Map
Checking Allocation Map for storage pool 'system'
  20 % complete on Wed Oct 28 10:51:42 2009
  39 % complete on Wed Oct 28 10:51:47 2009
  59 % complete on Wed Oct 28 10:51:52 2009
  78 % complete on Wed Oct 28 10:51:58 2009
  98 % complete on Wed Oct 28 10:52:03 2009
 100 % complete on Wed Oct 28 10:52:03 2009
GPFS: 6027-1503 Completed adding disks to file system TEAM02_AIX.
mmadddisk: 6027-1371 Propagating the cluster configuration data to all
  affected nodes. This is an asynchronous process.
p1:/tmp/gpfs#>

Add the new NSD to the TEAM02_AIX volume:

p1:/tmp/gpfs#>mmlsdisk TEAM02_AIX -M

 Disk name     IO performed on node     Device             Availability
 ------------  -----------------------  -----------------  ------------
 gpfs1nsd      localhost                /dev/hdisk1        up
 gpfs2nsd      localhost                /dev/hdisk2        up
 gpfs3nsd      localhost                /dev/hdisk3        up
 gpfs4nsd      localhost                /dev/hdisk4        up
 gpfs5nsd      localhost                /dev/hdisk5        up
 gpfs9nsd      localhost                /dev/hdisk6        up
p1:/tmp/gpfs#>

Check the added NSD on the configured volume.


p1:/tmp/gpfs#>mmdf TEAM02_AIX
disk                disk size  failure holds    holds           free KB             free KB
name                    in KB    group metadata data     in full blocks        in fragments
--------------- ------------- -------- -------- ----- -------------------- -------------------
Disks in storage pool: system (Maximum disk size allowed is 281 GB)
gpfs1nsd             52428800        1 yes      yes       44486912 ( 85%)          6680 ( 0%)
gpfs2nsd             52428800        1 yes      yes       44487936 ( 85%)          6368 ( 0%)
gpfs3nsd             52428800        1 yes      yes       44487680 ( 85%)         10216 ( 0%)
gpfs4nsd             52428800        1 yes      yes       44487936 ( 85%)         11304 ( 0%)
gpfs5nsd             52428800        1 yes      yes       44488960 ( 85%)          7688 ( 0%)
gpfs9nsd             52428800        1 yes      yes       51213056 ( 98%)           376 ( 0%)
                -------------                         -------------------- -------------------
(pool total)        314572800                             273652480 ( 87%)         42632 ( 0%)
                =============                         ==================== ===================
(total)             314572800                             273652480 ( 87%)         42632 ( 0%)

Inode Information
-----------------
Number of used inodes:            4082
Number of free inodes:          254478
Number of allocated inodes:     258560
Maximum number of inodes:       258560
p1:/tmp/gpfs#>

Check NSD status:

p1:/tmp/gpfs#>mmrestripefs TEAM02_AIX -b
GPFS: 6027-589 Scanning file system metadata, phase 1 ...
   3 % complete on Wed Oct 28 10:54:55 2009
   7 % complete on Wed Oct 28 10:54:58 2009
   9 % complete on Wed Oct 28 10:55:01 2009
  13 % complete on Wed Oct 28 10:55:05 2009
  16 % complete on Wed Oct 28 10:55:09 2009
  20 % complete on Wed Oct 28 10:55:13 2009

  78 % complete on Wed Oct 28 10:56:04 2009
  82 % complete on Wed Oct 28 10:56:08 2009
  86 % complete on Wed Oct 28 10:56:11 2009
  90 % complete on Wed Oct 28 10:56:14 2009
  93 % complete on Wed Oct 28 10:56:18 2009
  97 % complete on Wed Oct 28 10:56:21 2009
 100 % complete on Wed Oct 28 10:56:23 2009
GPFS: 6027-552 Scan completed successfully.
GPFS: 6027-589 Scanning file system metadata, phase 2 ...
   1 % complete on Wed Oct 28 10:56:31 2009
  34 % complete on Wed Oct 28 10:56:34 2009
  59 % complete on Wed Oct 28 10:56:37 2009
  95 % complete on Wed Oct 28 10:56:41 2009
 100 % complete on Wed Oct 28 10:56:41 2009
GPFS: 6027-552 Scan completed successfully.
GPFS: 6027-589 Scanning file system metadata, phase 3 ...
  31 % complete on Wed Oct 28 10:56:46 2009
  59 % complete on Wed Oct 28 10:56:50 2009
  87 % complete on Wed Oct 28 10:56:54 2009
 100 % complete on Wed Oct 28 10:56:55 2009
GPFS: 6027-552 Scan completed successfully.
GPFS: 6027-589 Scanning file system metadata, phase 4 ...
GPFS: 6027-552 Scan completed successfully.
GPFS: 6027-565 Scanning user file metadata ...
  99 % complete on Tue Oct 27 21:25:37 2009
 100 % complete on Tue Oct 27 21:42:01 2009
GPFS: 6027-552 Scan completed successfully.

This command restripes the volume, rebalancing data onto the new NSD.


p1:/tmp/gpfs#>mmdf TEAM02_AIX
disk                disk size  failure holds    holds           free KB             free KB
name                    in KB    group metadata data     in full blocks        in fragments
--------------- ------------- -------- -------- ----- -------------------- -------------------
Disks in storage pool: system (Maximum disk size allowed is 281 GB)
gpfs1nsd             52428800        1 yes      yes       29172480 ( 56%)         10984 ( 0%)
gpfs2nsd             52428800        1 yes      yes       29171200 ( 56%)         12704 ( 0%)
gpfs3nsd             52428800        1 yes      yes       29164544 ( 56%)         15296 ( 0%)
gpfs4nsd             52428800        1 yes      yes       29163008 ( 56%)         15352 ( 0%)
gpfs5nsd             52428800        1 yes      yes       29160960 ( 56%)         10720 ( 0%)
gpfs9nsd             52428800        1 yes      yes       29792256 ( 57%)          6592 ( 0%)
                -------------                         -------------------- -------------------
(pool total)        314572800                             175624448 ( 56%)         71648 ( 0%)
                =============                         ==================== ===================
(total)             314572800                             175624448 ( 56%)         71648 ( 0%)

Inode Information
-----------------
Number of used inodes:            4096
Number of free inodes:          254464
Number of allocated inodes:     258560
Maximum number of inodes:       258560
p1:/tmp/gpfs#>

Remove NSD: to remove gpfs1nsd, you must first suspend it to block new disk I/O. Use the mmchdisk command for this.

Ex) mmchdisk TEAM02_AIX suspend -d gpfs1nsd
The suspend option makes the assigned NSD read-only for new allocations. After confirming the NSD is in suspended mode, run the restripe command with the -r option.

p1:/tmp/gpfs#>mmchdisk TEAM02_AIX suspend -d gpfs1nsd

Set to suspend NSD

p1:/tmp/gpfs#>mmlsdisk TEAM02_AIX
disk         driver   sector failure holds    holds                            storage
name         type       size   group metadata data  status        availability pool
------------ -------- ------ ------- -------- ----- ------------- ------------ ------------
gpfs1nsd     nsd         512       1 yes      yes   suspended     up           system
gpfs2nsd     nsd         512       1 yes      yes   ready         up           system
gpfs3nsd     nsd         512       1 yes      yes   ready         up           system
gpfs4nsd     nsd         512       1 yes      yes   ready         up           system
gpfs5nsd     nsd         512       1 yes      yes   ready         up           system
gpfs9nsd     nsd         512       1 yes      yes   ready         up           system
GPFS: 6027-741 Attention: Due to an earlier configuration change the file system
may contain data that is at risk of being lost.
p1:/tmp/gpfs#>

Check that the suspend option was applied.

p1:/tmp/gpfs#>mmrestripefs TEAM02_AIX -r
GPFS: 6027-589 Scanning file system metadata, phase 1 ...
GPFS: 6027-552 Scan completed successfully.
GPFS: 6027-589 Scanning file system metadata, phase 2 ...
GPFS: 6027-552 Scan completed successfully.
GPFS: 6027-589 Scanning file system metadata, phase 3 ...
GPFS: 6027-552 Scan completed successfully.
GPFS: 6027-589 Scanning file system metadata, phase 4 ...
GPFS: 6027-552 Scan completed successfully.


GPFS: 6027-565 Scanning user file metadata ...
  99 % complete on Tue Oct 27 21:49:23 2009
 100 % complete on Tue Oct 27 22:01:31 2009
GPFS: 6027-552 Scan completed successfully.

Rebalance the NSD volume: this command moves the data from the suspended NSD onto the other NSDs.

p1:/tmp/gpfs#>mmchdisk TEAM02_AIX stop -d gpfs1nsd
p1:/tmp/gpfs#>

Stop NSD Service

p1:/tmp/gpfs#>mmlsdisk TEAM02_AIX
disk         driver   sector failure holds    holds                            storage
name         type       size   group metadata data  status        availability pool
------------ -------- ------ ------- -------- ----- ------------- ------------ ------------
gpfs1nsd     nsd         512       1 yes      yes   suspended     down         system
gpfs2nsd     nsd         512       1 yes      yes   ready         up           system
gpfs3nsd     nsd         512       1 yes      yes   ready         up           system
gpfs4nsd     nsd         512       1 yes      yes   ready         up           system
gpfs5nsd     nsd         512       1 yes      yes   ready         up           system
gpfs9nsd     nsd         512       1 yes      yes   ready         up           system
GPFS: 6027-739 Attention: Due to an earlier configuration change the file system
is no longer properly balanced.

Check that gpfs1nsd is stopped:

p1:/tmp/gpfs#>mmdf TEAM02_AIX
disk                disk size  failure holds    holds           free KB             free KB
name                    in KB    group metadata data     in full blocks        in fragments
--------------- ------------- -------- -------- ----- -------------------- -------------------
Disks in storage pool: system (Maximum disk size allowed is 281 GB)
gpfs1nsd             52428800        1 yes      yes       52359936 (100%)           248 ( 0%)
gpfs2nsd             52428800        1 yes      yes       24581376 ( 47%)         15920 ( 0%)
gpfs3nsd             52428800        1 yes      yes       24487168 ( 47%)         16696 ( 0%)
gpfs4nsd             52428800        1 yes      yes       24495104 ( 47%)         17984 ( 0%)
gpfs5nsd             52428800        1 yes      yes       24557824 ( 47%)         14816 ( 0%)
gpfs9nsd             52428800        1 yes      yes       25102592 ( 48%)         11360 ( 0%)
                -------------                         -------------------- -------------------
(pool total)        262144000                             123224064 ( 47%)         76776 ( 0%)
                =============                         ==================== ===================
(total)             262144000                             123224064 ( 47%)         76776 ( 0%)

Inode Information
-----------------
Number of used inodes:            4096
Number of free inodes:          254464
Number of allocated inodes:     258560
Maximum number of inodes:       258560

Check File System status

p1:/tmp/gpfs#>mmdeldisk TEAM02_AIX gpfs1nsd
Deleting disks ...
Scanning system storage pool
GPFS: 6027-589 Scanning file system metadata, phase 1 ...
GPFS: 6027-552 Scan completed successfully.
GPFS: 6027-589 Scanning file system metadata, phase 2 ...
GPFS: 6027-552 Scan completed successfully.
GPFS: 6027-589 Scanning file system metadata, phase 3 ...
GPFS: 6027-552 Scan completed successfully.
GPFS: 6027-589 Scanning file system metadata, phase 4 ...
GPFS: 6027-552 Scan completed successfully.


GPFS: 6027-565 Scanning user file metadata ...
 100 % complete on Tue Oct 27 22:08:19 2009
GPFS: 6027-552 Scan completed successfully.
GPFS: 6027-379 Could not invalidate disk(s).
Checking Allocation Map for storage pool 'system'
GPFS: 6027-370 tsdeldisk64 completed.
mmdeldisk: 6027-1371 Propagating the cluster configuration data to all
  affected nodes. This is an asynchronous process.

Remove NSD Disk

p1:/tmp/gpfs#>mmlsdisk TEAM02_AIX
disk         driver   sector failure holds    holds                            storage
name         type       size   group metadata data  status        availability pool
------------ -------- ------ ------- -------- ----- ------------- ------------ ------------
gpfs2nsd     nsd         512       1 yes      yes   ready         up           system
gpfs3nsd     nsd         512       1 yes      yes   ready         up           system
gpfs4nsd     nsd         512       1 yes      yes   ready         up           system
gpfs5nsd     nsd         512       1 yes      yes   ready         up           system
gpfs9nsd     nsd         512       1 yes      yes   ready         up           system
GPFS: 6027-739 Attention: Due to an earlier configuration change the file system
is no longer properly balanced.

Check NSD Configuration

p1:/tmp/gpfs#>dd if=/dev/zero of=/dev/hdisk1 bs=1024k count=100
p1:/tmp/gpfs#>rmdev -rdl hdisk1
p1:/tmp/gpfs#>cfgmgr

If you want to reuse this device as an NSD, you must clear the first blocks of the device (hence dd from /dev/zero above), because the GPFS daemon writes the NSD descriptor at the beginning of the disk.


18. Cross over GPFS Mount

In this scenario, two GPFS clusters cross-mount each other's volumes.

- AIX 2 Node Cluster Nodes c1, c2

- Linux 2 Node Cluster Nodes c3, c4

- Windows 1 Node Client: node c5

Cluster 1 configuration:

GPFS cluster information
========================
  GPFS cluster name:         team03-aix.c1
  GPFS cluster id:           13358913210395628267
  GPFS UID domain:           team03-aix.c1
  Remote shell command:      /usr/bin/ssh
  Remote file copy command:  /usr/bin/scp

GPFS cluster configuration servers:
-----------------------------------
  Primary server:    c1
  Secondary server:  c2

 Node  Daemon node name  IP address       Admin node name  Designation
-----------------------------------------------------------------------
    1  c1                185.100.100.201  c1               quorum-manager
    2  c2                185.100.100.202  c2               quorum-manager
    5  c5                185.100.100.206  c5

 File system   Disk name        NSD servers
---------------------------------------------------------------------------
 team3-aix     team3_aix_nsd1   c1,c2
 team3-aix     team3_aix_nsd2   c1,c2
 team3-aix     team3_aix_nsd3   c1,c2
 team3-aix     team3_aix_nsd4   c1,c2
 team3-aix     team3_aix_tb1    (directly attached)
 team3-aix     team3_aix_tb2    (directly attached)
 team3-aix     team3_aix_tb3    (directly attached)
 team3-aix2    team3_aix2_nsd1  c1,c2
 team3-aix2    team3_aix2_nsd2  c1,c2
 team3-aix2    team3_aix2_nsd3  c1,c2

The team3-aix2 volume will be used for the cross mount.

Cluster 2 configuration:

GPFS cluster information
========================
  GPFS cluster name:         team3_l.c3
  GPFS cluster id:           13358913218985615388
  GPFS UID domain:           team3_l.c3
  Remote shell command:      /usr/bin/ssh
  Remote file copy command:  /usr/bin/scp


GPFS cluster configuration servers:
-----------------------------------
  Primary server:    c3
  Secondary server:  c4

 Node  Daemon node name  IP address       Admin node name  Designation
-----------------------------------------------------------------------
    1  c3                185.100.100.203  c3               quorum-manager
    2  c4                185.100.100.204  c4               quorum-manager

 File system   Disk name        NSD servers
---------------------------------------------------------------------------
 team3-lix     team3_lix_nsd1   c3,c4
 team3-lix     team3_lix_nsd2   c3,c4
 team3-lix     team3_lix_nsd3   c3,c4

The team3-lix volume will be used for the cross mount.

Generate Security Key on Cluster 1

Generate Security Key on Cluster 2

Copy Security Key to Cluster 1(AIX) from Cluster 2(Linux)

Copy Security Key to Cluster 2(Linux) from Cluster 1(AIX)

Add the remote cluster's RSA key on each cluster (see the sketch below). TIPS) mmauth add [remote cluster name] -k [received RSA key file]
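A minimal sketch of the key exchange and registration, using the cluster names from the listings above (file paths are illustrative):

# On each cluster, generate a cluster key pair:
mmauth genkey new                        # creates /var/mmfs/ssl/id_rsa.pub
# Exchange the public key files between the clusters, for example:
scp /var/mmfs/ssl/id_rsa.pub c3:/tmp/team03-aix.pub

# On the Linux cluster, register the AIX cluster's key:
mmauth add team03-aix.c1 -k /tmp/team03-aix.pub
# On the AIX cluster, register the Linux cluster's key:
mmauth add team3_l.c3 -k /tmp/team3_l.pub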


Check the authorization status on each node of the AIX cluster.

Check the authorization status on each node of the Linux cluster.

c1:/#>mmshutdown -a

AIX GPFS Cluster Shutdown

[root@c3 ] mmshutdown -a

Linux GPFS Cluster Shutdown

Change GPFS Cluster Configuration

Grant access authority on each cluster. TIP) mmauth grant [remote cluster name] -f [local GPFS block device] -a rw (read & write)


Add the remote cluster on each cluster. TIPS) mmremotecluster add [remote cluster name] -n [remote cluster contact nodes] -k [RSA key file]

Add the file system (see the sketch below). TIPS) mmremotefs add [local device name] -f [device name in the owning cluster] -T [mount point] -A no -C [remote cluster name]
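A minimal end-to-end sketch from the Linux cluster's side, using the names from the listings above (the mount point is illustrative):

# Define the remote AIX cluster and its file system, then mount it:
mmremotecluster add team03-aix.c1 -n c1,c2 -k /tmp/team03-aix.pub
mmremotefs add team3-aix2 -f team3-aix2 -C team03-aix.c1 -T /gpfs/team3-aix2 -A no
mmmount team3-aix2 -a          # mount the remote file system on all nodes
mmremotecluster show all       # verify the remote cluster definition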

Check the properties of the remote cluster.

Check the remote GPFS volume.


19. Failure group and GPFS Replication

Do you want to use replication on GPFS? Then, before making the file system, prepare the NSD configuration for file system replication: GPFS writes the replicated block I/O across NSD failure groups.

# hdisk1:c1:c2:dataAndMetadata:1:team3_aix_nsd1
team3_aix_nsd1:::dataAndMetadata:1::

# hdisk2:c1:c2:dataAndMetadata:1:team3_aix_nsd2
team3_aix_nsd2:::dataAndMetadata:1::

# hdisk3:c1:c2:dataAndMetadata:1:team3_aix_nsd3
team3_aix_nsd3:::dataAndMetadata:1::

# hdisk4:c1:c2:dataAndMetadata:2:team3_aix_nsd4
team3_aix_nsd4:::dataAndMetadata:2::

# hdisk5:c1:c2:dataAndMetadata:2:team3_aix_nsd5
team3_aix_nsd5:::dataAndMetadata:2::

# hdisk6:c1:c2:dataAndMetadata:2:team3_aix_nsd6
team3_aix_nsd6:::dataAndMetadata:2::

# hdisk7:c1:c2:dataAndMetadata:3:team3_aix_nsd7
team3_aix_nsd7:::dataAndMetadata:3::

# hdisk8:c1:c2:dataAndMetadata:3:team3_aix_nsd8
team3_aix_nsd8:::dataAndMetadata:3::

# hdisk9:c1:c2:dataAndMetadata:3:team3_aix_nsd9
team3_aix_nsd9:::dataAndMetadata:3::

This is the NSD configuration for replication. Create the file system (a creation sketch follows below), then copy in a single 7 GB file. The file system consumes 14 GB (450 - 14 = 436 GB free) because replication writes every block twice.
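A minimal creation sketch, assuming the descriptor file above is saved as /tmp/gpfs/repl.desc and the device and mount point names are illustrative:

mmcrnsd -F /tmp/gpfs/repl.desc      # create the NSDs across three failure groups
# -m/-M: default/max metadata replicas; -r/-R: default/max data replicas
mmcrfs /team3-aix team3-aix -F /tmp/gpfs/repl.desc -m 2 -M 2 -r 2 -R 2
mmmount team3-aix -a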


Check the current NSD configuration and disk usage.

Remove the second failure group.

Check the removal status.


Do not use the rebalance command here; simply adding the new NSD changes the failure group information of the file system. To configure replication, the minimum configuration is three storage boxes, or two storage boxes plus a descriptor-only disk, which acts much like a tiebreaker; a sketch of such a descriptor entry follows below.
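A hedged sketch of a descriptor-only entry (disk name, servers, and failure group are illustrative); a descOnly disk holds only a copy of the file system descriptor, no data or metadata:

# DiskName:PrimaryServer:BackupServer:DiskUsage:FailureGroup:DesiredName
hdisk10:c1:c2:descOnly:4:team3_aix_desc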

Considerations for failover time on a GPFS cluster during I/O service:

- HBA driver options, such as the timeout value and failure recognition time
- Multipath driver options and configuration, such as RDAC or MPIO
- The storage cache policy must be disabled for volume integrity
- Cabling design with the SAN switches, and the AVT function of the storage box
- All component driver modules must be at the latest level


20. End of This BP Residency

Education System

The topic of the next GPFS residency program may be a GPFS/ILM solution with IBM Tivoli products. This diagram shows an integrated GPFS/TSM architecture.


Trademarks IBM, the IBM Logo, BladeCenter, DS4000, eServer, and System x are trademarks of International Business Machines Corporation in the United States, other countries, or both. For a complete list of IBM Trademarks, see http://www.ibm.com/legal/copytrade.shtml. Microsoft and Windows are trademarks of Microsoft Corporation in the United States, other countries, or both. Other company, product, or service names may be trademarks or service marks of others.