Setup PT Cluster Node


    1. Configure the node that is designated as Node A. In a single-node configuration, this is the only node. In a cluster configuration, this is the lower node in the rack. The wiring and labeling performed by the installation personnel are predicated on this. FSI mode is not clustered.
    2. Configure the backend storage from Node A. (Node B is powered off for this procedure.)
    3. Build the repository.
    4. Configure Node B. This step clusters Node B with Node A.
    5. Set the time and timeserver.
    6. Perform validation, if in a cluster.
    7. Build the Library, Storage Server (STS), or File System.

    1.2. Checkpoints

    Be sure that you are working on the right machine by checking the system serial number. Use the command /usr/sbin/dmidecode -t 1. Start the installation with the node that is physically installed as the primary node (lower node).

    [root@DYB-E4ZG2VTLN1-61 log]# dmidecode -t1
    # dmidecode 2.11
    SMBIOS 2.7 present.

    Handle 0x0054, DMI type 1, 27 bytes
    System Information
        Manufacturer: IBM
        Product Name: Systemx3850 X5-[7143PEA]-
        Version: 06
        Serial Number: KQ2R7AD
        UUID: DC9E3336-BEF6-34D4-8A2D-DE5F3D17BC11
        Wake-up Type: Power Switch
        SKU Number: Not Specified
        Family: System X
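
    If you only want the serial number, to compare it against the label on the chassis, a minimal variant of the same check is:

    dmidecode -t 1 | grep -i "serial number"    # should print the Serial Number line shown above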

    Check LUN assignment:

    [root@DYB-E4ZG2VTLN1-61 log]# cat /proc/scsi/scsi
    Attached devices:
    Host: scsi0 Channel: 02 Id: 00 Lun: 00
      Vendor: IBM      Model: ServeRAID M5015   Rev: 2.13
      Type:   Direct-Access                     ANSI SCSI revision: 05
    Host: scsi3 Channel: 00 Id: 00 Lun: 00
      Vendor: IBM      Model: 2145              Rev: 0000
      Type:   Direct-Access                     ANSI SCSI revision: 06
    Host: scsi3 Channel: 00 Id: 00 Lun: 01
      Vendor: IBM      Model: 2145              Rev: 0000
      Type:   Direct-Access                     ANSI SCSI revision: 06
    Host: scsi3 Channel: 00 Id: 00 Lun: 02
      Vendor: IBM      Model: 2145              Rev: 0000
      Type:   Direct-Access                     ANSI SCSI revision: 06
    Host: scsi3 Channel: 00 Id: 00 Lun: 03
      Vendor: IBM      Model: 2145              Rev: 0000
      Type:   Direct-Access                     ANSI SCSI revision: 06
    Host: scsi3 Channel: 00 Id: 00 Lun: 04
      Vendor: IBM      Model: 2145              Rev: 0000
      Type:   Direct-Access                     ANSI SCSI revision: 06
    Host: scsi3 Channel: 00 Id: 00 Lun: 05
      Vendor: IBM      Model: 2145              Rev: 0000
      Type:   Direct-Access                     ANSI SCSI revision: 06
    Host: scsi3 Channel: 00 Id: 00 Lun: 06
      Vendor: IBM      Model: 2145              Rev: 0000
      Type:   Direct-Access                     ANSI SCSI revision: 06
    Host: scsi3 Channel: 00 Id: 00 Lun: 07
      Vendor: IBM      Model: 2145              Rev: 0000
      Type:   Direct-Access                     ANSI SCSI revision: 06
    Host: scsi3 Channel: 00 Id: 00 Lun: 08
      Vendor: IBM      Model: 2145              Rev: 0000
    ....
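
    A quick way to tally what the SCSI layer sees is the following one-liner (a minimal sketch against the /proc/scsi/scsi listing above; each backend LUN appears once per path, so the count should equal the number of 2145 LUNs multiplied by the number of backend paths):

    grep -c "Model: 2145" /proc/scsi/scsi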

    Check that the devices are managed correctly by the Linux multipathd driver:

    [root@DYB-E4ZG2VTLN1-61 ~]# multipath -ll
    mpath2 (36005076802860cd64000000000000003) dm-6 IBM,2145
    [size=1.0T][features=1 queue_if_no_path][hwhandler=0][rw]
    \_ round-robin 0 [prio=200][active]
     \_ 3:0:1:2  sdag 66:0     [active][ready]
     \_ 4:0:1:2  sdec 128:64   [active][ready]
     \_ 5:0:1:2  sdhy 134:128  [active][ready]
     \_ 6:0:1:2  sdlu 68:448   [active][ready]
    \_ round-robin 0 [prio=40][enabled]
     \_ 4:0:0:2  sdcz 70:112   [active][ready]
     \_ 3:0:0:2  sdd  8:48     [active][ready]
     \_ 5:0:0:2  sdgv 132:176  [active][ready]
     \_ 6:0:0:2  sdkr 66:496   [active][ready]
    mpath38 (36005076802818bf8d800000000000009) dm-38 IBM,2145
    [size=5.7T][features=1 queue_if_no_path][hwhandler=0][rw]
    \_ round-robin 0 [prio=200][active]
     \_ 3:0:2:9  sdbq 68:64    [active][ready]
     \_ 4:0:2:9  sdfm 130:128  [active][ready]
     \_ 5:0:2:9  sdji 8:448    [active][ready]
     \_ 6:0:2:9  sdne 71:256   [active][ready]
    \_ round-robin 0 [prio=40][enabled]
     \_ 3:0:3:9  sdcl 69:144   [active][ready]
     \_ 4:0:3:9  sdgh 131:208  [active][ready]
     \_ 5:0:3:9  sdkd 66:272   [active][ready]
     \_ 6:0:3:9  sdnz 128:336  [active][ready]
    mpath23 (36005076802860cd64000000000000019) dm-27 IBM,2145
    [size=5.7T][features=1 queue_if_no_path][hwhandler=0][rw]
    \_ round-robin 0 [prio=200][active]
     \_ 4:0:0:23 sddu 71:192   [active][ready]
     \_ 5:0:0:23 sdhq 134:0    [active][ready]
     \_ 6:0:0:23 sdlm 68:320   [active][ready]
     \_ 3:0:0:23 sdy  65:128   [active][ready]
    \_ round-robin 0 [prio=40][enabled]
     \_ 3:0:1:23 sdbb 67:80    [active][ready]
     \_ 4:0:1:23 sdex 129:144  [active][ready]
     \_ 5:0:1:23 sdit 135:208  [active][ready]
     \_ 6:0:1:23 sdmp 70:272   [active][ready]
    mpath40 (36005076802818bf8d80000000000000b) dm-40 IBM,2145
    [size=5.7T][features=1 queue_if_no_path][hwhandler=0][rw]
    \_ round-robin 0 [prio=200][active]
     \_ 3:0:2:11 sdbs 68:96    [active][ready]
     \_ 4:0:2:11 sdfo 130:160  [active][ready]
     \_ 5:0:2:11 sdjk 8:480    [active][ready]
     \_ 6:0:2:11 sdng 71:288   [active][ready]
    \_ round-robin 0 [prio=40][enabled]
     \_ 3:0:3:11 sdcn 69:176   [active][ready]
     \_ 4:0:3:11 sdgj 131:240  [active][ready]
     \_ 5:0:3:11 sdkf 66:304   [active][ready]
     \_ 6:0:3:11 sdob 128:368  [active][ready]
    ...
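
    To summarize this output, the following one-liner counts the active paths behind each multipath device (a sketch that assumes the output format shown above; in this configuration each device should report eight paths, four per priority group):

    multipath -ll | awk '/IBM,2145/ {dev=$1} /\[active\]\[ready\]/ {paths[dev]++} END {for (d in paths) print d, paths[d], "paths"}'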

    1.3. TS7650G Node 1 Install

    Attention: Before you begin the ProtecTIER software configuration, confirm that the attached disk storage has been properly configured for use with the TS7650G. Failure to do so could result in the Red Hat Linux operating system having to be reinstalled on one or more of the TS7650G servers.

    Prerequisites:

    TCP ports 6520, 6530, 6540, 6550, 3501, and 3503 are open in the customer's firewall. Each ProtecTIER server being used for replication must allow TCP access through these ports.
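
    A quick way to verify this prerequisite from the shell is a small port scan with nc (a sketch; the peer address is a placeholder to replace with the actual replication peer, and nc must be installed):

    # <replication_peer_ip> is a placeholder for the remote ProtecTIER replication IP
    for port in 6520 6530 6540 6550 3501 3503; do
        nc -z -w 3 <replication_peer_ip> "$port" && echo "port $port open" || echo "port $port BLOCKED"
    done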

    If you are in a dual-node configuration, the second node MUST BE powered off.

    You have acquired, or know where to locate, the server information about the customer's LAN and replication network.

    Note: The above information can be found on the completed IP Address Worksheet, located in the IBM System Storage TS7600 with ProtecTIER Introduction and Planning Guide for the TS7650G (3958 DD4).

    Once you have validated those points, proceed with the installation and configuration of the TS7650G.

    Connect to the system as user ptconfig (password ptconfig). The installer menu appears automatically; if it does not, issue the menu command.

    Choose option 1) ProtecTIER Configuration

    Then, on the next menu, again choose option 1) Configure ProtecTIER node


    2. Backend Zoning information

    The zoning between the ProtecTIER nodes and the V7000 follows the SAN zoning guidelines.


    3. File system layout creation

    Now we create the file systems using the menu tool.

    Log on to the primary node (node1) as ptadmin; the menu tool starts automatically. If it does not, issue the menu command.

    Select the following options:

    | 1) ProtecTIER Configuration (...) |

    | 7) File Systems Management (...) |

    | 1) Configure file systems on all available devices |

    This option creates the file system layout on all available mpath devices defined on this node.

    Once the layout is done, use the fifth option in the file system menu, as shown below, to display the file system layout:


    Once this task is done, proceed with the repository creation.
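
    Before proceeding, a quick shell-side sanity check is to confirm what the layout step created (a sketch, assuming the file systems are built as LVM volumes on the mpath devices, as the pvdisplay output later in this document suggests):

    lvs -o vg_name,lv_name,lv_size    # logical volumes created by the layout step
    df -h | grep /dev/mapper          # file systems that are already mounted, if any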

    4. Repository creation

    4.1. Planning

    We performed our repository sizing plan using the PT Planner Tool (https://w3-connections.ibm.com/communities/service/html/communityview?communityUuid=c37424c5-7cf6-449f-8a36-b418c85c466f#fullpageWidgetId%3DWbb131d2c8fb04d4694b62717bc55af9e%26file%3Df575b89f-3aad4e1d-85aa-ddce89c6e0a9).

    Below is the estimated ProtecTIER performance with the hardware purchased.

    The performance assumption is a deduplication ratio of 8:1, with an overall throughput of 1100 MB/s, running on the available hardware (24x 300 GB 15K + 105x 1.2 TB NL-SAS).
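
    As a rough illustration of what these figures mean (the physical repository size below is hypothetical, not taken from this installation): with an 8:1 factoring ratio, a 100 TB physical repository represents about 100 TB x 8 = 800 TB of nominal backup data, and at a sustained 1100 MB/s the system can ingest at most roughly 95 TB of nominal data per 24-hour window (1100 MB/s x 86,400 s).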


    4.2. Creation using PT manager GUI

    Here is the procedure to create and format the repository using the GUI. Start the wizard using Menu > Repository > Create repository.


    Click Finish to proceed with the repository creation.


    In our configuration, the repository creation operation took 12 hours. This type of operation runs in the background and prevents any other operation through the PT manager GUI for the involved PT cluster.

    4.3. Repository padding performance information


    In this chapter we give some statistics of storage utilization while formatting the repository.
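
    The charts referenced below were captured from the V7000 GUI. A rough node-side view of the same activity can be obtained with iostat (a sketch, assuming the sysstat package is installed on the ProtecTIER node):

    iostat -dmx 60    # extended per-device statistics in MB/s, one sample every 60 seconds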

    4.3.1. V7000 UD + MD

    Activity when the padding operation started (showing MB/s):

    Activity when the padding operation is in progress, after one hour (showing IOPS):

    (Showing I/O)


    4.3.2. V7000 for UD only

    Activity when the padding operation started (showing MB/s):

    (Showing I/O)


    Activity when the padding operation is in progress, after one hour (showing IOPS):

    4.4. Manual distribution of the file systems to create the repository

    Usually the PT manager GUI assigns the correct file systems to the correct repository part (UD or MD), based on the size of the file systems/LUNs.

    If the size of the LUNs prevents this from working as expected, you have to determine the correct layout manually. Follow these steps.

    Check the serial numbers of the RAID 10 LUNs using the multipath -ll command on the ProtecTIER node. In our case, the LUNs that are RAID 10 formatted have 042 in their serial number:


    [root@MOPB4VTLN142 sbin]# multipath -ll | grep mpath | grep "042.."
    mpath2 (36005076306ffc06c0000000000004202) dm-42 IBM,2107900
    mpath1 (36005076306ffc06c0000000000004201) dm-40 IBM,2107900
    mpath0 (36005076306ffc06c0000000000004200) dm-37 IBM,2107900

    Then, find the link between the PV (mpath) and the LV (displayed in the PT GUI). Command to use:

    pvdisplay /dev/mapper/mpath2p1

    [root@MOPB4VTLN142 sbin]# multipath -ll | grep mpath | grep "042.." | awk '{ print "pvdisplay /dev/mapper/"$1"p1 | grep -E \"PV Name|VG Name\"" }'
    pvdisplay /dev/mapper/mpath2p1 | grep -E "PV Name|VG Name"
    pvdisplay /dev/mapper/mpath1p1 | grep -E "PV Name|VG Name"
    pvdisplay /dev/mapper/mpath0p1 | grep -E "PV Name|VG Name"

    [root@MOPB4VTLN142 sbin]# pvdisplay /dev/mapper/mpath2p1 | grep -E "PV Name|VG Name"
      PV Name               /dev/mapper/mpath2p1
      VG Name               vg0
    [root@MOPB4VTLN142 sbin]# pvdisplay /dev/mapper/mpath1p1 | grep -E "PV Name|VG Name"
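
    A compact variant of the same lookup (a sketch that simply loops over the filtered devices instead of running each pvdisplay by hand):

    multipath -ll | grep mpath | grep "042.." | awk '{print $1}' | while read m; do
        echo "== $m =="
        pvdisplay "/dev/mapper/${m}p1" | grep -E "PV Name|VG Name"
    done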

    Then, use the PT manager GUI repository creation wizard to distribute the correct LUNs to the correct part of the repository: RAID 10 LUNs as MD members, the others as UD members.

    5. Clustered configuration

    If you have a clustered environment, you have to configure the second node, then have it join the cluster, and finally perform the fence tests.

    5.1. Join Node 2 to the cluster

    /!\ Notice that NODE 1 might be fenced at NODE 2 startup.

    Connect to NODE 1 and stop the PT services, or this will be done via the menu tool from NODE 2 while configuring NODE 2.

    Once the services have been stopped, continue on NODE 2.

    Connect to NODE 2 as the ptconfig user and start the menu. Enter your parameters (IP, mask, gateway, name) when asked.


    You can skip the fence test at installation and do it later using the menu tool. Answer Q at the fence test question to validate the cluster configuration.

    Below is a figure of what occurs when selecting the fence test during the second node configuration (Node 1 is fenced):


    To ensure the fence mechanism works well, the fence test has to be executed from each node of the cluster. This can be done at any time using the menu tool (option 12).
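
    If the standard Red Hat Cluster Suite tooling is available on the nodes (an assumption; the menu tool remains the supported interface), clustat gives a quick view of member status after each test:

    clustat    # both nodes should report "Online" again once the fenced node has rejoined the cluster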

    /!\ Caution: please note the following when you are doing reboot/fence tests. Avoid performing the two fence tests within one hour, to avoid this error:

    Tue May 15 23:08:41 CEST 2012 cmgnt 14103 (14103) 5020: The Cluster Manager detected that there have been more than 2 unscheduled restarts in the last 60 minutes. It has stopped the process of bringing up the services to prevent endless reboot cycles.

    5.2. Validate the cluster configuration

    This chapter shows how to validate the cluster configuration: simply use the menu tool, ProtecTIER Configuration, then the Validate configuration option.


    A fence test notification appears in the notification center within the PT manager GUI. The software alert button turns red and blinks. The following message is reported: