Facebook Lightning Hardware System
Specification V0.15
Authors:
Mike Yan, Hardware Engineer
Chris Petersen, Hardware System Engineer
February 5th, 2016
© 2016 Facebook.
As of Oct 24, 2014, the following persons or entities have made this Specification available under the Open Web Foundation Final Specification Agreement (OWFa 1.0), which is available at: http://www.openwebfoundation.org/legal/the-owf-1-0-agreements/owfa-1-0:
Facebook, Inc.
You can review the signed copies of the Open Web Foundation Agreement Version 1.0 for this Specification at http://opencompute.org/licensing/, which may also include additional parties to those listed above.
Your use of this Specification may be subject to other third party rights. THIS SPECIFICATION IS PROVIDED "AS IS." The contributors expressly disclaim any warranties (express, implied, or otherwise), including implied warranties of merchantability, noninfringement, fitness for a particular purpose, or title, related to the Specification. The entire risk as to implementing or otherwise using the Specification is assumed by the Specification implementer and user. IN NO EVENT WILL ANY PARTY BE LIABLE TO ANY OTHER PARTY FOR LOST PROFITS OR ANY FORM OF INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES OF ANY CHARACTER FROM ANY CAUSES OF ACTION OF ANY KIND WITH RESPECT TO THIS SPECIFICATION OR ITS GOVERNING AGREEMENT, WHETHER BASED ON BREACH OF CONTRACT, TORT (INCLUDING NEGLIGENCE), OR OTHERWISE, AND WHETHER OR NOT THE OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
CONTRIBUTORS AND LICENSORS OF THIS SPECIFICATION MAY HAVE MENTIONED CERTAIN TECHNOLOGIES THAT ARE MERELY REFERENCED WITHIN THIS SPECIFICATION AND NOT LICENSED UNDER THE OWF CLA OR OWFa. THE FOLLOWING IS A LIST OF MERELY REFERENCED TECHNOLOGY: INTELLIGENT PLATFORM MANAGEMENT INTERFACE (IPMI), I2C TRADEMARK OF PHILLIPS SEMICONDUCTOR. IMPLEMENTATION OF THESE TECHNOLOGIES MAY BE SUBJECT TO THEIR OWN LEGAL TERMS.
Open Compute Project • Lightning • v0.15
http://opencompute.org
1 Contents

1 Contents ... 3
2 Scope ... 6
3 Overview ... 6
3.1 Reference Document ... 6
4 Lightning Hardware System Overview ... 7
4.1 Lightning Key Features ... 7
4.2 Key Accessible Items ... 9
4.3 System Component Layout ... 10
4.4 Host Connection Block Diagram ... 12
4.5 PCIe Block Diagram ... 12
4.6 System I2C Topology ... 14
4.7 PCIe Clock Tree ... 15
4.8 PCIe Sideband Signals ... 17
4.9 PCIe Lanes Signal ... 18
4.10 SSD Cartridge Power Control ... 18
5 Lightning PCIe Expansion Board ... 18
5.1 Block Diagram ... 18
5.2 PEB Form Factor and Placement ... 19
5.3 PCIe Switch Requirements ... 21
5.4 Voltage Monitor ... 22
5.5 Connectors ... 22
5.6 Switch and Buttons ... 25
5.7 LEDs ... 25
5.8 PCB Stack-up ... 27
6 Lightning PCIe Drive Plane Board ... 27
6.1 Block Diagram of PDPB ... 27
6.2 PDPB Form Factor and Placement ... 28
6.3 Thermal Sensors ... 29
6.4 Voltage Monitor ... 30
6.5 PCIe Re-drivers ... 31
6.6 Connectors ... 31
6.7 LEDs ... 34
6.8 PCB Stack-up ... 34
7 Lightning Power System ... 35
7.1 Board Level Power Budget ... 35
7.2 System Level Power Budget ... 35
7.3 PEB Bulk Converter Solutions ... 35
7.4 Power Sequencing ... 35
7.5 Power Button ... 35
8 Mechanical of Tray and SSD Cartridge ... 36
8.1 Changes for Tray Latches ... 36
8.2 SSD Connectors and Cartridges ... 36
8.3 Single SSD Cartridge (15 mm) ... 37
8.4 Single SSD Cartridge (7 mm) ... 37
8.5 Dual M.2 Module Cartridge ... 38
8.6 PCIe Expansion Board PCB Thickness ... 40
8.7 Silk Screen ... 40
8.8 PCB Color ... 40
9 Baseboard Management Controller (BMC) ... 40
9.1 Overview ... 40
9.2 Host to Lightning BMC (Prioritized) ... 41
9.3 Lightning BMC Support ... 42
9.4 BMC Heartbeat ... 42
9.5 BMC Watchdog Timer ... 43
9.6 BMC UART ... 43
9.7 Debug Support ... 43
9.8 BMC Firmware Update Approach ... 43
9.9 Multiple Hosts ... 44
9.10 Error Code Display on Debug Card ... 44
10 High Level System Consideration ... 45
10.1 Supported Servers ... 45
10.2 PCIe Re-timer Card ... 45
10.3 PCIe Cables ... 49
11 Environmental Requirements and Reliability ... 50
11.1 Environmental Requirements ... 50
11.2 Vibration and Shock ... 50
11.3 Mean Time Between Failures (MTBF) Requirements ... 51
11.4 Regulations ... 51
12 Labels and Markings ... 51
12.1 PCBA Labels and Markings ... 51
12.2 Chassis Labels and Markings ... 52
13 Prescribed Materials ... 52
13.1 Sustainable Materials ... 52
13.2 Disallowed Components ... 52
13.3 Capacitors and Inductors ... 53
13.4 Component De-Rating ... 53
14 Appendix A: Interconnect Pin Definitions ... 53
14.1 Pin Definitions on PCIe Expansion Board ... 53
15 Appendix B: Error Code Definition ... 61
16 Revision History ... 64
2 Scope

This document describes the hardware specification used in the design of Facebook's PCIe-based storage unit, code-named "Lightning".
This specification is still preliminary and subject to change.
3 Overview

Lightning is a PCIe version of the Open Compute Project's Open Vault storage. It supports up to 60 Solid State Drives (SSDs) connected via Gen3 PCIe links to a single external host or up to four hosts.

The Lightning board set resides in the same 2U chassis as the OCP Honey Badger storage server. The Honey Badger Drive Plane Board is replaced with a PCIe Drive Plane Board (PDPB) that supports PCIe SSDs. The Honey Badger Baseboard and Micro Server Card are replaced with the Lightning PCIe Expansion Board (PEB). The PEB connects 16 lanes of PCIe Gen3 to one, two, or four server hosts and uses PCIe Gen3 switches to fan out to the SSDs on the PDPB.

Apart from the PDPB and PEB, the rest of the storage server is unchanged. This includes the six fan modules and fan control board (FCB), the power transition board (PTB), the bus bar clip, the tray cable arms, etc. The tray latches remain unchanged. The PEB will use the same card guides as the Honey Badger Baseboard, while the card latches will be changed to accommodate the increased insertion force of the new connectors between the PEB and PDPB.

The mechanical design of the chassis is mostly unchanged, so the 15 bays for 3.5-inch Hard Disk Drives (HDDs) are used to connect SSD cartridges that contain either a single SSD or two M.2 modules. The SSDs are small form factor (SFF) 2.5-inch drives; the cartridge either holds the single SSD or includes an adapter card to carry the two M.2 modules. The SSD cartridge is mounted and retained in the chassis in a similar way to Honey Badger / Knox. It could also support future 3.5-inch SSDs.
3.1 Reference Document
The following reference documents about Open Vault Storage and Honey Badger can be found on the Open Compute Project website, storage track: http://www.opencompute.org/wiki/Specs_and_Designs_Page#Storage
• Open Compute Project, Facebook, Open Vault Storage Hardware Specification, v0.8, Jan 24, 2014. Web link: http://files.opencompute.org/oc/public.php?service=files&t=94978d74f940986f99e4282a424ad08f
• Open Compute Project, Facebook, Honey Badger – Light Weight Compute Module in Open Vault Storage, v0.8, Jan 28, 2015. Web link: http://files.opencompute.org/oc/public.php?service=files&t=5dc72e32ba081baf0a8a99b46183888c
4 Lightning Hardware System Overview
4.1 Lightning Key Features
As a PCIe JBOD, the Lightning system has the following key features:

• 16 lanes of PCIe Gen3 cabled to a single host, two hosts (8 lanes each), or four hosts (4 lanes each) (stretch goals)
  - Mini-SAS HD (SFF-8644) cables, x4 from each connector; a cable can be a single x4 or an "x8" (two x4 bonded together)
  - A custom version of the cable is needed to carry the PCIe clock, reset, and other sideband signals (I2C, USB, etc.)
  - The host connection(s) use a x16 PCIe expansion adapter (re-timer card)
• One or two SSDs per existing drive bay
  - NVMe compliant
  - Target power of <=14 W for each drive bay
• SSD Cartridge 1:
  - Single SSD, x4 PCIe Gen3
  - 2.5" form factor, 15 mm thickness
• SSD Cartridge 2:
  - Single SSD, x4 PCIe Gen3
  - 2.5" form factor, 7 mm thickness
• SSD Cartridge 3:
  - Two M.2 flash modules with an adapter card, x2 PCIe Gen3 to each
  - Supports both 2280 and 22110 M.2 form factors
  - The dual-M.2 cartridge may be a single FRU for PCIe hot-plug; the stretch goal is individual hot-plug
• PCIe switch supports:
  - PCIe hot-plug: fail, remove, replace, and surprise removal
  - PCIe Downstream Port Containment (DPC)
  - PCIe Extended Downstream Port Containment (eDPC)
  - One host (x16), two hosts (x8 each), or four hosts (x4 each) per x16 upstream connection
  - Up to 2x x16 upstream ports for a total of 32x upstream lanes
  - Up to 30 SSD devices downstream of the switch, x2 to each device; or up to 15 SSD cartridges, x4 to each drive bay
  - Each SSD cartridge is assigned and mapped to only one host (i.e., there is no "pooling" of storage)
  - There is no host-to-host communication through the switch
• A BMC running OpenBMC firmware on the PEB provides the enclosure management functionality for the Lightning tray and chassis
  - One BMC per PEB, responsible for tray-level management functionality
  - The two BMCs on the two trays work together for chassis-level management functionality, the same way as Open Vault Storage (Knox)
  - Each BMC is accessible by the host CPU subsystem (in-band management) through a USB bus routed on the custom version of the Mini-SAS HD cable; for current OCP servers, a USB cable runs between the rear USB connector on the server motherboard and a USB connector on the re-timer card
  - Each BMC is accessible by the host BMC (OOB management) through an I2C bus routed on the customized Mini-SAS HD cable
  - The host BMC(s) will communicate with the Lightning BMC via IPMB, USB, or PCIe (to be finalized later)
  - The AST2400 is adopted for EVT, connecting PCIe x1 from the BMC to the PCIe switch, with the BMC acting as an end device only
  - The MCTP protocol is being considered as a future in-band management approach between the host(s) and BMC (stretch goal)
  - Each BMC could later support up to 4x host connections for I2C and PCIe (MCTP) (stretch goal)
• Mechanically, the Lightning tray is compatible with the Honey Badger tray that accepts the Baseboard, and also supports the Knox tray configuration with a single SEB on the A side. The same sheet metal shall work for all 3x designs, except:
  - Due to the increased insertion force of the new connectors between the PEB and PDPB, the card latches in the front will be changed accordingly
• System airflow "CFM per Watt" is not to exceed the previous generation (Honey Badger light-weight storage server)
Figure 1 shows an overview of the Lightning hardware system.
Figure 1 Lightning Hardware System Overview
4.2 Key Accessible Items
List of front-accessible Lightning Hardware System components:
• Two SSD trays
• Two PCIe Expansion Boards (PEB), one for each tray
The PEB includes the following items:
• Up to eight Mini-SAS HD connectors, SFF-8644 form factor
• Power and status LEDs for system information
• Four LEDs as uplink indicators
• Three LEDs as switch zoning indicators
• One OCP debug header
• One USB 2.0 port, reserved for BMC debug
Figure 2 shows the details of Lightning PEB with accessible items (example only).
Figure 2 Lightning PEB
4.3 System Component Layout
Figure 3 shows the layout of the major system components in a top view of the Lightning hardware system.
Figure 3 Lightning System Components Layout
4.4 Host Connection Block Diagram
Figure 4 shows the overall concept of the Lightning host connection. Each custom Mini-SAS HD cable carries the following signals:

• PCIe Gen3 x4 lanes
• PCIe sideband signals: clock, reset
• USB bus for in-band management
• I2C bus for OOB management
The following Mini-SAS HD cable lengths will be tested:
• 1.5 meters
• 2 meters
Figure 4 Lightning To Host Connection Block Diagram
4.5 PCIe Block Diagram
There are at least two possible PCIe switch configurations for the Lightning PEB. These require no hardware changes and can be done simply by loading a different switch configuration file.
Figure 5 shows the PCIe block diagram with 15x SSDs, each SSD connected using the full x4 PCIe connection. The upstream connection supports up to 32 lanes to the host(s).
Figure 5 Lightning PCIe Block Diagram, x4 connections
Figure 6 shows the PCIe block diagram with 30x SSDs, each SSD connected using an x2 PCIe connection. A carrier card splits the x4 PCIe connection into two separate x2 connectors. The upstream connection supports up to 32 lanes to the host(s).
In both solutions, there is one PCIe lane connected between the BMC and the PCIe switch, for PCIe in-band management.
According to PCIe signal integrity simulation results, PCIe re-drivers could be needed on the PDPB. In addition, to support the dual-M.2 cartridge, the re-drivers are required to support one x4 port that can also be bifurcated into two x2 ports. The need for PCIe re-drivers will be re-evaluated during EVT.
Figure 6 Lightning PCIe Block Diagram, x2 connections
4.6 System I2C Topology
Figure 7 shows the system I2C topology of Lightning. This mainly reflects the enclosure management structure of Lightning hardware system. The BMC chip has a total of 14 I2C buses. A summary of the I2C bus segments implemented in Lightning PEB and system is as follows:
Common portion:
• I2C_A: Combined with I2C_D; both are legacy from Honey Badger. To PEB local devices.
• I2C_B: Legacy from HB. To PDPB devices.
• I2C_C: Legacy from HB. Shared control of the FCB by both BMCs in the chassis.
• I2C_Mini-SAS_1/2/3/4/11/12/13/14: New. Host connections.
• I2C_8 and I2C_9: New. To the PDPB for everything SSD related: SMBus connections of each SSD; clock buffers; PCIe re-drivers (if needed; not shown).
• I2C_10: New. Slave port of the PCIe switch.
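As an illustrative summary only (the grouping mirrors the list above; the names are transcribed from the text and are not a driver API), the bus-to-target mapping can be captured as a small lookup table:

```python
# Illustrative map of the Lightning BMC I2C bus segments to their targets,
# transcribed from the list above (names follow the text, not real firmware).
I2C_BUS_MAP = {
    "I2C_A": "PEB local devices (legacy from Honey Badger, combined with I2C_D)",
    "I2C_D": "PEB local devices (legacy from Honey Badger, combined with I2C_A)",
    "I2C_B": "PDPB devices (legacy from Honey Badger)",
    "I2C_C": "FCB, shared by both BMCs in the chassis (legacy from Honey Badger)",
    "I2C_8": "PDPB: SSD SMBus, clock buffers, PCIe re-drivers (if fitted)",
    "I2C_9": "PDPB: SSD SMBus, clock buffers, PCIe re-drivers (if fitted)",
    "I2C_10": "Slave port of the PCIe switch",
}
# One new bus segment per Mini-SAS HD host connection.
for n in (1, 2, 3, 4, 11, 12, 13, 14):
    I2C_BUS_MAP[f"I2C_Mini-SAS_{n}"] = f"Host connection {n}"

def buses_for(keyword):
    """Return the bus names whose target description mentions keyword."""
    return sorted(name for name, desc in I2C_BUS_MAP.items() if keyword in desc)
```

For example, `buses_for("PDPB")` picks out the three segments that leave the PEB for the drive plane board.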
Figure 7 Lightning System I2C Topology
4.7 PCIe Clock Tree
There are different ways to implement the PCIe clock scheme. Figure 8 shows the first option, and Figure 9 shows the second option. In the first option, the clocks going to the PDPB for all SSDs are driven directly from the clock generator; in the second option, they are driven directly by the PCIe switch.
For both solutions:
• Four clock buffers on the PDPB will be needed to fan out from 4x clocks to 15x / 30x clocks.
• SSC clock mode may be required but default is Non-SSC (may be changed in accordance with EMI requirements for Lightning hardware system).
• If a particular vendor solution has limitations, SSC clocking within the Lightning system is optional.
Figure 8 Lightning PCIe Clock Tree Option 1
Figure 9 Lightning PCIe Clock Tree Option 2
4.8 PCIe Sideband Signals
PCIe sideband signals are critical to support hot-plug functionality, especially the IFDet# (aka PRESENT) from each SSD. Due to this consideration, all sideband signals should be connected to PCIe switch(es). The other three signals are PCIe Reset, Attention LED, and Power Control. Different switches may require slightly different specific implementations.
Figure 10 shows the basic sideband signal connections using one GPIO expander as an example.
More considerations:
• For both solutions, to keep the same design for PDPB, all the GPIO expanders must be placed on the PEB.
Figure 10 SSD Sideband Signals
4.9 PCIe Lanes Signal
PCIe lane reversal and differential pair polarity reversal must be supported, but should be implemented carefully. For example, if signal polarity reversal is implemented it must be done in both TX and RX directions for the same PCIe lanes.
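The TX/RX pairing rule can be stated as a simple consistency check; a minimal sketch (the per-lane boolean lists are a hypothetical representation of a board's routing choices, not part of this specification):

```python
def polarity_reversal_consistent(tx_reversed, rx_reversed):
    """Check the rule above: if a lane's differential-pair polarity is
    reversed, it must be reversed in both the TX and RX directions.

    tx_reversed / rx_reversed: one boolean per PCIe lane, True if that
    lane's P/N wires are swapped in that direction (hypothetical format).
    """
    if len(tx_reversed) != len(rx_reversed):
        raise ValueError("TX and RX lane counts must match")
    return all(tx == rx for tx, rx in zip(tx_reversed, rx_reversed))
```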
4.10 SSD Cartridge Power Control
Each SSD slot must include the ability to control power to the slot. This allows the user to power cycle each SSD slot individually if needed. However, by default, when the system powers on or an SSD is inserted into a powered system, the SSD must power up and train the PCIe link without intervention.
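The required behavior can be sketched as follows; `SlotPowerGpio` and its method names are hypothetical stand-ins for the real per-slot Power Control lines on the GPIO expanders, not an actual register map:

```python
import time

class SlotPowerGpio:
    """Hypothetical wrapper around the per-slot Power Control lines."""

    def __init__(self, num_slots=15):
        # By default, every slot powers up without intervention.
        self.power_on = [True] * num_slots

    def set_power(self, slot, on):
        self.power_on[slot] = bool(on)

def power_cycle_slot(gpio, slot, off_time_s=2.0):
    """Power cycle a single SSD slot, leaving all other slots untouched.
    The drive is expected to retrain its PCIe link on power-up."""
    gpio.set_power(slot, False)
    time.sleep(off_time_s)
    gpio.set_power(slot, True)
```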
5 Lightning PCIe Expansion Board
5.1 Block Diagram
Figure 11 illustrates the functional block diagram of the Lightning PCIe Expansion Board (PEB).
Figure 11 Lightning PEB Block Diagram
Notes:
• I2C header for the PCIe switch should be reserved on board for debug purposes.
5.2 PEB Form Factor and Placement
The maximum dimensions of the PCIe Expansion Board form factor are 354 mm x 251 mm (the same as the Honey Badger Baseboard), but it may be possible to shorten the 251 mm dimension. Figure 12 illustrates the outline, keep-out areas, etc. Note that the goal is to minimize the depth of the board, since the depth required for Honey Badger is not anticipated to be required for the Lightning PEB.
Placement and routing are heavily restricted by the compatibility requirements of the main Lightning system. The PEB will need to slide into the chassis using the same guide rails as the Honey Badger Baseboard. In addition, the space available on the board for external connectors, the PCIe switch, and other larger components is restricted to the section of the board that clears the front of the chassis, since the board space that sits under the chassis sheet metal is severely restricted in terms of component height, airflow, etc.
Figure 12 Lightning PEB PCB Dimension
Figure 13 shows the key components placement and layout analysis for PCIe ports assignment and lanes direction, etc.
Main considerations:
• The PCIe switch location and orientation should be chosen to: (1) minimize the PCIe trace length to the edge connectors, so that the total PCIe trace length, including the portion on the PDPB, is minimized; and (2) optimize the PCIe trace lengths to all downstream connectors and to the Mini-SAS HD connectors to the host(s).
• One set of four Mini-SAS HD connectors on the left side of the PEB
• One set of four Mini-SAS HD connectors on the right side of the PEB
• Debug header and LEDs in the middle portion of the PEB
• USB connector on the left portion of the PEB
Figure 13 Lightning PEB Placement
5.3 PCIe Switch Requirements
The PCIe switch is a full non-blocking, low-latency, low-cost and low-power PCIe Gen3 Multi-Root switch. Requirements for the PCIe switch are shown below:
• PCIe Base Specification 3.1 compliant
• Multi-root support: up to four upstream ports and at least 4x switch partitions
• High performance: cut-through packet latency <170 ns; non-blocking crossbar
• QoS support
• SSC isolation support
• RAS features and error containment
• Out-of-band initialization options: serial EEPROM, I2C, and SMBus
• Downstream Port Containment (DPC) support
• Hot-plug and surprise-plug controllers per port
• In-band PCIe switch programming
• Support for x2 PCIe bifurcation
• On-chip PCIe eye capture and diagnostic capabilities
5.4 Voltage Monitor
A voltage monitor is required on the Lightning PCIe Expansion Board to ensure proper operation of all power rails at all times. The voltages will be reported as part of the enclosure status, as described in Chapter 9 (Baseboard Management Controller). The voltage rails to be monitored are shown below:
Table 1 Monitored Voltage Rails on PEB
Power Rail Voltage
P1V8_STBY 1.8V
P3V3_STBY 3.3V
P5V_STBY 5V
P12V 12V
Switch voltage(s) X.XV
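A rail check against Table 1 might look like the sketch below; the ±5% window is an assumption (the specification does not state a tolerance here), and the still-TBD switch rail is omitted:

```python
# Nominal voltages from Table 1; the PCIe switch rail ("X.XV") is still TBD.
NOMINAL_RAILS = {"P1V8_STBY": 1.8, "P3V3_STBY": 3.3, "P5V_STBY": 5.0, "P12V": 12.0}

def check_rails(readings, tolerance=0.05):
    """Return the rails whose reading is missing or outside nominal +/- tolerance.

    readings: dict of rail name -> measured voltage in volts.
    tolerance: assumed +/-5% window, not taken from this specification.
    """
    faults = []
    for rail, nominal in NOMINAL_RAILS.items():
        value = readings.get(rail)
        if value is None or abs(value - nominal) > nominal * tolerance:
            faults.append(rail)
    return faults
```

The fault list would feed the enclosure status reported by the BMC (Chapter 9) and the red enclosure-fault LED.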
5.5 Connectors
Sections below describe the connectors that reside on the Lightning PCIe Expansion Board.
5.5.1 PEB to DPB Connector and Pinout
The card edge connectors that connect the PEB and the PDPB are different for Lightning; they route 60 lanes of PCIe to the PDPB in order to support x4 lanes per SSD bay. A single SSD uses all x4 lanes, while the dual-M.2 cartridge connects x2 lanes of PCIe per drive.
The PEB to DPB connectors are Amphenol 200-pin 0.8mm straddle-mount connectors (G639f200X2143XHR). The design uses 3 of these connectors for both signal and power connections.
Figure 14 PCI-E 0.8mm Edge Golden Fingers
Due to the large number of signals, the full connector pin definition is provided in Section 14.1, part I, part II and part III.
5.5.2 External Mini-SAS HD Connector
The Lightning PCIe Expansion Board interfaces with a server head node via external Mini-SAS HD connectors in the standard SFF-8644 form factor. Up to eight host connectors are provided. Figure 15 shows the concept with either a single x4 cable connector or dual x4 (aka x8) cable connectors.
Figure 15 Mini-SAS HD Connector
The pinout of the host connectors has been modified from a standard Mini-SAS HD connection to allow for sideband signaling. Each of the x4 connectors has exactly the same pinout. Please refer to Table 2 for the final version of the pinout, which will potentially be aligned with the standard proposed by PCI-SIG.
Table 2 Mini-SAS HD Connector and Cable Pinout
5.5.3 Debug Header
The PCIe Expansion Board includes a debug header on its front side. It supports hot-plugging of an OCP debug card. The card provides the following functions:

• Two 7-segment LED displays: show BMC firmware POST information and the system error codes defined in Section 9.10.
• One RS-232 serial connector: provides the BMC console connection.
• One reset switch: triggers a BMC reset when pressed.
[Amphenol cable assembly drawing C-NEETCT-F499: Mini-SAS HD (SFF-8644) cable assembly, PCIe Gen3, 100 ohm, 30 AWG twin-ax; part numbers NEETCT-F401/F402/F403 for 1.0 m / 1.5 m / 2.0 m lengths; minimum bend radius R27 mm, with a 76 mm minimum allowance required for bending the cable from the faceplate. Per the pinout, each x4 connector carries TX0-TX3+/-, RX0-RX3+/-, PCIE_CLOCK_DP/DN, PCIE_RESET_N, CABLE_PRESENT_N, SDA, SCL, USB_DP/DN, and GND on pins A1-A9, B1-B9, C1-C9, and D1-D9; the wiring is identical on both plugs.]
The connector for debug header is a 14-pin, shrouded, right-angled, 2mm pitch connector. Figure 16 shows an illustration. The debug card has a key to match with the notch to avoid pin shift when plugging it in.
Figure 16 Debug Header Illustration
Table 3 lists the pin definitions of the debug header:
Table 3 Debug Header Pin-Out
Pin(CKT) Function
1   Low HEX character [0] (least significant bit)
2   Low HEX character [1]
3   Low HEX character [2]
4   Low HEX character [3] (most significant bit)
5   High HEX character [0] (least significant bit)
6   High HEX character [1]
7   High HEX character [2]
8   High HEX character [3] (most significant bit)
9   Serial Transmit
10  Serial Receive
11  System Reset
12  Serial Port Select (1 = Switch; 0 = BMC)
13  GND
14  VCC (+5VDC)
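The two hex characters together can display one 8-bit error code. A minimal sketch, assuming the code's low nibble drives pins 1-4 and the high nibble drives pins 5-8 per the bit ordering in Table 3:

```python
def debug_header_pins(error_code: int) -> dict:
    """Map an 8-bit error code onto debug header pins 1-8.

    Pins 1-4 carry the low hex character (bit 0 on pin 1), and
    pins 5-8 carry the high hex character, per Table 3.
    """
    assert 0 <= error_code <= 0xFF
    pins = {}
    for bit in range(4):
        pins[1 + bit] = (error_code >> bit) & 1        # low hex character
        pins[5 + bit] = (error_code >> (4 + bit)) & 1  # high hex character
    return pins
```

For example, error code 0x5A lights the pins so the debug card shows "5" on the high character and "A" on the low character.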
5.6 Switch and Buttons
The PCIe Expansion Board will have one push button and a slide switch to trigger a reset signal to either the BMC or the PCIe switch.
With the slide switch in its default setting, the reset button triggers a reset to the BMC.
A yellow LED located close to / behind the slide switch indicates when the reset to the PCIe switch is selected.
5.7 LEDs
The PCIe Expansion Board will have several LEDs on its front edge to display various
information as shown below:
• One single color in blue, for power and system identify; refer to Table 4
• One single color in red, for enclosure fault status; refer to Table 5
• Four single color in green per x16 uplink, for uplink indicator; refer to Table 6
• Three single color in green per x16 uplink, for switch zoning indicator; refer to Table 7 (Stretch Goal)
• One single color in yellow, for PCIe switch heartbeat (blinking)
• One single color in yellow, for BMC heartbeat (blinking)
The following tables summarize the definitions of the LED behaviors.
Table 4 Power and System Identify LED
Power and System Identify Blue LED
Power off, System Identify off Consistently off
Power off, System Identify on On 0.1sec, off 0.9sec, and loop
Power on, System Identify off Consistently on
Power on, System Identify on On 0.9sec, off 0.1sec, and loop
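The identify patterns in Table 4 amount to a 1-second blink period whose duty cycle depends on the power state. A sketch, assuming the LED driver evaluates the state periodically:

```python
def identify_led_lit(power_on: bool, identify_on: bool, t: float) -> bool:
    """Return True if the blue LED should be lit at time t (seconds),
    per Table 4: steady off/on when identify is off; a 1 s blink loop
    (0.1 s on if power is off, 0.9 s on if power is on) when identify is on."""
    if not identify_on:
        return power_on          # consistently off / consistently on
    on_time = 0.9 if power_on else 0.1
    return (t % 1.0) < on_time   # position within the 1-second loop
```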
Table 5 Enclosure Fault Status LED
Enclosure Fault Status Red LED
Normal system operation Off
Any fault in whole enclosure On
Reserved for future use Blinking
Table 6 LED Indicator for Uplink
Uplink Status Green LED 1 Green LED 2 Green LED 3 Green LED 4
X16 ON OFF OFF OFF
X8 ON ON OFF OFF
X4 ON ON ON ON
Table 7 LED Indicator for Switch Zoning
Mini-SAS Port Link Status
Green LED 1 Green LED 2 Green LED 3
1x16 ON OFF OFF
2x8 OFF ON OFF
4x4 OFF OFF ON
5.7.1 Implementation of enclosure status LED
The enclosure fault LED (Red) should be designed to meet the following scenarios:
• If the BMC firmware hangs, the enclosure fault LED (Red) will be turned ON.
• If the BMC firmware runs normally, it will turn on the enclosure fault LED (Red) for any fault / warning within the whole system.
5.8 PCB Stack-up
Table 8 below shows an example PEB stack-up, where a low-loss PCB material is chosen to ensure that the PCIe signals can be driven at 8GT/s without requiring re-drivers. Detailed impedance control information is listed for each layer. Depending on the final solution, this section is subject to change.
Table 8 PCB Stack-up and Impedance Control for PEB
6 Lightning PCIe Drive Plane Board
6.1 Block Diagram of PDPB
Figure 17 illustrates the functional block diagram of the PCIe drive plane board (PDPB) for Lightning.
[Diagram contents: three connectors to the PEB (Conn1: power and signals; Conn2 / Conn3: signals); PCIe Gen3 links, with re-drivers as needed, fanning out to fifteen SSD connectors (SSDC0 - SSDC14); SSD attention LEDs and SSD IDENT; SSD power control; EEPROM; voltage monitor; four temperature sensors; I2C_B, I2C_C, PWM, and GPIOs; PCIe reset; and high-power card edge connectors, with power routing to the PTB and then the FCB.]
Figure 17 Lightning PCIe Drive Plane Board Block Diagram
6.2 PDPB Form Factor and Placement
The Drive Plane Board form factor is 270mm x 510mm, same as Honey Badger / Knox. Figure 18 illustrates the form factor and rough placement.
The PCIe Drive Plane Board (PDPB) will be sized almost identically to the Knox and Honey Badger DPBs with the exception of the card edge connectors. See Section 6.2 of the OCP Open Vault Storage Specification v0.8 for the board placement drawing, and
Section 5.3 of the OCP Honey Badger Specification v0.8 for modifications made for that design. The two sets of PCIe card edge connectors (for the two SAS Expansion boards or the Honey Badger Baseboard) are replaced with higher density card-edge connectors for the Lightning PDPB. The locations of all other connectors are exactly the same.
Figure 18 Lightning PCIe Drive Plane Board Placement
6.3 Thermal Sensors
As required in Honey Badger / Knox, LM75 or equivalent components will be placed in the locations shown in Figure 19. For further details, see the thermal requirements section.
Figure 19 Drive Plane Board Thermal Sensor Locations
6.4 Voltage Monitor
A voltage monitor is required for the Lightning PCIe drive plane board in order to ensure proper operation of all critical power rails at all times. The voltages will be reported as part of the enclosure status as described in Chapter 9 (Baseboard Management Controller). The voltage rails to be monitored are shown in Table 9 below.
Table 9 Monitored Voltage Rails on PDPB
Power Rail   Voltage
P12V_PCIe    12.0V
P3V3         3.3V
P2V5         2.5V
P12V_SSD14   12.0V
6.5 PCIe Re-drivers
PCIe Gen3 re-drivers may be needed on the longest traces from the PCIe switch on the PEB to the SSDs on the PDPB. PCIe re-drivers can be placed on either the PEB or the PDPB based on signal integrity simulation results. Alternatively, a different low-loss PCB material may be used.
For any re-driver (or repeater) on the PCIe Drive Plane board, it is recommended that debug headers, strap resistors, or programmable EPROM devices be used so that critical settings or errata fixes can be easily loaded into the device as validation and tuning dictate.
The recommended device is a Maxim MAX24104ELT device.
Key features:
• Support for both x4 and 2x x2 PCIe devices
6.6 Connectors
Sections below describe the connectors that reside on the Lightning PCIe drive plane board.
6.6.1 Signal Connector to PCIe Expansion Board
As described in Section 5.5.1, the PEB to DPB connectors are Amphenol 200-pin 0.8mm straddle-mount connectors (G639f200X2143XHR). The design uses three of these connectors for both signal and power connections.
Figure 20 Illustration of 200 Pin Straddle Connector
Due to the large number of signals, the full connector pin definition is provided in Section 14.1, part I, part II and part III.
6.6.2 SSD Connectors (SFF-8639)
The Lightning PCIe drive plane board will use standard SFF-8639 connectors to connect the PCIe SSDs. SFF-8639 is also known as the U.2 form factor. The connector illustration is shown in Figure 21, and the pin-out assignment is shown in Table 10.
Figure 21 Right-Angle SSD Connector
Table 10 SSD Connector Pin-Out
6.7 LEDs
On the PCIe drive plane board, each SSD will have one bi-color LED to indicate its status, driven by both the PCIe switch and the SSD itself. Please refer to Section 4.8 Figure 10; Table 11 summarizes the conditions the LEDs are to represent:
• The Blue LED is driven by the SSD ACTLED (pin P11). The desired behaviors are as below, but it may follow the vendor's definition:
o When the SSD is online and healthy, turn on the Blue LED;
o When the SSD is active, the behavior follows the vendor's definition.
• The Red LED is driven by the PCIe switch ATTLED#:
o When there is any fault for the SSD, turn on the Red LED;
• When there is no SSD inserted, turn off both LEDs.
Table 11 Drive Plane Board LED for SSD Status
Solid State Drive Status
Blue LED Red LED
No Drive Inserted OFF OFF
Drive Online and Healthy ON OFF
Drive is Active Follow Vendor’s definition OFF
Drive Failure OFF ON
Drive Identify OFF Blinking
6.8 PCB Stack-up
Table 12 below shows an example PDPB stack-up, where a low-loss PCB material is chosen to ensure that the PCIe signals can be driven at 8GT/s without requiring re-drivers. Detailed impedance control information is listed for each layer. Depending on the final solution, this section is subject to change.
Table 12 PCB Stack-up and Impedance Control for PDPB
7 Lightning Power System
7.1 Board Level Power Budget
The overall power consumption of the Lightning PEB is <= 130W.
7.2 System Level Power Budget
The overall power consumption of the Lightning hardware system is <= 770W.
7.3 PEB Bulk Converter Solutions
7.3.1 Key Power Rail Buck Converters
All key power rail buck converters residing on the PCIe Expansion Board are to be designed with a 93% efficiency target at normal load. Selection of components for this solution is to be determined by the vendor. If higher efficiency is available at additional cost, the vendor shall present those options to Facebook.
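The efficiency target translates directly into input power drawn upstream of the converters. A quick check against the board budget from Section 7.1 (the helper below is illustrative, not from the spec):

```python
def input_power_w(load_w: float, efficiency: float = 0.93) -> float:
    """Power drawn at the converter input for a given regulated load,
    assuming the 93% efficiency target from Section 7.3.1."""
    return load_w / efficiency

# At the 130 W PEB budget, a 93%-efficient conversion stage draws
# roughly 139.8 W at its input, i.e. about 9.8 W of conversion loss.
```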
7.4 Power Sequencing
The design should follow the key ICs' power-up and power-down sequence requirements without any violations, and ensure power cycling with adequate reliability. The power sequence also needs to be coordinated between the Lightning PEB and PDPB, since the PCIe switch and all end devices sit on separate boards.
7.5 Power Button
There won't be a power button on the PEB or PDPB. The system shall always be in operation once power is supplied, or resume operation upon restoration of power after a power failure event.
There is a power button on the PTB of Lightning / Honey Badger, and it is always kept on. The use of that power button shall not be required to power on the system. It is only intended to enable an easy way to power cycle an entire tray during tray-level service.
8 Mechanical of Tray and SSD Cartridge
8.1 Changes for Tray Latches
The mechanical design of the chassis is mostly unchanged; this includes the six fan modules and fan control board (FCB), the power transition board (PTB), the bus bar clip, and the tray cable arms. The tray latches remain unchanged, and the PEB will use the same card latches and approximately the same form factor as the Honey Badger Baseboard.
However, the card latches will be changed to accommodate the increased insertion force of the new connectors between the PEB and PDPB. A jack screw will be used, and the card latch will be modified accordingly. Figure 22 shows the concept in development.
Figure 22 Card Latches and Jack Screw
8.2 SSD Connectors and Cartridges
The SSD connectors will be SFF-8639 connectors (aka U.2). Three SSD cartridges are supported by the Lightning PCIe Drive Plane Board. All cartridges support placing 2.5-inch PCIe SSDs in the existing 3.5-inch hard disk drive bay supported by the Knox / Honey Badger mechanical design. The PDPB connector for the SSD cartridge is the SFF-8639 connector, which supports 4 lanes of PCIe.
8.3 Single SSD Cartridge (15 mm)
The single SSD cartridge provides the mechanical mounting required to fit a single 15mm SSD into the 3.5-inch HDD bay. The SSD docks directly into the SFF-8639 on the DPB and no electrical interposer is required. The drawings below show the current design for the single SSD cartridge.
Figure 23 Lightning Single SSD Cartridge, 15 mm
8.4 Single SSD Cartridge (7 mm)
The single SSD cartridge provides the mechanical mounting required to fit a single 7mm SSD into the 3.5-inch HDD bay. The SSD docks directly into the SFF-8639 on the DPB and no electrical interposer is required. The drawings below show the current design for the single SSD cartridge.
Figure 24 Lightning Single SSD Cartridge, 7 mm
8.5 Dual M.2 Module Cartridge
The M.2 cartridge will contain an interposer board that allows the installation of 2x M.2 2280 or 22110 D5+ SSDs. The interposer will also include a temperature sensor that is connected to SMBUS/I2C.
Lightning Dual M.2 Cartridge
The diagram below shows the wiring of the interposer for the dual SSD Cartridge.
Figure 25 Lightning Dual SSD Cartridge Interposer Connections
8.6 PCIe Expansion Board PCB Thickness
To ensure proper fit of the PCIe expansion board in the PCIe Drive Plane Board's connector, the PEB must be 62 mils thick.
8.7 Silk Screen
The silk screen shall be white in color and include labels for the components listed below. Additional items required on the silk screen will be defined during the product development phase.
• All SSDs • Mini-SAS HD connectors • Fan headers • Debug headers • PCIe Switches (numbered if more than one per board) • All LEDs • All buttons (push button or slide switch)
8.8 PCB Color
Different PCB colors shall be used to help identify the board revisions. Table 13 below indicates the PCB color to be used for each development revision.
Table 13 PCB Color for Different Revision
Revision PCB Color
EVT Red
DVT Yellow
PVT Green
9 Baseboard Management Controller (BMC)
9.1 Overview
Since the current PCIe switches do not support any form of in-band management aside from switch-specific functions, a BMC will be required. The BMC will be an ASPEED AST2400 and will leverage the Honey Badger design. The BMC will be accessible from the host server via a USB bus from the host CPU subsystem, and also through a PCIe x1 connection. The BMC will also be accessible from the host server(s) via sideband signals (I2C) from the host BMC. Additionally, the BMC FW will be based on OpenBMC, and any changes will also be open sourced.
Here is a list of differences from the Honey Badger BMC:
Table 14 Lightning BMC Compared to Honey Badger
Item Add: Remove:
1 I2C to PCIe switches SMBUS to NIC
2 I2C to SSDs All LAN and NIC support
3 PCIe switch configuration and reset All micro-server connections and support
4 PCIe cable detection All SAS HBA, re-driver, and switch connections
5 USB connection from Mini-SAS HD connector to BMC
6 PCIe x1 connection from PCIe switch to BMC
7 Support for 1, 2 or 4 hosts (Multiple hosts is Stretch Goal)
8 PCIe switch configuration to enable multiple hosts (Stretch Goal)
9 I2C to PCIe re-drivers (as needed)
9.2 Host to Lightning BMC (Prioritized)
The following three connections are planned between the host and the Lightning BMC:
1. Connection via USB
2. Connection via PCIe
3. Connection via I2C
9.2.1 Connection via USB
This connection will be the primary mode of communication between the host and the Lightning BMC. A USB connection runs from the Leopard to the USB hub on the re-timer card, which has USB pins running over the Mini-SAS HD cable to the Lightning BMC. The connection between the host OS and the Lightning BMC uses the IPv6-over-USB interface and is bi-directional. This enables the host OS to connect to OpenBMC via SSH and a RESTful interface.
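For example, the host could reach OpenBMC's RESTful interface at the BMC's link-local address on the usb0 interface. The address, port, and path below are placeholders; the only fixed detail is that a zone index inside a URL must be percent-encoded (RFC 6874):

```python
def bmc_rest_url(lladdr: str, iface: str, path: str, port: int = 8080) -> str:
    """Build a REST URL for a link-local IPv6 address over the USB link.
    The zone-index separator '%' must appear as '%25' inside a URL."""
    return f"http://[{lladdr}%25{iface}]:{port}{path}"

# e.g. bmc_rest_url("fe80::1", "usb0", "/api/sys") for an assumed endpoint
```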
9.2.2 Connection via PCIe
The Lightning BMC will be connected to the PCIe switch via 1x PCIe lane, then to the host. In this case the BMC will always be end device only. The Host OS will communicate with the Lightning BMC via MCTP (or any other PCIe protocol).
9.2.3 Connection via I2C
There are two portions included:
I. BMC to BMC connection
The Lightning BMC will be connected to one or more host BMCs via I2C. The I2C connection is routed through one or more PCIe cables to the host repeater cards and then connected to the host BMC via the PCIe riser cards in Leopard servers. Up to 4x I2C connections (or 4x hosts) may be connected to a single Lightning BMC. The host BMC will communicate with the Lightning BMC via IPMB.
II. Leopard BMC Support
Since Leopard is a generic compute server, the changes to the Leopard BMC shall be minimal to support Lightning. The Leopard BMC shall support a "bridge" function so that applications running on a local or remote host can communicate directly with the Lightning BMC. This "bridge" function shall be able to receive IPMB commands in-band and/or out-of-band, send them to the Lightning BMC on the given IPMB channel, and provide the response. This mechanism can leverage the IPMI standard mechanism of "BMC Message Bridging" as specified in Section 6.13 of the IPMI Specification (v2.0 Rev1.1).
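The IPMB framing on this I2C link is standard (IPMI v2.0, Section 13.8): two two's-complement checksums cover the connection header and the message body. A sketch of building such a request, with illustrative slave addresses:

```python
def ipmb_checksum(data: bytes) -> int:
    """Two's-complement checksum: the covered bytes plus the checksum
    must sum to zero modulo 256 (IPMI v2.0, Section 13.8)."""
    return (-sum(data)) & 0xFF

def ipmb_request(rs_sa: int, netfn: int, cmd: int,
                 data: bytes = b"", rq_sa: int = 0x20, seq: int = 0) -> bytes:
    """Frame an IPMB request as [rsSA, netFn/rsLUN, chk1, rqSA,
    seq/rqLUN, cmd, data..., chk2]; LUNs are 0 here for simplicity."""
    hdr = bytes([rs_sa, netfn << 2])
    body = bytes([rq_sa, seq << 2, cmd]) + data
    return hdr + bytes([ipmb_checksum(hdr)]) + body + bytes([ipmb_checksum(body)])
```

For example, a Get Device ID request (NetFn App = 0x06, cmd 0x01) to an example responder address 0x2C frames as seven bytes, both checksums zeroing their covered ranges.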
9.3 Lightning BMC Support
Lightning BMC features:
• Shall provide support for IPMI commands. Some example use cases of the IPMI commands are listed below.
o FRUID - Provide access to various FRUID devices on PEB, PDPB, FCB, etc.
o SDR - Provide sensor data records for various managed sensors.
o Sensor - Provide the current value of the requested sensor.
o BMC reset - Reset/reboot the BMC.
o Get Device ID - Read the current firmware version.
o Chassis command - Identify the system by turning on the LED; power off the chassis.
o Get/set fan table - Use OCP-defined FSC commands.
• Shall support responding to various IPMI OEM commands needed for the following functionality. While defining new OEM commands, it is suggested to choose the OEM Net Function code (0x2E) that uses an IANA ID. This will enable OpenBMC to handle a variety of platforms from different manufacturers without conflict.
o Support BMC reset and BMC FW update
o Update PCIe switch FW
o Status of the SSDs (only those that are mapped to the specific host)
o Number of hosts connected
o Set switch configuration
o Reset for individual PCIe switch
• Shall monitor all the sensors and log errors in the System Event Log (SEL).
• Shall provide thermal management of the system by controlling the fans, making use of the user-provided Fan Speed Control configuration.
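Under NetFn 0x2E, the request and response data begin with the 3-byte IANA enterprise number, least-significant byte first, which is what lets OpenBMC dispatch vendor commands without collisions. A sketch (the IANA ID value below is a placeholder):

```python
def oem_request_data(iana_id: int, payload: bytes) -> bytes:
    """Prefix OEM command data with the 3-byte little-endian IANA
    enterprise number, per the IPMI OEM group convention."""
    return iana_id.to_bytes(3, "little") + payload
```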
9.4 BMC Heartbeat
A BMC heartbeat LED circuit will be provided to give a visual confirmation that the BMC is up and running.
9.5 BMC Watchdog Timer
Firmware shall enable the WDT capability of the BMC hardware to make sure critical daemons in the OpenBMC firmware are running properly, and shall handle errors by either restarting the offending application or rebooting the BMC.
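The daemon-health check and WDT petting can be sketched as a loop; the pet and health callables below stand in for the real AST2400 WDT driver and process checks:

```python
import time

def watchdog_loop(pet, daemons_healthy, interval=1.0, iterations=None):
    """Pet the hardware watchdog only while all critical daemons report
    healthy; if a daemon hangs, petting stops and the WDT expires,
    rebooting the BMC."""
    n = 0
    while iterations is None or n < iterations:
        if daemons_healthy():
            pet()                # refresh the hardware timer
        time.sleep(interval)
        n += 1
```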
9.6 BMC UART
The BMC UART will be connected to the debug card connector to enable BMC firmware debug or development.
9.7 Debug Support
An OCP debug header will be provided to provide the BMC UART, reset, and LED codes which can be used as diagnostic codes.
9.8 BMC Firmware Update Approach
BMC firmware can be updated through the approaches below (prioritized):
• Approach 1: Host CPU –> PCIe –> BMC –> SPI Flash. In-band; Doesn’t need BMC to be functional.
• Approach 2: Host CPU –> USB –> BMC –> SPI Flash. In-band; Need BMC to be fully functional.
• Approach 3: Host CPU –> USB –> USB to SPI –> SPI Flash of BMC. In-band; Can be done when the BMC is hung or BMC FW has crashed.
• Approach 4: Host BMC –> I2C –> BMC –> SPI Flash. OOB; Need BMC to be fully functional. (Stretch Goal)
Among the above four approaches, only approach 3 needs an extra hardware circuit, as shown in Figure 26 below. A USB hub, a USB-to-SPI translator, and an SPI mux need to be provided as the data path. The USB2SPI chip also has several GPIO pins to provide the control logic below:
• SPI Mux Select
o The default setting of the SPI mux is from the BMC to the SPI flash. While updating the flash directly from USB, the Select signal needs to be asserted.
• BMC Reset
o This allows remotely resetting the BMC if the BMC FW is hung, or after the BMC FW has been updated or reloaded.
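The approach-3 recovery sequence can be sketched as below; gpio_set and spi_write are placeholders for the real USB-to-SPI bridge API, and the ordering is one reasonable interpretation of the control logic above:

```python
def recover_bmc_flash(gpio_set, spi_write, image: bytes):
    """Reflash the BMC SPI flash over USB while the BMC is hung."""
    gpio_set("BMC_RST", 0)   # hold the BMC in reset so it releases the flash
    gpio_set("SEL", 1)       # steer the SPI mux from the BMC to the USB path
    spi_write(image)         # program the flash directly through USB-to-SPI
    gpio_set("SEL", 0)       # restore the default mux setting (BMC to flash)
    gpio_set("BMC_RST", 1)   # release reset; the BMC boots the new image
```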
[Diagram contents: USB 2.0 from Mini-SAS HD Port A (host side) runs through a USB hub; one hub port runs USB 2.0 to the AST2400 BMC, another runs USB 1.1 to the USB-to-SPI device with GPIO. The USB-to-SPI device drives the SEL and BMC_RST signals and one SPI input of the SPI mux; the BMC drives the other SPI input; the mux output connects to the SPI flash. A USB connector on the hub is also shown.]
Figure 26 Lightning BMC Firmware Update Approach
9.9 Multiple Hosts
If multiple hosts are connected to the same Lightning BMC, it will respond differently to some commands (shared infrastructure vs. SSDs). The Lightning BMC will maintain the PCIe switch configuration and SSD to host mapping. For read only commands related to shared infrastructure, the BMC will respond with the same data to all hosts. If multiple hosts send a command that impacts the shared infrastructure (e.g. update firmware or fan tables), the BMC will only process the first command received and will ignore subsequent commands until 2 seconds after the first command has completed.
Since the SSD to host mapping changes depending on the number of hosts connected, the BMC must only allow the host BMC to see or control SSDs that apply to the current map. This mapping is defined by the switch configuration.
The current map can only be changed by the host connected to uplink A and must be saved in a non-volatile memory to ensure that a BMC reset will not lose the current configuration.
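The first-command-wins rule for shared infrastructure can be sketched as a small gate. The 2-second lockout is from the text above; the rest of the interface is illustrative:

```python
class SharedCommandGate:
    """Accept the first shared-infrastructure command; ignore later ones
    until 2 s after the first command has completed (Section 9.9)."""
    def __init__(self, lockout_s: float = 2.0):
        self.lockout_s = lockout_s
        self.busy_until = float("-inf")

    def try_accept(self, now_s: float, duration_s: float) -> bool:
        if now_s < self.busy_until:
            return False         # still processing, or inside the lockout
        self.busy_until = now_s + duration_s + self.lockout_s
        return True
```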
9.10 Error Code display on Debug Card
Same as the Knox and Honey Badger storage sub-systems, Table 15 below shows an overview of the proposed error codes to be displayed on the debug card when it is plugged in. For details of each specific error code, please refer to Chapter 15, Appendix B.
Table 15 Proposed Error Code for Lightning
Error code Error Description
00     No Error
01-02  Critical Crash - Switch
03-06  Critical Crash - I2C Bus
07-10  Reserved
11-12  Critical Fan Fault
23-30  Reserved
31-42  Temperature Sensor Critical
43-44  Reserved
45-48  Voltage and Current Sensor Critical
49     Reserved
50-64  SSD SMART Temperature Critical
65-66  Switch Internal Temperature Critical
67-69  Reserved
70-84  SSD Fault
85-89  Reserved
90-92  PCIe Uplink Error
93-94  Tray Pulled-out
95-99  Reserved
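A decoder for Table 15's ranges can be sketched as below; any code not covered by a listed range resolves to Reserved:

```python
ERROR_RANGES = [          # (low, high, description) per Table 15
    (0, 0, "No Error"),
    (1, 2, "Critical Crash - Switch"),
    (3, 6, "Critical Crash - I2C Bus"),
    (11, 12, "Critical Fan Fault"),
    (31, 42, "Temperature Sensor Critical"),
    (45, 48, "Voltage and Current Sensor Critical"),
    (50, 64, "SSD SMART Temperature Critical"),
    (65, 66, "Switch Internal Temperature Critical"),
    (70, 84, "SSD Fault"),
    (90, 92, "PCIe Uplink Error"),
    (93, 94, "Tray Pulled-out"),
]

def describe_error(code: int) -> str:
    """Map a two-digit debug-card error code to its Table 15 description."""
    for low, high, desc in ERROR_RANGES:
        if low <= code <= high:
            return desc
    return "Reserved"
```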
10 High Level System Consideration

This chapter describes the high-level system considerations when connecting Lightning to host(s).
10.1 Supported Servers
The first server to support the Lightning system will be Leopard populated with Broadwell-EP CPUs. Additional server support may be added in the future.
10.2 PCIe Re-timer Card
This section covers the key features and requirements of the PCIe re-timer card that is installed in the host server. Figure 27 shows the block diagram:
[Diagram contents: a PCIe x16 golden finger feeds the PCIe re-timer, which fans out as 4x PCIe x4 to Mini-SAS HD connectors 1-4. A PCIe clock buffer distributes the PCIe clock; PCIe reset, an I2C mux, voltage regulators (12V and 3.3V inputs generating 3.3V, 1.8V, and 1.0V rails), and a mini-USB connector feeding a USB hub are also shown.]
Figure 27 PCIe Re-timer Card Block Diagram
The sections below list the highlights.
10.2.1 Card form factor
The design should adhere to the PCIe CEM Specification:
• Half height, half length
• Also known as a low-profile card
• Dimensions: 167.65 mm x 68.9 mm x 14.47 mm
10.2.2 Card Requirements of Configuration
Key features:
• Use Merge[2:0] to switch between different modes: 1 x16, 2 x8, or 4 x4.
• A three-pin jumper is needed and is only configured at manufacturing.
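The Merge[2:0] encodings are not defined in this revision of the spec; the mapping below is purely a placeholder to illustrate how tooling might decode the strap setting into a bifurcation mode:

```python
# Placeholder encodings - NOT from the specification.
MERGE_MODES = {0b000: "1x16", 0b001: "2x8", 0b010: "4x4"}

def link_config(merge: int) -> str:
    """Decode an assumed Merge[2:0] strap into a bifurcation mode."""
    return MERGE_MODES.get(merge & 0b111, "reserved")
```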
10.2.3 PCIe Re-timer
The PCIe add-in Card used to connect to the host uses a PCIe re-timer for all 16 lanes.
The recommended device is an IDT 89HT0832PZCHLG8 device.
Key features:
• Support for 1x x16, 2x x8, or 4x x4 configurations • Ability to capture basic eyes • Ability to adjust Rx/Tx settings • Low power
10.2.4 Card power consumption
The estimated power consumption of the PCIe re-timer card is around 9W.
10.2.5 Cable Connector Pin-out
For Mini-SAS cable connector pin-out, please refer to Section 5.5.2, and Table 2.
10.2.6 Golden Finger Pin-out
For golden finger pin-out that connects to PCIe riser card then to the host server, please refer to Table 16 below:
Table 16 Golden Finger to PCIe Riser Card
New/Changed
Pin Side B Golden Finger Side A Golden Finger
# Name Name
1 +12v PRSNT#1
2 +12v +12v
3 +12v +12v
4 GND GND
5 SMCLK NC
6 SMDAT NC
7 GND NC
8 +3.3v NC
9 NC +3.3v
10 P3V3_AUX +3.3v
11 WAKE# PERST#
Mechanical Key
12 SMB_ALERT_N GND
13 GND REFCLK1+
14 PETp(0) REFCLK1-
15 PETn(0) GND
16 GND PERp(0)
17 NC PERn(0)
18 GND GND
19 PETp(1) RSVD
20 PETn(1) GND
21 GND PERp(1)
22 GND PERn(1)
23 PETp(2) GND
24 PETn(2) GND
25 GND PERp(2)
26 GND PERn(2)
27 PETp(3) GND
28 PETn(3) GND
29 GND PERp(3)
30 PWR_BRK# PERn(3)
31 PRSNT#2-3 GND
32 GND USB1.1+P
33 PETp(4) USB1.1+N
34 PETn(4) GND
35 GND PERp(4)
36 GND PERn(4)
37 PETp(5) GND
38 PETn(5) GND
39 GND PERp(5)
40 GND PERn(5)
41 PETp(6) GND
42 PETn(6) GND
43 GND PERp(6)
44 GND PERn(6)
45 PETp(7) GND
46 PETn(7) GND
47 GND PERp(7)
48 PRSNT#2-4 PERn(7)
49 GND GND
50 PETp(8) RSVD
51 PETn(8) GND
52 GND PERp(8)
53 GND PERn(8)
54 PETp(9) GND
55 PETn(9) GND
56 GND PERp(9)
57 GND PERn(9)
58 PETp(10) GND
59 PETn(10) GND
60 GND PERp(10)
61 GND PERn(10)
62 PETp(11) GND
63 PETn(11) GND
64 GND PERp(11)
65 GND PERn(11)
66 PETp(12) GND
67 PETn(12) GND
68 GND PERp(12)
69 GND PERn(12)
70 PETp(13) GND
71 PETn(13) GND
72 GND PERp(13)
73 GND PERn(13)
74 PETp(14) GND
75 PETn(14) GND
76 GND PERp(14)
77 GND PERn(14)
78 PETp(15) GND
79 PETn(15) GND
80 GND PERp(15)
81 PRSNT#2-6 PERn(15)
82 RSVD GND
10.3 PCIe Cables
10.3.1 Cabling Requirements
The host to PEB cables will be custom mini-SAS HD cables. The cables can be either x4 or x8 cables. Since the connectors are all x4, the x8 cables will actually bundle two x4 connectors together. The cables have a customized pinout to enable routing a full complement of sideband signals. The cables will be available in 1.5M and 2M lengths.
The following table shows the part numbers:
Table 17 Lightning PCIe Cable Solution
Supplier Length Type Part Number
Amphenol 1.5M X8 NEETCT-F402
Amphenol 2M X8 NEETCT-F403
Molex 1.5M X8 1110752201
Molex 2M X8 1110752202
3M 1.5M X8
3M 2M X8
10.3.2 Cable Pinout and Connection
All cables have the same pinout and connections regardless of whether they are x4 or x8 cables.
For Mini-SAS cable connector pin-out, please refer to Section 5.5.2, Table 2.
11 Environmental Requirements and Reliability
11.1 Environmental Requirements
The Honey Badger compute module should support the Lightning system in meeting the following environmental requirements:
• Gaseous contamination: Severity Level G1 per ANSI/ISA 71.04-1985 • Ambient operating temperature range for system without HDD: -5°C to +45°C • Ambient operating temperature range for system with HDD: +5°C to +35°C • Storage temperature range: -40°C to +70°C (long-term storage) • Transportation temperature range: -55°C to +85°C (short-term storage) • Operating and storage relative humidity: 10% to 90% (non-condensing) • Operating altitude with no de-rating to 1,000m (3,300 feet)
11.2 Vibration and Shock
The Honey Badger compute module should support the Lightning system in meeting shock and vibration requirements according to the following IEC specifications: IEC78-2-(*) & IEC721-3-(*) Standards & Levels. The testing requirements are listed in Table 18.
Table 18 Vibration and Shock Requirements
Operating:
• Vibration: 0.4g acceleration, 5 to 500 Hz, 10 sweeps at 1 octave/minute per each of the three axes (one sweep is 5 to 500 to 5 Hz)
• Shock: 6g, half-sine, 11ms, 5 shocks per each of the three axes

Non-Operating:
• Vibration: 1g acceleration, 5 to 500 Hz, 10 sweeps at 1 octave/minute per each of the three axes (one sweep is 5 to 500 to 5 Hz)
• Shock: 12g, half-sine, 11ms, 10 shocks per each of the three axes
Shock and vibration tests need to take place while the Honey Badger is installed in a Lightning chassis and an Open Rack, for the different types of shipping conditions.
11.3 Mean Time Between Failures (MTBF) Requirements
The system shall have a minimum calculated MTBF of 300,000 hours at 95% confidence level at 25°C ambient temperature while running at full load.
The system shall meet a demonstrated MTBF of minimum 300,000 hours at 95% confidence level prior to the mass production ramp.
The system shall have a minimum service life of 3 years (24 hours/day, full load, at 35°C ambient temperature).
11.4 Regulations
The vendor needs to provide CB reports of Honey Badger (both Panther+ card and Baseboard) at the component level. Facebook will need these documents to obtain rack-level CE certification.
12 Labels and Markings
12.1 PCBA Labels and Markings
All Honey Badger PCBAs shall include the following labels on the component side of the boards. The labels shall not be placed in such a way that may cause them to disrupt the functionality or the airflow path of the system.
Table 19 PCBA Label Requirements
Description Type Barcode Required?
Safety markings Silkscreen No
Vendor P/N, S/N, REV (revision would increment for any approved changes) Adhesive label Yes
Vendor logo, name & country of origin Silkscreen No
PCB vendor logo, name Silkscreen No
Facebook P/N Adhesive label Yes
Date code (industry standard: WEEK/YEAR) Adhesive label Yes
DC input ratings Silkscreen No
RoHS compliance Silkscreen No
WEEE symbol: The motherboard will have the crossed-out wheeled bin symbol to indicate that it will be taken back by the manufacturer for recycling at the end of its useful life. This is defined in the European Union Directive 2002/96/EC of January 27, 2003 on Waste Electrical and Electronic Equipment (WEEE) and any subsequent amendments. Silkscreen No
12.2 Chassis Labels and Markings
With Honey Badger, the new Lightning chassis and trays shall carry the following adhesive barcoded labels in visible locations where they can be easily scanned during integration.
Vendor and Facebook will have an agreement for the label locations.
Table 20 Chassis Label Requirements
Description
Vendor P/N, S/N, REV (revision would increment for any approved changes)
Facebook P/N (or OCP customer P/N)
Date code (industry standard: WEEK/YEAR)
The assembly shall be marked “THIS SIDE UP”, “TOP SIDE”, “UP ^” or other approved marking in bright, large characters in a color to be defined by ODM and Facebook (or OCP customer). This printing may be on the PCB itself, or on an installed component such as an air baffle. The label should be clear and easy to read in low light conditions, when viewed from above or below from 2 feet away and at an angle of approximately 60 degrees off horizontal.
13 Prescribed Materials
13.1 Sustainable Materials
Materials and finishes that reduce the life cycle impact of servers should be used where cost and performance are not compromised. This includes the use of non-hexavalent metal finishes, recycled and recyclable base materials and materials made from renewable resources, with associated material certifications.
Facebook identified plastic alternatives, including polypropylene plus natural fiber (PP+NF) compounds, that meet functionality requirements while reducing cradle-to-gate environmental impact when compared to PC/ABS. GreenGranF023T is one acceptable alternate material. JPSECO also offers a PP+NF material that is acceptable; the model number will be available at a later date. It is strongly preferred that such alternatives are identified and used. If the vendor is unable to use this or a similar alternate material, the vendor will provide a list of materials that were considered and an explanation of why they were not successfully incorporated.
13.2 Disallowed Components
The following components shall not be used in the design of the motherboard:
• Components disallowed by the European Union's Restriction of Hazardous Substances Directive (RoHS)
Open Compute Project • Lightning • v0.15
http://opencompute.org 53
• Trimmers and/or potentiometers
• DIP switches
13.3 Capacitors and Inductors
The following limitations shall be applied to the use of capacitors:
• Only aluminum organic polymer capacitors from high-quality manufacturers may be used; they must be rated at 105°C
• All capacitors must have a predicted life of at least 50,000 hours at 45°C inlet air temperature under worst-case conditions
• Tantalum capacitors are forbidden
• SMT ceramic capacitors with a case size larger than 1206 are forbidden (size 1206 is still allowed when installed away from the PCB edge and with an orientation that minimizes the risk of cracking)
• The ceramic material for SMT capacitors must be X7R or better (C0G or NP0 types should be used in critical portions of the design)
Only SMT inductors may be used. The use of through-hole inductors is disallowed.
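The 50,000-hour life floor above can be sanity-checked against a capacitor datasheet using the common "10°C doubling" rule of thumb for polymer capacitor life. This model, the 5,000-hour rated life in the example, and the function name are illustrative assumptions, not part of this specification.

```python
def predicted_life_hours(rated_life_h: float, rated_temp_c: float,
                         operating_temp_c: float) -> float:
    """Rough rule-of-thumb life estimate: life doubles for every 10 degC
    the operating temperature sits below the rated temperature."""
    return rated_life_h * 2 ** ((rated_temp_c - operating_temp_c) / 10)

# A 105 degC-rated polymer capacitor with a 5,000 h rated life,
# operating at the 45 degC inlet air temperature:
life = predicted_life_hours(5000, 105, 45)
print(life)            # 5000 * 2**6 = 320000.0 hours
print(life >= 50000)   # True: comfortably above the 50,000 h floor
```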
13.4 Component De-Rating
For inductors, capacitors, and FETs, de-rating analysis should be based on at least 20% de-rating.
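As a sketch of the 20% rule, a component's allowed operating level is its rating scaled back by the de-rating margin. The helper below is illustrative, not part of this specification.

```python
def derated_limit(rated_value: float, derating: float = 0.20) -> float:
    """Maximum allowed operating value after applying the de-rating margin.
    A 20% de-rating means the part may be run at no more than 80% of rating."""
    return rated_value * (1 - derating)

# A 25 V-rated capacitor de-rated by 20% may see at most 20 V in operation:
print(derated_limit(25.0))  # -> 20.0
```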
14 Appendix A: Interconnect Pin Definitions
Below are the full interconnect pin definitions between the PCIe Expansion Board (PEB) and the drive plane board (PDPB). The connectors are placed such that Pin 1 is towards the right side of the chassis when viewed from the front.
14.1 Pin definitions on PCIe Expansion Board
Listed below are the pin definitions on the PCIe Expansion Board, toward the drive plane board.
(Subject to change)
14.1.1 Pin Definition from PEB to PDPB – I
Table 21 Pin Definition from PEB to PDPB – I
GND A1 B1 GND
PE_100M_CLK2_p CLK A2 B2 CLK PE_100M_CLK3_p
PE_100M_CLK2_n CLK A3 B3 CLK PE_100M_CLK3_n
Drive 1
GND A4 B4 GND
TX A5 B5 PERST SSD10_PERST_N
TX A6 B6 GND
GND A7 B7 RX
GND A8 B8 RX
TX A9 B9 GND
TX A10 B10 GND
GND A11 B11 RX
GND A12 B12 RX
TX A13 B13 GND
TX A14 B14 GND
GND A15 B15 RX
GND A16 B16 RX
TX A17 B17 GND
TX A18 B18 GND
GND A19 B19 RX
ForSSD1 IFDET# A20 B20 RX
ForSSD1 ATNLED A21 B21 GND
ForSSD1 PWREN A22 B22 PWREN ForSSD10
Drive 10
GND A23 B23 ATNLED ForSSD10
TX A24 B24 IFDET# ForSSD10
TX A25 B25 GND
GND A26 B26 RX
GND A27 B27 RX
TX A28 B28 GND
TX A29 B29 GND
GND A30 B30 RX
GND A31 B31 RX
TX A32 B32 GND
TX A33 B33 GND
GND A34 B34 RX
GND A35 B35 RX
TX A36 B36 GND
TX A37 B37 GND
GND A38 B38 RX
Drive 0
GND A39 B39 RX
TX A40 B40 GND
TX A41 B41 GND
GND A42 B42 RX
SSD1_PERST_N PERST A43 B43 RX
SSD0_PERST_N PERST A44 B44 GND
Key Key
PWM_EXP_A Misc A45 B45 GND
PCIe_Mated#_A Misc A46 B46 RX
GND A47 B47 RX
TX A48 B48 GND
TX A49 B49 GND
GND A50 B50 RX
GND A51 B51 RX
TX A52 B52 GND
TX A53 B53 GND
GND A54 B54 RX
GND A55 B55 RX
TX A56 B56 GND
TX A57 B57 IFDET# ForSSD5
GND A58 B58 ATNLED ForSSD5
ForSSD0 PWREN A59 B59 PWREN ForSSD5
ForSSD0 ATNLED A60 B60 GND
ForSSD0 IFDET# A61 B61 RX
Drive 5
GND A62 B62 RX
TX A63 B63 GND
TX A64 B64 GND
GND A65 B65 RX
GND A66 B66 RX
TX A67 B67 GND
TX A68 B68 GND
GND A69 B69 RX
GND A70 B70 RX
TX A71 B71 GND
TX A72 B72 GND
GND A73 B73 RX
GND A74 B74 RX
TX A75 B75 GND
TX A76 B76 CLK PE_100M_CLK1_p
GND A77 B77 CLK PE_100M_CLK1_n
PE_100M_CLK4_p CLK A78 B78 GND
PE_100M_CLK4_n CLK A79 B79 I2C I2C_8_SCL
GND A80 B80 I2C I2C_8_SDA
I2C_B_SCL I2C A81 B81 GND
I2C_B_SDA I2C A82 B82 PERST SSD5_PERST_N
GND A83 B83 Misc PEER_1A&2A_HB
I2C_9_SCL I2C A84 B84 Misc SELF_1A&2A_HB
I2C_9_SDA I2C A85 B85 Misc FCB_HW_REVISION
GND A86 B86 Misc DPB_HW_REVISION
I2C_C_SCL I2C A87 B87 Misc SELF_TRAY_PRESENT
I2C_C_SDA I2C A88 B88 Misc PEER_TRAY_PRESENT
GND A89 B89 GND
GND A90 B90 GND
GND A91 B91 GND
GND A92 B92 GND
GND A93 B93 GND
GND A94 B94 GND
P12V A95 B95 P12V
P12V A96 B96 P12V
P12V A97 B97 P12V
P12V A98 B98 P12V
P12V A99 B99 P12V
P12V A100 B100 P12V
14.1.2 Pin Definition from PEB to PDPB – II
Table 22 Pin Definition from PEB to PDPB – II
Drive 4
GND A1 B1 PERST SSD11_PERST_N
TX A2 B2 PERST SSD2_PERST_N
TX A3 B3 GND
GND A4 B4 RX
GND A5 B5 RX
TX A6 B6 GND
TX A7 B7 GND
GND A8 B8 RX
SSD3_PERST_N PERST A9 B9 RX
ForSSD3 PWREN A10 B10 GND
ForSSD3 ATNLED A11 B11 PERST SSD12_PERST_N
ForSSD3 IFDET# A12 B12 PWREN ForSSD12
Drive 3
GND A13 B13 ATNLED ForSSD12
TX A14 B14 IFDET# ForSSD12
TX A15 B15 GND
GND A16 B16 RX
GND A17 B17 RX
TX A18 B18 GND
TX A19 B19 GND
GND A20 B20 RX
GND A21 B21 RX
TX A22 B22 GND
TX A23 B23 GND
GND A24 B24 RX
GND A25 B25 RX
TX A26 B26 GND
TX A27 B27 GND
GND A28 B28 RX
Drive 7
GND A29 B29 RX
TX A30 B30 GND
TX A31 B31 GND
GND A32 B32 RX
GND A33 B33 RX
TX A34 B34 GND
TX A35 B35 GND
GND A36 B36 RX
GND A37 B37 RX
TX A38 B38 GND
TX A39 B39 GND
GND A40 B40 RX
GND A41 B41 RX
TX A42 B42 GND
TX A43 B43 PERST SSD7_PERST_N
GND A44 B44 PWREN ForSSD7
Key Key
ForSSD7 ATNLED A45 B45 GND
ForSSD7 IFDET# A46 B46 RX
Drive 2
GND A47 B47 RX
TX A48 B48 GND
TX A49 B49 GND
GND A50 B50 RX
GND A51 B51 RX
TX A52 B52 GND
TX A53 B53 GND
GND A54 B54 RX
GND A55 B55 RX
TX A56 B56 GND
TX A57 B57 GND
GND A58 B58 RX
GND A59 B59 RX
TX A60 B60 GND
TX A61 B61 GND
GND A62 B62 RX
ForSSD2 IFDET# A63 B63 RX
ForSSD2 ATNLED A64 B64 GND
ForSSD2 PWREN A65 B65 PWREN ForSSD11
Drive 11
GND A66 B66 ATNLED ForSSD11
TX A67 B67 IFDET# ForSSD11
TX A68 B68 GND
GND A69 B69 RX
GND A70 B70 RX
TX A71 B71 GND
TX A72 B72 GND
GND A73 B73 RX
GND A74 B74 RX
TX A75 B75 GND
TX A76 B76 GND
GND A77 B77 RX
GND A78 B78 RX
TX A79 B79 GND
TX A80 B80 GND
GND A81 B81 RX
Drive 6
GND A82 B82 RX
TX A83 B83 GND
TX A84 B84 GND
GND A85 B85 RX
GND A86 B86 RX
TX A87 B87 GND
TX A88 B88 GND
GND A89 B89 RX
GND A90 B90 RX
TX A91 B91 GND
TX A92 B92 GND
GND A93 B93 RX
GND A94 B94 RX
TX A95 B95 GND
TX A96 B96 GND
GND A97 B97 RX
ForSSD6 IFDET# A98 B98 RX
ForSSD6 ATNLED A99 B99 GND
ForSSD6 PWREN A100 B100 PERST SSD6_PERST_N
14.1.3 Pin Definition from PEB to PDPB – III
Table 23 Pin Definition from PEB to PDPB – III
Drive 14
GND A1 B1 ATNLED ForSSD14
TX A2 B2 PWREN ForSSD14
TX A3 B3 GND
GND A4 B4 RX
GND A5 B5 RX
TX A6 B6 GND
TX A7 B7 GND
GND A8 B8 RX
GND A9 B9 RX
TX A10 B10 GND
TX A11 B11 GND
GND A12 B12 RX
GND A13 B13 RX
TX A14 B14 GND
TX A15 B15 GND
GND A16 B16 RX
ForSSD14 IFDET# A17 B17 RX
ForSSD9 PWREN A18 B18 GND
ForSSD9 ATNLED A19 B19 GND
ForSSD9 IFDET# A20 B20 RX
Drive 13
GND A21 B21 RX
TX A22 B22 GND
TX A23 B23 GND
GND A24 B24 RX
GND A25 B25 RX
TX A26 B26 GND
TX A27 B27 GND
GND A28 B28 RX
GND A29 B29 RX
TX A30 B30 GND
TX A31 B31 GND
GND A32 B32 RX
GND A33 B33 RX
TX A34 B34 GND
TX A35 B35 GND
GND A36 B36 RX
Drive 9
GND A37 B37 RX
TX A38 B38 GND
TX A39 B39 GND
GND A40 B40 RX
GND A41 B41 RX
TX A42 B42 GND
TX A43 B43 PERST SSD14_PERST_N
GND A44 B44 PERST SSD4_PERST_N
Key Key
SSD9_PERST_N PERST A45 B45 GND
SSD13_PERST_N PERST A46 B46 RX
GND A47 B47 RX
TX A48 B48 GND
TX A49 B49 GND
GND A50 B50 RX
GND A51 B51 RX
TX A52 B52 GND
TX A53 B53 IFDET# ForSSD4
GND A54 B54 ATNLED ForSSD4
ForSSD13 PWREN A55 B55 PWREN ForSSD4
ForSSD13 ATNLED A56 B56 GND
ForSSD13 IFDET# A57 B57 RX
GND A58 B58 RX
Drive 8
TX A59 B59 GND
TX A60 B60 GND
GND A61 B61 RX
GND A62 B62 RX
TX A63 B63 GND
TX A64 B64 GND
GND A65 B65 RX
GND A66 B66 RX
TX A67 B67 GND
TX A68 B68 GND
GND A69 B69 RX
GND A70 B70 RX
TX A71 B71 GND
TX A72 B72 GND
GND A73 B73 RX
GND A74 B74 RX
Drive 12
TX A75 B75 GND
TX A76 B76 GND
GND A77 B77 RX
GND A78 B78 RX
TX A79 B79 GND
TX A80 B80 GND
GND A81 B81 RX
GND A82 B82 RX
TX A83 B83 GND
TX A84 B84 GND
GND A85 B85 RX
GND A86 B86 RX
TX A87 B87 GND
TX A88 B88 IFDET# ForSSD8
GND A89 B89 ATNLED ForSSD8
GND A90 B90 PWREN ForSSD8
Drive 4
TX A91 B91 PERST SSD8_PERST_N
TX A92 B92 GND
GND A93 B93 RX
GND A94 B94 RX
TX A95 B95 GND
TX A96 B96 GND
GND A97 B97 RX
Tray_ID Misc A98 B98 RX
SD_LATCH_RELEASE Misc A99 B99 GND
PCIe_Mated#_B Misc A100 B100 Misc I2C_MUX_RESET_PEB
15 Appendix B: Error Code Definition
Listed below is the full definition of the Lightning error codes. Each code is displayed on the Debug Card and stored in the system event log.
Table 24 Error Code Definition
Error Code Description
00 No error
01 Switch A fault
02 Switch B fault
03 I2C bus A crash
04 I2C bus B crash
05 I2C bus C crash
06 I2C bus D crash
07 Reserved
08 Reserved
09 Reserved
10 Reserved
11 Fan 1 front fault
12 Fan 1 rear fault
13 Fan 2 front fault
14 Fan 2 rear fault
15 Fan 3 front fault
16 Fan 3 rear fault
17 Fan 4 front fault
18 Fan 4 rear fault
19 Fan 5 front fault
20 Fan 5 rear fault
21 Fan 6 front fault
22 Fan 6 rear fault
23 Reserved
24 Reserved
25 Reserved
26 Reserved
27 Reserved
28 Reserved
29 Reserved
30 Reserved
31 Drive board Temp Sensor 1 critical
32 Drive board Temp Sensor 2 critical
33 Drive board Temp Sensor 3 critical
34 Drive board Temp Sensor 4 critical
35 Switch Temp Sensor A critical
36 Switch Temp Sensor B critical
37 Ambient Temp Sensor A1 critical
38 Ambient Temp Sensor A2 critical
39 Ambient Temp Sensor B1 critical
40 Ambient Temp Sensor B2 critical
41 BJT Temp Sensor 1 critical
42 BJT Temp Sensor 2 critical
43 Reserved
44 Reserved
45 Switch voltage sensor critical
46 Drive plane board voltage sensor critical
47 Fan Control Board voltage sensor critical
48 Fan Control Board current sensor critical
49 Reserved
50 SSDC0 SMART Temp critical
51 SSDC1 SMART Temp critical
52 SSDC2 SMART Temp critical
53 SSDC3 SMART Temp critical
54 SSDC4 SMART Temp critical
55 SSDC5 SMART Temp critical
56 SSDC6 SMART Temp critical
57 SSDC7 SMART Temp critical
58 SSDC8 SMART Temp critical
59 SSDC9 SMART Temp critical
60 SSDC10 SMART Temp critical
61 SSDC11 SMART Temp critical
62 SSDC12 SMART Temp critical
63 SSDC13 SMART Temp critical
64 SSDC14 SMART Temp critical
65 Switch A Internal Temp critical
66 Switch B Internal Temp critical
67 Reserved
68 Reserved
69 Reserved
70 SSDC0 fault
71 SSDC1 fault
72 SSDC2 fault
73 SSDC3 fault
74 SSDC4 fault
75 SSDC5 fault
76 SSDC6 fault
77 SSDC7 fault
78 SSDC8 fault
79 SSDC9 fault
80 SSDC10 fault
81 SSDC11 fault
82 SSDC12 fault
83 SSDC13 fault
84 SSDC14 fault
85 Reserved
86 Reserved
87 Reserved
88 Reserved
89 Reserved
90 PCIe Uplink 1 Error
91 PCIe Uplink 2 Error
92 Reserved
93 Self Tray Pulled-out
94 Peer Tray Pulled-out
95 Reserved
96 Reserved
97 Reserved
98 Reserved
99 Firmware and hardware mismatch
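For firmware or tooling that consumes these codes, Table 24 maps naturally onto a lookup table. The snippet below transcribes a few representative entries from the table; the dictionary and helper names are illustrative, not defined by this specification.

```python
# A subset of Table 24, transcribed for illustration.
ERROR_CODES = {
    "00": "No error",
    "01": "Switch A fault",
    "02": "Switch B fault",
    "03": "I2C bus A crash",
    "93": "Self Tray Pulled-out",
    "99": "Firmware and hardware mismatch",
}

def decode(code: str) -> str:
    """Look up the two-digit error code shown on the Debug Card."""
    return ERROR_CODES.get(code, "Reserved/unknown")

print(decode("01"))  # -> "Switch A fault"
print(decode("07"))  # -> "Reserved/unknown" (07 is Reserved in Table 24)
```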
16 Revision History
Author | Description | Revision Number | Date
Chris Petersen / Mike Yan | Initial release of OCP version | 0.15 | 2/05/16