KeyStone Training Network Coprocessor (NETCP) Packet Accelerator (PA)


  • Slide 1
  • KeyStone Training Network Coprocessor (NETCP) Packet Accelerator (PA)
  • Slide 2
  • Agenda
      - NETCP Overview
      - PA Overview
      - PA Firmware
      - PA Low Level Driver (LLD)
      - Programming Example
  • Slide 3
  • NETCP Overview
  • Slide 4
  • Network Coprocessor (NETCP)
  • Slide 5
  • What is NETCP? The Network Coprocessor consists of the following modules:
      - Packet DMA (PKTDMA) Controller
      - Security Accelerator (SA)
      - Packet Accelerator (PA)
      - Gigabit Ethernet Switch Subsystem
      - Distributed Interrupt Controller (INTD)
  • Slide 6
  • NETCP Purpose
      - Motivation behind NETCP: use hardware accelerators to do the L2, L3, and L4 processing and encryption that previously had to be done in software.
      - Goals for both PA and SA: offload DSP processing power, improve system integration, allow cost savings at the system level, expand DSP usability within current products, and allow DSP usage in new product areas.
      - Key security applications: IPSec tunnel endpoint (e.g. LTE eNB,...), Secure RTP (SRTP) between gateways, air interface (3GPP, WiMAX) security processing.
  • Slide 7
  • Packet Accelerator: Overview
  • Slide 8
  • PA Features Overview. The Packet Accelerator saves cycles on the host DSP cores.
      - 1 Gbps wire-speed throughput at 1.5 Mpps
      - Option of a single IP address for a multicore device; the multicore device's internal architecture is abstracted
      - UDP (and TCP) checksum and selected CRCs for proprietary header formats; verification on ingress and generation on egress
      - L2 support: Ethernet (Ethertype and VLAN), MPLS
      - L3/L4 support: IPv4/IPv6 and UDP port-based routing; raw Ethernet or IPv4/IPv6 and proprietary UDP-like protocol support
      - QoS support (in conjunction with PKTDMA): per channel/flow to an individual queue towards the host DSPs; traffic shaping
      - Access to the Security Accelerator: IPSec ESP and AH tunnel, SRTP
      - Multicast to multiple queues; for example, an Ethernet broadcast copied and pushed to 1-8 queues
      - IEEE 1588 timestamps and configurable generic timestamps
  • Slide 9
  • PA: Functional Overview
      - L2 Classify Engine: used for matching L2 headers. Example headers: MAC, VLAN, LLC snap.
      - L3 Classify Engine 0: used for matching L3 headers. Example headers: IPv4, IPv6, custom L3. Also uses the Multicore Navigator to match ESP headers and direct packets to the SA.
      - L3 Classify Engine 1: typically used for matching L3 headers inside IPsec tunnels. Example headers: IPv4, IPv6, custom L3.
      - L4 Classify Engine: used for matching L4 headers. Example headers: UDP, TCP, custom L4.
      - Modify/Multi-Route Engines: used for modification, multi-route, and statistics requests. Modification example: generate IP or UDP header checksums. Multi-route example: route a packet to multiple queues.
      - PA Statistics Block: stores statistics for packets processed by the classify engines. Statistics requests are typically handled by the Modify/Multi-route engines.
      - Packet ID Manager: assigns a packet ID to packets.
  • Slide 10
  • SGMII and Ethernet Switch: block diagram of the Gigabit Ethernet Switch Subsystem, showing the 3-port Ethernet switch, the two SGMII ports (SGMII 0/1 with SERDES to an external PHY), the CPMDIO module, switch status interrupts, and the 32-bit TeraNet configuration bus.
  • Slide 11
  • Packet Accelerator: Firmware
  • Slide 12
  • PA: Hardware and Firmware. Before using the PA engines, firmware images must be loaded into internal RAM to enable each PDSP to make lookup and routing decisions.
      - One L2 Classify Engine: PDSP, Pass 1 Lookup Table (LUT1), timer; uses the Classify 1 (c1) firmware image.
      - Two L3 Classify Engines: PDSP, Pass 1 Lookup Table (LUT1), timer; use the Classify 1 (c1) firmware image.
      - One L4 Classify Engine: PDSP, Pass 2 Lookup Table (LUT2), timer; uses the Classify 2 (c2) firmware image.
      - Two Modify/Multi-Route Engines: PDSP, timer; use the Modify (m) firmware image.
  • Slide 13
  • Packet Accelerator: LLD
  • Slide 14
  • PA LLD Overview
      - The PA LLD provides an abstraction layer between the application and the PA. It translates packet headers and routing requirements into configuration information used by the PA firmware.
      - The PA LLD provides the command/response interface for PA configurations: LUT1, LUT2, CRC generation, multi-route.
      - NOTE: The most general configuration must be entered into the PDSPs before any overlapping, more specific configuration.
      - The PA LLD also handles linking together entries in separate lookup tables; for example, linking an entry in an L2 classify lookup table to an entry in an L3 classify lookup table.
      - The PA LLD does not provide the transport layer; this is handled by the Multicore Navigator.
      - API calls are non-blocking.
      - PA LLD reference within the MCSDK: pa/docs/paDocs.chm
  • Slide 15
  • PA LLD Functional Diagram. Benefits: abstracts the operation of the PA from the application; OS-independent; multi-instance for multicore. NOTE: The PA LLD runs on the host DSP and is external to the PA.
  • Slide 16
  • PA LLD API: System (see the initialization sketch below)
      - paReturn_t Pa_getBufferReq (paSizeInfo_t *sizeCfg, int sizes[], int aligns[]): returns the memory requirements for the PA driver.
      - paReturn_t Pa_create (paConfig_t *cfg, void *bases[], Pa_Handle *pHandle): creates the PA driver instance.
      - paReturn_t Pa_close (Pa_Handle handle, void *bases[]): deactivates the PA driver instance.
      - paReturn_t Pa_requestStats (Pa_Handle iHandle, uint16_t doClear, paCmd_t cmd, uint16_t *cmdSize, paCmdReply_t *reply, int *cmdDest): requests sub-system statistics.
      - paSysStats_t *Pa_formatStatsReply (Pa_Handle handle, paCmd_t cmd): formats a stats reply from the PA.
      - paSSstate_t Pa_resetControl (paSSstate_t newState): controls the reset state of the sub-system.
      - paReturn_t Pa_downloadImage (int modId, void *image, int sizeBytes): downloads a PDSP image to a sub-system with the packet processing modules in reset.
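    A minimal initialization sketch built on the system API above. The paSizeInfo_t/paConfig_t field names (nMaxL2, nMaxL3, sizeCfg), the pa_N_BUFS constant, the header path, and the aligned allocator are assumptions; check pa.h in your MCSDK release.

        #include <string.h>
        #include <ti/drv/pa/pa.h>              /* assumed MCSDK header path */

        Pa_Handle paInst;

        int paInit (void)
        {
            paSizeInfo_t sizeCfg;
            paConfig_t   paCfg;
            int          sizes[pa_N_BUFS], aligns[pa_N_BUFS];
            void        *bases[pa_N_BUFS];
            int          i;

            memset (&sizeCfg, 0, sizeof (sizeCfg));
            memset (&paCfg,   0, sizeof (paCfg));
            sizeCfg.nMaxL2 = 8;                /* max L2 (MAC) table entries (assumed field) */
            sizeCfg.nMaxL3 = 8;                /* max L3 (IP) table entries (assumed field)  */

            /* Ask the LLD how much memory it needs, then hand it the buffers */
            if (Pa_getBufferReq (&sizeCfg, sizes, aligns) != pa_OK)
                return -1;

            for (i = 0; i < pa_N_BUFS; i++)
                bases[i] = myAlignedAlloc (aligns[i], sizes[i]);   /* hypothetical aligned allocator */

            paCfg.sizeCfg = &sizeCfg;          /* assumed paConfig_t field */
            return (Pa_create (&paCfg, bases, &paInst) == pa_OK) ? 0 : -1;
        }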
  • Slide 17
  • PA LLD API: Configuration (a Pa_addMac sketch follows)
      - paReturn_t Pa_addMac (Pa_Handle iHandle, paEthInfo_t *ethInfo, paRouteInfo_t *routeInfo, paRouteInfo_t *nextRtFail, paHandleL2L3_t *handle, paCmd_t cmd, uint16_t *cmdSize, paCmdReply_t *reply, int *cmdDest): adds a MAC address to the L2 table.
      - paReturn_t Pa_delHandle (Pa_Handle iHandle, paHandleL2L3_t handle, paCmd_t cmd, uint16_t *cmdSize, paCmdReply_t *reply, int *cmdDest): deletes a MAC or IP handle.
      - paReturn_t Pa_delL4Handle (Pa_Handle iHandle, paHandleL4_t handle, paCmd_t cmd, uint16_t *cmdSize, paCmdReply_t *reply, int *cmdDest): deletes a TCP or UDP handle.
      - paReturn_t Pa_addIp (Pa_Handle iHandle, paIpInfo_t *ipInfo, paHandleL2L3_t prevLink, paRouteInfo_t *routeInfo, paRouteInfo_t *nextRtFail, paHandleL2L3_t *retHandle, paCmd_t cmd, uint16_t *cmdSize, paCmdReply_t *reply, int *cmdDest): adds an IP address to the L3 table.
      - paReturn_t Pa_addPort (Pa_Handle iHandle, uint16_t destPort, paHandleL2L3_t linkHandle, paRouteInfo_t *routeInfo, paHandleL4_t retHandle, paCmd_t cmd, uint16_t *cmdSize, paCmdReply_t *reply, int *cmdDest): adds a destination TCP/UDP port to the L4 table.
      - paReturn_t Pa_forwardResult (Pa_Handle iHandle, void *vresult, paHandle_t *retHandle, int *handleType, int *cmdDest): examines the sub-system's reply to a command.
      - paReturn_t Pa_configRouteErrPacket (Pa_Handle iHandle, int nRoute, int *errorTypes, paRouteInfo_t *eRoutes, paCmd_t cmd, uint16_t *cmdSize, paCmdReply_t *reply, int *cmdDest): configures the routing of packets that match error conditions.
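    A hedged Pa_addMac sketch using the signature above. The paEthInfo_t and paCmdReply_t field names, localMacAddr, CMD_REPLY_QUEUE, and macNfailRoute are illustrative assumptions; macRoute is the routing structure configured in the routing example later in this deck (slide 24).

        paHandleL2L3_t macHandle;
        uint8_t        cmdBuf[128];                 /* command buffer owned by the application */
        uint16_t       cmdSize = sizeof (cmdBuf);
        int            cmdDest;
        paEthInfo_t    ethInfo;
        paCmdReply_t   reply;
        paReturn_t     ret;

        memset (&ethInfo, 0, sizeof (ethInfo));
        memcpy (ethInfo.dst, localMacAddr, 6);      /* match on destination MAC ('dst' is an assumed field name) */

        memset (&reply, 0, sizeof (reply));
        reply.dest   = pa_DEST_HOST;                /* command responses come back to the host */
        reply.queue  = CMD_REPLY_QUEUE;             /* hypothetical reply queue */
        reply.flowId = 0;

        ret = Pa_addMac (paInst, &ethInfo, &macRoute, &macNfailRoute,
                         &macHandle, (paCmd_t) cmdBuf, &cmdSize, &reply, &cmdDest);
        /* On pa_OK, cmdBuf holds a formatted command of cmdSize bytes that must be
         * sent to the PA queue selected by cmdDest (see the Rx configuration steps). */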
  • Slide 18
  • PA LLD API: Custom Configuration (a custom L4 sketch follows)
      - paReturn_t Pa_setCustomL3 (Pa_Handle iHandle, uint16_t customEthertype, uint16_t parseByteOffset, uint8_t byteMasks[pa_NUM_BYTES_CUSTOM_L3], paCmd_t cmd, uint16_t *cmdSize, paCmdReply_t *reply, int *cmdDest): performs the global configuration for level 3 custom lookups.
      - paReturn_t Pa_addCustomL3 (Pa_Handle iHandle, uint8_t match[pa_NUM_BYTES_CUSTOM_L3], paRouteInfo_t *routeInfo, paRouteInfo_t *nextRtFail, paHandleL2L3_t prevLink, paHandleL2L3_t *retHandle, int nextHdrType, uint16_t nextOffset, paCmd_t cmd, uint16_t *cmdSize, paCmdReply_t *reply, int *cmdDest): adds a custom L3 lookup entry to the lookup tables.
      - paReturn_t Pa_setCustomL4 (Pa_Handle iHandle, uint16_t handleLink, uint16_t udpCustomPort, uint16_t byteOffsets[pa_NUM_BYTES_CUSTOM_L4], uint8_t byteMasks[pa_NUM_BYTES_CUSTOM_L4], paCmd_t cmd, uint16_t *cmdSize, paCmdReply_t *reply, int *cmdDest): performs the global configuration for level 4 custom lookups.
      - paReturn_t Pa_addCustomL4 (Pa_Handle iHandle, paHandleL2L3_t prevLink, uint8_t match[pa_NUM_BYTES_CUSTOM_L4], paRouteInfo_t *routeInfo, paHandleL4_t retHandle, paCmd_t cmd, uint16_t *cmdSize, paCmdReply_t *reply, int *cmdDest): adds a custom L4 lookup entry to the lookup tables.
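    A hedged custom L4 sketch using the signatures above; it reuses cmdBuf, cmdSize, reply, and cmdDest from the Pa_addMac sketch. The value of pa_NUM_BYTES_CUSTOM_L4 (assumed 4 here), the custom UDP port, the match pattern, and ipHandle are illustrative assumptions; verify against pa.h and paDocs.chm.

        uint16_t byteOffsets[pa_NUM_BYTES_CUSTOM_L4] = { 0, 1, 2, 3 };             /* bytes after the UDP header */
        uint8_t  byteMasks [pa_NUM_BYTES_CUSTOM_L4] = { 0xFF, 0xFF, 0xFF, 0xFF };
        uint8_t  match     [pa_NUM_BYTES_CUSTOM_L4] = { 0xAB, 0xCD, 0x00, 0x01 };  /* hypothetical pattern */
        paHandleL4_t customL4Handle;

        /* Global configuration: which payload bytes the L4 classify engine extracts */
        Pa_setCustomL4 (paInst, 0, 0x8000 /* hypothetical custom UDP port */,
                        byteOffsets, byteMasks,
                        (paCmd_t) cmdBuf, &cmdSize, &reply, &cmdDest);
        /* ...send cmdBuf to the PA and process the reply, then reset cmdSize (omitted)... */

        /* One lookup entry: packets matching the pattern are routed with portRoute (slide 24) */
        Pa_addCustomL4 (paInst, ipHandle /* hypothetical link to an L3 entry */, match,
                        &portRoute, customL4Handle,
                        (paCmd_t) cmdBuf, &cmdSize, &reply, &cmdDest);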
  • Slide 19
  • PA LLD API: Utility Functions (a Tx route sketch follows)
      - paReturn_t Pa_formatTxRoute (paTxChksum_t *chk0, paTxChksum_t *chk1, paRouteInfo_t *route, void *cmdBuffer, int *cmdSize): formats the commands to add checksums and route a Tx packet.
      - paReturn_t Pa_formatRoutePatch (paRouteInfo_t *route, paPatchInfo_t *patch, void *cmdBuffer, int *cmdSize): formats the commands to route a packet and perform a blind patch.
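    A hedged Pa_formatTxRoute sketch: build the command that asks the PA to compute a UDP checksum and forward the packet to the Ethernet switch. The paTxChksum_t field names and the length/pseudo-header values (udpPayloadLen, pseudoHdrSum) are assumptions to verify against pa.h.

        paTxChksum_t  udpChksum;
        paRouteInfo_t txRoute;
        uint8_t       psCmd[64];
        int           psCmdSize = sizeof (psCmd);

        memset (&udpChksum, 0, sizeof (udpChksum));
        udpChksum.startOffset  = 34;                  /* UDP header offset: 14 (MAC) + 20 (IPv4)     */
        udpChksum.lengthBytes  = 8 + udpPayloadLen;   /* UDP header + payload (hypothetical length)  */
        udpChksum.initialSum   = pseudoHdrSum;        /* pseudo-header checksum (hypothetical)       */
        udpChksum.resultOffset = 6;                   /* checksum field offset within the UDP header */

        memset (&txRoute, 0, sizeof (txRoute));
        txRoute.dest   = pa_DEST_EMAC;                /* send the packet out through the GbE switch  */
        txRoute.flowId = 0;
        txRoute.queue  = 0;

        Pa_formatTxRoute (&udpChksum, NULL, &txRoute, (void *) psCmd, &psCmdSize);
        /* psCmd/psCmdSize are later attached to the Tx descriptor as PS data
         * (see "Send Transmit Packet" below). */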
  • Slide 20
  • LLD HTML Documentation. Show examples from the HTML file: Pa_addMac, Pa_configExceptionRoute.
  • Slide 21
  • Download the Firmware

        Int paDownloadFirmware (void)
        {
            Int i;

            Pa_resetControl (paInst, pa_STATE_RESET);

            /* PDSPs 0-2 use image c1 */
            for (i = 0; i < 3; i++)
                Pa_downloadImage (paInst, i, (Ptr) c1, c1Size);

            /* PDSP 3 uses image c2 */
            Pa_downloadImage (paInst, 3, (Ptr) c2, c2Size);

            /* PDSPs 4-5 use image m */
            for (i = 4; i < 6; i++)
                Pa_downloadImage (paInst, i, (Ptr) m, mSize);

            Pa_resetControl (paInst, pa_STATE_ENABLE);

            return (0);
        }
  • Slide 22
  • PA LLD: Programming Example
  • Slide 23
  • PA LLD: Basic Configuration. This process must be done before configuring PA Rx or Tx.
      - Step 1: Set up memory: Pa_getBufferReq(), Pa_create().
      - Step 2: Initialize the PA and load the firmware: Pa_resetControl(DISABLE), Pa_downloadImage(), Pa_resetControl(ENABLE).
  • Slide 24
  • PA LLD: PA Routing
      - The PA LLD provides a routing structure (paRouteInfo_t) that allows the following parameters to be configured: destination, flow ID, queue, multi-route handle (index), software info 0, software info 1.
      - Several possible destinations: pa_DEST_HOST, pa_DEST_EMAC, pa_DEST_SASS0, pa_DEST_SASS1, pa_DEST_DISCARD, pa_DEST_CONTINUE_PARSE.

        MAC routing example:

            paRouteInfo_t macRoute;

            /* Continue parsing -- try to match an IP handle */
            macRoute.dest         = pa_DEST_CONTINUE_PARSE;
            macRoute.flowId       = 0;
            macRoute.queue        = 0;
            macRoute.mRouteHandle = -1;
            macRoute.swInfo0      = 0;   /* Don't care */
            macRoute.swInfo1      = 0;   /* Don't care */

        Port routing example:

            paRouteInfo_t portRoute;

            /* Send all matches to the specified queue */
            portRoute.dest         = pa_DEST_HOST;
            portRoute.flowId       = 5;
            portRoute.queue        = 900;
            portRoute.mRouteHandle = -1;
            portRoute.swInfo0      = 0;  /* Don't care */
            portRoute.swInfo1      = 0;  /* Don't care */
  • Slide 25
  • PA LLD: Rx Configuration (Configuration Information)
      - Step 1: Call a PA LLD configuration or custom configuration API with routing information: Pa_addMac(), Pa_addIp(), or Pa_addPort().
      - Step 2: Receive the formatted command to be sent to the PA.
      - Step 3: Send the command (see the sketch below): Cppi_setPSData() to mark it as a command packet, QMSS_queuePush() to send the command to the PA.
      - Step 4: The PKTDMA automatically pops the descriptor from the Tx queue and sends the packet to the NETCP. After the PKTDMA finishes the data transfer, the Tx descriptor is returned to the specified packet completion queue.
      - Repeat steps 1-7 to add more MAC addresses, IP addresses, or TCP/UDP ports to the PA LUTs before receiving data packets.
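    A hedged sketch of step 3, reusing cmdBuf and cmdSize from the Pa_addMac sketch. Descriptor pool and queue handles (txFreeQ, paCmdQueue, SIZE_HOST_DESC) are placeholders; the CPPI/QMSS calls are spelled as in the MCSDK CPPI and QMSS LLDs.

        Cppi_HostDesc *txDesc;

        /* low 4 bits of the popped value encode the descriptor size; masking omitted */
        txDesc = (Cppi_HostDesc *) Qmss_queuePop (txFreeQ);

        /* The formatted PA command travels in the descriptor's protocol-specific (PS) data */
        Cppi_setPSData    (Cppi_DescType_HOST, (Cppi_Desc *) txDesc, cmdBuf, cmdSize);
        Cppi_setData      (Cppi_DescType_HOST, (Cppi_Desc *) txDesc, cmdBuf, cmdSize);
        Cppi_setPacketLen (Cppi_DescType_HOST, (Cppi_Desc *) txDesc, cmdSize);

        /* cmdDest (returned by Pa_addMac) selects the PA PDSP command queue,
         * e.g. the L2 classify engine queue 640 in the map on the next slide */
        Qmss_queuePush (paCmdQueue[cmdDest], (void *) txDesc, cmdSize,
                        SIZE_HOST_DESC, Qmss_Location_TAIL);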
  • Slide 26
  • PA LLD: Rx Configuration. Step 5: The PA LLD indicates the appropriate transmit queue index; this example uses the queue returned from Pa_addMac(). NETCP queue map (expressed as constants below): Q640: PDSP0, Q641: PDSP1, Q642: PDSP2, Q643: PDSP3, Q644: PDSP4, Q645: PDSP5, Q646: SA0, Q647: SA1, Q648: GbE SW, Q900: RXQUEUE.
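    For reference, the queue map above expressed as constants; only the numbers come from the diagram, the names and the engine annotations (taken from the receive-path description in the back-up slides) are illustrative.

        /* NETCP transmit queue numbers (illustrative names) */
        #define NETCP_Q_PDSP0   640   /* L2 classify             */
        #define NETCP_Q_PDSP1   641   /* L3 classify 0           */
        #define NETCP_Q_PDSP2   642   /* L3 classify 1           */
        #define NETCP_Q_PDSP3   643   /* L4 classify             */
        #define NETCP_Q_PDSP4   644   /* modify/multi-route 0    */
        #define NETCP_Q_PDSP5   645   /* modify/multi-route 1    */
        #define NETCP_Q_SA0     646
        #define NETCP_Q_SA1     647
        #define NETCP_Q_GBE_SW  648
        #define HOST_RX_QUEUE   900   /* example host Rx queue   */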
  • Slide 27
  • PA LLD: Rx Configuration. Step 6: The PDSP returns the status of adding the entry to the lookup table.
  • Slide 28
  • PA LLD: Rx Configuration (Configuration Information)
      - Step 7: Once the data has been successfully transferred to host memory by the PKTDMA, the receive flow pushes the descriptor pointer onto the queue specified in the paCmdReply_t structure.
      - Step 8: Pop the descriptor: QMSS_queuePop().
      - Step 9: Forward the result to the PA LLD: Pa_forwardResult() (see the sketch below).
      - Repeat steps 1-9 to add more MAC addresses, IP addresses, or TCP/UDP ports to the PA LUTs before receiving data packets.
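    A hedged sketch of steps 8 and 9. cmdReplyQueue corresponds to the queue programmed into paCmdReply_t; descriptor recycling and the QMSS pop-value masking are simplified.

        Cppi_HostDesc *respDesc;
        paHandle_t     retHandle;
        int            handleType, cmdDest;
        uint8_t       *respBuf;
        uint32_t       respLen;

        /* low 4 bits of the popped value encode the descriptor size; masking omitted */
        respDesc = (Cppi_HostDesc *) Qmss_queuePop (cmdReplyQueue);
        if (respDesc != NULL) {
            Cppi_getData (Cppi_DescType_HOST, (Cppi_Desc *) respDesc, &respBuf, &respLen);

            /* Pa_forwardResult examines the PDSP reply and reports whether the
             * LUT entry was added successfully */
            if (Pa_forwardResult (paInst, (void *) respBuf, &retHandle, &handleType, &cmdDest) != pa_OK) {
                /* handle the configuration failure */
            }
            /* return respDesc to its free descriptor queue (omitted) */
        }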
  • Slide 29
  • PA Rx Hardware Processing. Step 1: A packet formatted with MAC, IPv4, and UDP headers arrives from the Gigabit Ethernet Switch Subsystem and is routed over the packet streaming switch to the L2 Classify Engine.
  • Slide 30
  • PA Rx Hardware Processing. Step 2: PDSP0 in the L2 Classify Engine submits the MAC header for lookup (PDSP0 LUT1 matches the MAC entry). Assume the lookup is successful; the packet is then routed to its next destination, assumed here to be L3 Classify Engine 0.
  • Slide 31
  • PA Rx Hardware Processing. Step 3: The packet is routed to L3 Classify Engine 0. PDSP1 submits the IPv4 header for lookup (PDSP1 LUT1 matches the IPv4 entry). Assume the lookup is successful; the packet is then routed to its next destination, assumed here to be the L4 Classify Engine.
  • Slide 32
  • PA Rx Hardware Processing. Step 4: The packet is routed to the L4 Classify Engine. PDSP3 submits the UDP header for lookup (PDSP3 LUT2 matches the UDP entry). Assume the lookup is successful; the packet is then routed to its next destination, assumed here to be the host on queue 900.
  • Slide 33
  • PA Rx Hardware Processing. Step 5: The packet is routed from the L4 Classify Engine, through the packet streaming switch, to the PKTDMA controller. The PKTDMA then transfers the packet from the NETCP to host queue 900.
  • Slide 34
  • PA LLD: Receive Packets from Ethernet (Receive Data Packet)
      - Step 1: Once the data has been successfully transferred to host memory by the PKTDMA, the Rx flow pushes the descriptor pointer onto the queue specified in the paRouteInfo_t structure.
      - Step 2: Pop the descriptor to process the packet: QMSS_queuePop() (see the sketch below).
      - Steps 1-2 are repeated for all Rx data packets.
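    A hedged sketch of the receive loop. rxDataQueue is the host queue configured in the routing structure (queue 900 in this example); processPacket and rxFreeQueue are placeholders.

        Cppi_HostDesc *rxDesc;
        uint8_t       *payload;
        uint32_t       payloadLen;

        /* drain the host Rx queue; masking of the popped value omitted for brevity */
        while ((rxDesc = (Cppi_HostDesc *) Qmss_queuePop (rxDataQueue)) != NULL) {
            Cppi_getData (Cppi_DescType_HOST, (Cppi_Desc *) rxDesc, &payload, &payloadLen);

            processPacket (payload, payloadLen);          /* application handler (hypothetical) */

            /* return the descriptor to its Rx free descriptor queue */
            Qmss_queuePushDesc (rxFreeQueue, (void *) rxDesc);
        }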
  • Slide 35
  • PA LLD: Send Transmit Packet (Transmit Data Packet)
      - Step 1: Call the PA LLD API: Pa_formatTxRoute().
      - Step 2: Receive the formatted Tx command, to be used as PS data.
      - Step 3: Set the Tx route, link the packet, and push it onto the Tx queue (see the sketch below): Cppi_setPSData() to put the Tx route info into PS data, Cppi_setData() to link the packet, QMSS_queuePush() to send it to the PA.
      - Step 4: The PKTDMA automatically pops the descriptor from the Tx queue and sends the packet to the NETCP. After the PKTDMA finishes the data transfer, the Tx descriptor is returned to the specified packet completion queue.
      - Repeat steps 1-4 to send more Tx packets.
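    A hedged sketch of step 3, reusing psCmd and psCmdSize from the Pa_formatTxRoute sketch. txFreeQueue, paTxQueue644, txPktBuf, txPktLen, and SIZE_HOST_DESC are placeholders.

        Cppi_HostDesc *txDesc;

        txDesc = (Cppi_HostDesc *) Qmss_queuePop (txFreeQueue);

        Cppi_setPSData    (Cppi_DescType_HOST, (Cppi_Desc *) txDesc, psCmd, psCmdSize);   /* Tx route command */
        Cppi_setData      (Cppi_DescType_HOST, (Cppi_Desc *) txDesc, txPktBuf, txPktLen); /* link the packet  */
        Cppi_setPacketLen (Cppi_DescType_HOST, (Cppi_Desc *) txDesc, txPktLen);

        /* Push to the Modify/Multi-route engine queue (PDSP4, queue 644 in the map above) */
        Qmss_queuePush (paTxQueue644, (void *) txDesc, txPktLen,
                        SIZE_HOST_DESC, Qmss_Location_TAIL);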
  • Slide 36
  • PA Tx Hardware Processing. Step 1: A packet is placed in the Tx queue for PDSP4. The PKTDMA transfers the packet to Modify/Multi-route Engine 0.
  • Slide 37
  • PA Tx Hardware Processing Example. Step 2: PDSP4 generates the checksums, writes them into the packet, and transmits the packet to the queue for the GbE switch via the PKTDMA.
  • Slide 38
  • PA Tx Hardware Processing Example. Step 3: The PKTDMA transfers the packet from the GbE switch transmit queue, through the PKTDMA controller and the packet streaming switch in the NETCP, to the GbE switch subsystem for transmission over the network.
  • Slide 39
  • For More Information. For more information, refer to the following KeyStone device documents:
      - Network Coprocessor (NETCP) User Guide: http://www.ti.com/lit/SPRUGZ6
      - Packet Accelerator (PA) User Guide: http://www.ti.com/lit/SPRUGS4
      - For questions regarding topics covered in this training, visit the support forums at the TI E2E Community website.
  • Slide 40
  • Back-Up
  • Slide 41
  • Management Data IO The MDIO module manages up to 32 physical layer (PHY) devices connected to the Ethernet Media Access Controller (EMAC). The MDIO module allows almost transparent operation of the MDIO interface with little maintenance from the DSP. The MDIO module enumerates all PHY devices in the system by continuously polling 32 MDIO addresses. Once it detects a PHY device, the MDIO module reads the PHY status register to monitor the PHY link state. The MDIO module stores link change events that can interrupt the CPU. The event storage allows the DSP to poll the link status of the PHY device without continuously performing MDIO module accesses. When the system must access the MDIO module for configuration and negotiation, the MDIO module performs the MDIO read or write operation independent of the DSP. This independent operation allows the DSP to poll for completion or interrupt the CPU once the operation has completed.
  • Slide 42
  • Network Layers. Network layers define a hierarchy of services delineated by functionality. Each layer can use the functionality of the next layer below and offers services to the next layer above. The packet accelerator sub-system examines and routes packets based on fields in up to three layers of Ethernet packets or the L0-L2 header of SRIO packets.
      - Layer 2, the MAC (Media Access Control) layer: the sub-system classifies IEEE 802.3 packets based (optionally) on the destination MAC, source MAC, Ethertype, and VLAN tags.
      - Layer 3, the network layer: IPv4 (Internet Protocol version 4) and IPv6 (Internet Protocol version 6) packets are routed based (optionally) on source IP address, destination IP address, IPv4 protocol, IPv6 next header, IPv4 Type of Service (redefined as IPv4 differentiated services in RFC 2474), IPv6 traffic class, and IPv6 flow label. For IP packets with security services, the SPI (Security Parameters Index) is also included in the classification information. For IP packets with SCTP (Stream Control Transmission Protocol), the SCTP destination port is also included in the classification information.
      - Layer 4, the transport layer: UDP (User Datagram Protocol) and TCP (Transmission Control Protocol) packets are routed based on the destination port. However, GTP-U (GPRS Tunnelling Protocol User Plane) over UDP packets are routed based on their 32-bit TEID (Tunnel ID).
      - SRIO (Serial RapidIO) L0-L2 header information: the sub-system classifies SRIO packets based (optionally) on the source ID, destination ID, transport type, priority, message type, SRIO type 11 mailbox and letter, and SRIO type 9 stream ID and class of service.
  • Slide 43
  • Receive Path
      - All received packets from Ethernet and/or SRIO are routed to PDSP0.
      - PDSP0 does the L0-L2 (MAC/SRIO) lookup using LUT1-0. If the packet is IP, it is forwarded to PDSP1.
      - PDSP1 does the outer IP or custom LUT1 lookup using LUT1-1.
      - PDSP2 does any subsequent IP or custom LUT1 lookup using LUT1-2.
      - PDSP3 does all TCP/UDP and custom LUT2 lookups using LUT2.
      - PDSP4 is used for post-lookup processes such as checksum/CRC result verification.
      - PDSP4/5 can be used for pre-transmission operations such as transmit checksum generation.
  • Slide 44
  • Receive Path (Continued). With the exception of some initial setup functions, the module does not communicate directly with the sub-system. The output of the module is a formatted data block along with a destination address. The module user must send the formatted data to the sub-system. This is typically done by linking the created data block to a host packet descriptor and then using the addressing information to send the created packet to the sub-system through the queue manager and PKTDMA.
  • Slide 45
  • Tx Path. For packets to the network, the sub-system provides one's complement checksum or CRC generation over a range provided by the module user. The range is not determined by the sub-system by parsing the to-network packet, since it is assumed that the creator of the packet already knows the start offset, length, initial checksum value, etc.
  • Slide 46
  • Table Lookups. The low level driver maintains two tables of Layer 2 and Layer 3 configuration information. The memory for these tables is provided by the module user at run time. The module maintains ownership of these tables, and the module user must not write to the memory once it is provided to the module. In multicore devices the module can be used in two different configurations:
      - In independent core mode, each core in a device has a unique set of tables. Although it is legal for any core to reference handles from other cores, this is not typically done. In this case cache coherency and cross-core semaphores are not implemented by the module user.
      - In common core mode, there is only one set of tables, shared by all cores. Each core that uses the module must initialize it, but each core provides the exact same buffers to the module. The first core to initialize the module also initializes the tables; the other cores initialize their internal state but do not initialize the tables. The module and table contents are accessed by the other cores through the globally accessible module instance handle and L1/2 entry handles. In this mode cache coherency and cross-core semaphores must be implemented by the module user to ensure the integrity of the tables.