
The Future of Enterprise Data Center Networking:

An Analytical Report

submitted to J. J. Ekstrom, Brigham Young University April 10, 2014

Jonathan Williams

Page 2: Enterprise Data Center Networking (with citations)

Table of Contents

The Future of Enterprise Data Center Networking: An Analytical Report
Table of Contents
List of Figures
Abstract
The Future of Enterprise Data Center Networking: An Analytical Report
Physical Topologies (Physical Configurations)
    Common Bus Topology
    Ring Network Topology
    Star Network Topology
    Mesh Network Topology
    Fully Connected Network Topology (subset of mesh network topology)
Routing Methods (Logical Configuration)
    Spanning Tree Protocol (STP)
    Rapid Spanning Tree Protocol (RSTP)
    Multiple Spanning Tree Protocol (MSTP)
Some answers for what STP didn't account for
    Data Center Bridging (DCB)
    Multiple Link Aggregation (MLAG)
Why new methods are needed
    Gigabit links
    Virtual Machines (VMs)
    Big Data
    Cloud Computing
    Video Streaming
Routing Methods (Logical Configuration) take 2
    The Big 2 replacements for STP
        Transparent Interconnection of Lots of Links (TRILL)
        Shortest Path Bridging (SPB)
    The Third option
        Software Defined Networking (SDN)
Conclusion
Bibliography


List of Figures

Figure 1
Figure 2
Figure 3
Figure 4
Figure 5
Figure 6
Figure 7


Abstract

Computer networking has progressed from the early days of connecting two computers together to what it is now: connecting vast numbers of devices together. When multiple bridges are connected, allowing for ever bigger networks, broadcast storms can bring the network to a crawl.

Spanning Tree Protocol was developed to block the redundant links that cause broadcast storms. With pricier technologies, and the motivation to utilize every bit of performance, shutting down links is now too much of a waste of money and resources. New industry standards are emerging that remove the need for shutting down links and can actually use multiple paths to increase performance cumulatively.

The two methods backed by industry standards, and big companies, are TRILL and SPB. TRILL allows smarter bridges, as the backbone, to connect to each other and to older infrastructure; the routes that information takes are determined dynamically. SPB also uses new bridges, but those bridges work out the paths that will be used in advance. SDN is the open-source answer to the Spanning Tree limitations: the brains of the bridges are unneeded, and everything is controlled from a standard server that tells all the bridges what to do and how to do it. This method allows the use of cheaper bridges and a mix of vendors. It is unclear which of the three methods will dominate the future.


The Future of Enterprise Data Center Networking: An Analytical

Report

In our modern computerized world, accessible electronic information is stored in facilities called data centers. These data centers regularly communicate, or network, with other data centers as well as with individual computers. This paper will not concern itself with the communication or serving of information to entities outside the data center. Rather, it concerns the networking found within the data center itself, which occurs between what are known as switches, routers, servers, and storage devices. Furthermore, it is the purpose of this paper to explain the current pattern of networking within a very large, or enterprise, data center, a pattern that has been stable for the past decade, and to suggest some possible future directions that are called for because of new technologies and new demands.

New technologies often solve old problems; however, these technologies often create new problems, which newer technologies must in turn be invented to solve. Sometimes old technologies can be adapted to solve the new problems, and these adaptations may again create problems of their own.

Enterprise Data Center Networking has some issues to overcome, but to see where the answers will come from we need to look at what was done before.

Physical Topologies (Physical Configurations)


When two devices need to communicate with each other, the answer is simple and intuitive: connect them together. When three or more devices need to communicate, the answer is not so intuitive and gets increasingly complex, especially as the number of devices goes up.

To deal with the complexities, several configurations, or topologies, have been used. Cost, robustness, and ease of deployment have all been factors in this evolution. Below, in figure 1, is a simplified diagram of various network layouts found at conceptdraw.com.

fig. 1: http://www.conceptdraw.com/How-To-Guide/picture/Common-network-topologies.png


Common Bus Topology

A common bus topology is defined as a shared common line to which each

device, or computer, is connected. The layout for this topology is to 1) lay one main

cable (or bus), 2) add a terminator (or resistor used to prevent signal reflection) to

each end, and 3) simply tap into the line to connect each additional computer.

Advantages - One advantage of this configuration is that it was the original

system used within computer hardware and therefore conceptually understood by

computer pioneers. Another advantage of the common bus topology is that it uses

less cable length than a typical star layout (see below), and it works well with small

networks.

Disadvantages - When one taps into the cable it is easy to break the center

wire and not see that it was broken. This break will take down the entire network

because terminators are needed at each end of the main cable, and it is difficult to

discover where the break occurred when there are multiple devices or computers.

Another disadvantage is that every computer connected to the bus hears what every other computer says. This also means that only one computer can talk at a time, so collisions (or talking over each other) are common, especially as more computers are added.

Ring Network Topology

A ring network topology is defined as a series of devices, or computers,

connected one to another until the tail end device directly connects to the first one.

This requires two network interface cards (NICs) per device so that the device can

talk to each of its neighbor computers.


Advantages - It is easy to add an additional computer by simply inserting it

between two others (so two more communication links are needed, one going left

and one going right). In this topology, every computer knows whose turn it is to

talk, thus avoiding collisions, because a communication protocol was invented

specifically for the ring topology that acts like a talking stick: no one can talk unless

they have the “stick”. Because of this protocol the ring network topology works

better than the common bus topology when the network is large.

Disadvantages - In the early days of computing a NIC was expensive and

this configuration requires two per device. Another disadvantage was that a break

in the connection causes communication to stop at that break and it is difficult to

know where to start looking for the problem. A third disadvantage, which is shared

with the common bus topology, is that only one computer can talk at a time. A

fourth disadvantage is that the intervening computers can also “hear” the message,

albeit only one side of it.

Star Network Topology

A star network topology is defined as one that has a central communication

device, a hub or switch, that has a direct link to every other device or computer,

forming a star-like configuration. Adding another device means running a cable

from that device to an open port on the switch or hub.

Advantages - This configuration is more robust because if there is a break in

a network line it only affects that one device or computer, and leaves the others

connected and operational. Another advantage is that only one NIC is needed per

device, which was very important when NICs were more expensive than they are

today. When a switch, a smarter central communication device than a hub, is used,

the communication between two computers is not sent to the other computers and


can occur at the same time as other communication between separate computers

with no collisions.

Disadvantages - When a hub, which is basically a common bus topology in a

box, was at the center of a star network topology (as was the norm in the early days

of computer networking) the issues of collisions and overhearing messages were

the same ones as found in a common bus topology. This disadvantage was solved

when the hub was replaced by a switch; but even in a star topology using a switch, multiple pairs of devices can communicate at the same time only as long as none of them is trying to talk to a computer that is already talking to another in the star network. A decided disadvantage is that switches are significantly more

expensive than hubs, although this has not kept the star network topology from

becoming the standard topology. Another disadvantage is that when the central

communication device fails, whether hub or switch, so does all communication

throughout the star network.

Mesh Network Topology

A mesh network topology is defined as one that connects multiple devices

directly to each other while still utilizing paths that go through other devices. This

requires multiple NICs, as many as there are communication links, forming a crisscross pattern that looks like a mesh.

Advantages - One advantage of the mesh topology is that, with multiple connections, devices can talk directly with one another, which is more secure, or communicate through intermediate devices, which lets the network handle more demand and provide more resources. These multiple connections provide more bandwidth (the available capacity to communicate information, much like adding another water main to a house will increase the amount of water that can be brought into it) between devices.

Disadvantages - One of the disadvantages is that for each connection a NIC

is required; or in other words, the more connections the more NICs. Another

disadvantage is that with multiple connections, how does the device know which

route to take if multiple paths are available? It may choose to take the direct route

or the interlinked route or do both. If the same piece of the message is being sent

both ways it is both redundant and wasteful.

Fully Connected Network Topology (subset of mesh network topology)

A fully connected network topology is defined as a mesh network topology

where every device is directly linked to every other device, so that every

communication can be direct, through interlink, or both.

Advantages - A fully connected network topology allows for the best of both worlds: devices can talk directly to one device or to all of them at once. Direct

communication is more secure and faster than going through other devices. If

there is a breakdown there is another path available and it is easy to trace where

the fault is and correct it. Another advantage is that multiple paths allow for more

bandwidth.

Disadvantages - The larger the size of the network the more cable and

hardware is required to connect all the devices. When adding one more device it

requires a new link to every other device, meaning another NIC for each device in

the network. If there are 10 devices in the network, each device requires 9 NICs.

If another device is added to the network, each device requires an additional NIC

and another cable to each device.
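The growth described here follows a simple formula: a fully connected topology of n devices needs n(n-1)/2 cables in total and n-1 NICs per device. A quick sketch (Python, purely for illustration):

```python
def full_mesh_cost(n: int) -> tuple[int, int]:
    """Return (NICs per device, total cables) for a fully
    connected topology of n devices."""
    nics_per_device = n - 1          # one NIC per direct link
    total_cables = n * (n - 1) // 2  # each cable joins one pair of devices
    return nics_per_device, total_cables

# The 10-device example from the text: 9 NICs each, 45 cables total.
print(full_mesh_cost(10))   # (9, 45)
# Adding an 11th device: every device now needs a 10th NIC.
print(full_mesh_cost(11))   # (10, 55)
```

The quadratic growth in cables is why fully connected topologies are rarely built beyond a handful of devices.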


Each of the previous five topologies builds upon the strengths, and tries to address the weaknesses, of its predecessors; and each innovation adds to the cost and complexity of the network topology. The next step in the history of networking was to amplify the number of devices being connected: instead of connecting a computer, another switch could be connected, adding its whole network and creating a topology of multiple networks.

But this configuration brings about a new challenge: how can the networks be connected without the whole thing crashing? Figure 2 below shows a multiple-network arrangement using switches. A switch learns where to direct a message and limits communication to a device to that one link. To learn that route, the switch broadcasts a message out every one of its other links, asking how to reach the destination. The issue with this arrangement is that when the connected networks form a loop, each switch floods the broadcast on to the others, which flood it back again. The message runs in circles consuming switch resources, the switches become unresponsive, and traffic stops because the switches are busy retransmitting the broadcast message. This is termed a broadcast storm.
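Why redundant links are so dangerous can be shown with a toy flooding simulation (the four-switch full-mesh topology is invented purely for illustration). Because plain Ethernet frames carry no hop count, every copy keeps circulating, and in a looped topology the copies multiply:

```python
# Toy broadcast storm: four switches in a redundant full mesh, each
# flooding any broadcast frame it receives out of every port except
# the one it arrived on.
links = {
    "A": ["B", "C", "D"],
    "B": ["A", "C", "D"],
    "C": ["A", "B", "D"],
    "D": ["A", "B", "C"],
}

# Each in-flight frame is (switch holding it, switch it came from).
frames = [("A", None)]        # one broadcast frame enters switch A
history = []
for step in range(5):
    next_frames = []
    for switch, came_from in frames:
        for neighbor in links[switch]:
            if neighbor != came_from:      # flood out all other ports
                next_frames.append((neighbor, switch))
    frames = next_frames
    history.append(len(frames))

print(history)   # copies in flight after each hop: [3, 6, 12, 24, 48]
```

The count doubles at every hop and never dies out, which is exactly the behavior the Spanning Tree Protocol below was invented to prevent.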


fig. 2: There is a great visual demonstration of this at https://www.youtube.com/watch?v=3JgFpAWR1UU

To resolve broadcast storms, and more generally the routing problem (choosing which paths to use), methods were added on top of the physical topologies.

Routing Methods (Logical Configuration)

Spanning Tree Protocol (STP)

To deal with the issue of broadcast storms, Radia Perlman came up with the Spanning Tree Protocol (STP), IEEE 802.1D, which keeps the redundant physical links but programmatically shuts down the links that form loops.[1] If the primary path fails, one of the redundant ones is re-enabled. This solved


broadcast storms but it took 30-50 seconds to figure out the paths. At the time of

its development this was a necessary waiting period, but as technology progressed

this became a stumbling block in networking as that waiting period meant no

network activity could occur on those links, which cost money.

fig. 3: http://www.cisco.com/c/dam/en/us/td/i/000001-100000/85001-90000/87001-88000/87816.ps/_jcr_content/renditions/87816.jpg

Steps of Spanning Tree Algorithm

1. Determine the root bridge for the whole network

2. For all other bridges determine root ports

3. For all bridges, determine which of the bridge ports are designated ports

for their corresponding LANs

● The spanning tree consists of all the root ports and the designated ports.

● These ports are all set to the “forwarding state,” while all other ports are in a “blocked state.”

● Listening, Blocking, and Disabled behave the same (these states do not forward Ethernet frames and they do not learn MAC addresses).
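The steps above can be sketched as a small program. This is a simplification, not the real BPDU exchange: it elects the lowest bridge ID as root and keeps only one shortest path (by hop count) back to the root, blocking everything else. The bridge IDs and links are invented for illustration.

```python
from collections import deque

# Three bridges wired in a triangle (a loop). bridge ID -> neighbors
bridge_links = {
    32768: [4096, 8192],
    4096:  [32768, 8192],
    8192:  [32768, 4096],
}

root = min(bridge_links)              # step 1: lowest bridge ID wins
tree, seen = [], {root}
queue = deque([root])
while queue:                          # steps 2-3: walk out from the root;
    bridge = queue.popleft()          # the first link found back toward
    for neighbor in bridge_links[bridge]:   # the root becomes the root port
        if neighbor not in seen:
            seen.add(neighbor)
            tree.append((bridge, neighbor))  # forwarding link
            queue.append(neighbor)

all_links = {tuple(sorted((a, b))) for a in bridge_links for b in bridge_links[a]}
blocked = all_links - {tuple(sorted(link)) for link in tree}
print("root:", root)          # root: 4096
print("forwarding:", tree)
print("blocked:", blocked)    # the redundant link that closed the loop
```

The blocked link is exactly the one that would have let broadcasts circulate; real STP breaks ties with port costs and bridge priorities rather than simple hop counts.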

Rapid Spanning Tree Protocol (RSTP)

To address the delay in activating links, Perlman introduced Rapid Spanning Tree Protocol (RSTP), IEEE 802.1W, which adds a few new states that speed up recovery, now down to 2-6 seconds. IEEE 802.1D-2004 incorporates RSTP, which made the original STP standard obsolete. This solved the delay issue. But once again technology advanced: when virtual local area networks (VLANs) began to be used, a link being shut down by RSTP might have been the only one providing access to a particular VLAN.[2]

Multiple Spanning Tree Protocol (MSTP)

The answer was again provided by Perlman, with the Multiple Spanning Tree Protocol (MSTP), IEEE 802.1S, later merged into IEEE 802.1Q-2005. This protocol provides a separate spanning tree for each VLAN group and blocks all but one of the possible paths for each VLAN. The issue with this protocol is that links are still being shut down so that only one path exists per VLAN: the very blocking behavior that made STP useful in the first place is now the limitation.

Some answers for what STP didn’t account for

Data Center Bridging (DCB)

Data center bridging applies enhancements to Ethernet, the protocol used in networking. It allows some higher-priority network traffic to be lossless (making sure that it reaches its destination). It also allows specific bandwidth to be allocated to some links, reserving more for higher-priority connections such as those to a SAN (storage area network) or for applications that use FCoE (Fibre Channel over Ethernet).

Multiple Link Aggregation (MLAG)

Multiple link aggregation takes advantage of multiple wires between switches by treating the combination of them as if they were one link. Various vendors (Brocade, Cisco, HP and Juniper) have proprietary versions that do not interoperate with each other, as this is still a recent technology and a clear winner has not been decided. Among these vendors the MLAG features are nearly the same, including the limitation that only two core switches, running MLAG between them, can be connected together as the high-speed backbone of the network.
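How a bundle of wires can act as one link can be illustrated with a hash-based sketch. The port names and hash inputs here are hypothetical; real LAG hashing varies by vendor, but the principle of pinning each conversation to one member link is common:

```python
# Sketch of how an aggregated link spreads traffic: hash each flow's
# endpoints onto one member link so the frames of a flow stay in order.
MEMBER_LINKS = ["port-1", "port-2"]   # the wires bundled as one logical link

def pick_member(src_mac: str, dst_mac: str) -> str:
    """Choose a member link deterministically from the flow's endpoints."""
    index = hash((src_mac, dst_mac)) % len(MEMBER_LINKS)
    return MEMBER_LINKS[index]

# Every frame of the same conversation uses the same wire (no reordering)...
assert pick_member("aa:aa", "bb:bb") == pick_member("aa:aa", "bb:bb")
# ...while different conversations can land on different wires, so the
# bundle's combined bandwidth is actually used.
```

No single flow goes faster than one wire, but the aggregate across many flows does, which is why this is attractive for switch-to-switch backbone links.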

Why new methods are needed

MLAG is good enough for most data centers. The time to reassess is when the ports needed exceed a particular vendor's MLAG capabilities, when more than two core switches are needed, when the network mixes equipment from multiple vendors, when the equipment is not MLAG compliant, or when a mesh network is needed.[3]

Gigabit links

With advancements in NICs, the speeds they can provide have increased; because the fastest are still relatively new, the cost of each is still very high. Using STP would negate the benefits of that expense by shutting down such a connection until the primary link goes down, wasting all that money and all that speed. A method that would allow the network to use multiple gigabit links to add more bandwidth and speed between systems would help justify those costs.


Virtual Machines (VMs)

Virtual machines are being utilized more and more in data centers. They take up less space in the housing racks, consume less energy (per device) than the machines they replace, and unused resources from one can be shared with others. Another benefit of VMs is that they can be migrated, moved from one side of a data center to another or off a failing device and onto a new one, very quickly. That is, if the network can handle the speeds.

Another consideration with VMs is that once they are migrated they may no longer be physically close to the resources they access frequently, which adds delays, because of network speeds, when the VM is trying to perform its primary function.

With the increasing popularity of virtual machines in data centers there is a

need for the illusion that everything be connected to everything else with one hop

(or intermediary device the message must pass through). Limiting the hops reduces

latency, or speed delays, and increases performance. Increased usage of VMs becomes a huge issue if a virtual machine has to be transferred to another physical host and the link is several hops away or over slower equipment.[4] To remedy this, the topology, or layout, of the physical as well as the logical connections must be analyzed.

Big Data

Big data is the term used to describe very large amounts of data that may not seem related. Companies are scouring this data to find connections that may increase sales. When statistical analysis is being done on this data, fast connections to the input data need to be available.


Cloud Computing

Having the ability to rapidly deploy more servers to account for increased traffic does not do much good if the network carrying traffic to them cannot handle the increased demand as well.

Video Streaming

With companies like Netflix, YouTube, and Hulu sending TV and video over the Internet, the bandwidth needed to send out their content could require a lot of resources one day and few the next. Think of a YouTube video that goes viral: some people see it and send it on to their friends, who send it on, and so forth, until this underground method of advertising has made the video extremely popular. One day the data center hosting the file will not need to devote many resources to serving the video; the next day could bring a demand that could take down the network. By being able to dynamically change the way the backbone of the data center network works, more connections going out of the data center could be created, increasing the number of people able to watch at the same time, while providing more bandwidth internally to keep the speeds up. The current infrastructures are big and bulky and take a lot of people a lot of time to reconfigure.

Routing Methods (Logical Configuration) take 2

The Big 2 replacements for STP

Transparent Interconnection of Lots of Links (TRILL)

The first big replacement for STP is called Transparent Interconnection of Lots of Links (TRILL). This too was developed by Radia Perlman. She presented it to the IEEE 802.1 working group, but it was rejected. She then presented it to the IETF


and it became RFC 5556.[5] It is a simple idea: encapsulate messages in a transport header with a hop count (a way to determine when to stop forwarding the message), route the encapsulated messages using IS-IS, and then decapsulate the native message before delivering it.[6] This is accomplished by smarter bridges, called RBridges, and it provides for multi-pathing (splitting up the data and using multiple routes to the destination). See figure 4 below.

fig. 4: http://nanog.org/meetings/nanog50/presentations/Monday/NANOG50.Talk63.NANOG50_TRILL-SPB-Debate-Roisman.pdf
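The encapsulation idea from RFC 5556 can be sketched as follows. The field names and hop-count value are illustrative, not the actual TRILL header layout; the point is that, unlike plain Ethernet, a looping frame now expires:

```python
from dataclasses import dataclass
from typing import Optional

# Sketch of the TRILL idea: wrap the original frame in a transport
# header carrying a hop count, decrement it at each RBridge, and
# discard the frame when the count reaches zero.
@dataclass
class TrillFrame:
    inner_frame: bytes   # the untouched native Ethernet frame
    hop_count: int       # loop protection plain Ethernet lacks

def forward(frame: TrillFrame) -> Optional[TrillFrame]:
    """One RBridge hop: decrement the count, drop the frame at zero."""
    if frame.hop_count <= 1:
        return None                      # loop protection: discard
    return TrillFrame(frame.inner_frame, frame.hop_count - 1)

frame = TrillFrame(b"original ethernet frame", hop_count=3)
hops = 0
while frame is not None:   # even in a forwarding loop, this terminates
    frame = forward(frame)
    hops += 1
print("frame dropped after", hops, "hops")   # frame dropped after 3 hops
```

Decapsulation before final delivery means the destination sees the original, untouched frame, which is what lets RBridges coexist with classic bridges.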

Multiple RBridges can be linked together, can be incrementally deployed, and are compatible with classic bridges. This means a massive initial overhaul of the network equipment is not necessary: one simply adds RBridges as the need, and funding, arise, while continuing to use the existing infrastructure.

As this is still a recent development, no single implementation yet guides the whole industry. Different vendors ship TRILL variants such as Cisco's FabricPath ("enhanced" TRILL) and Brocade's VCS (Virtual Cluster Switching).


Shortest Path Bridging (SPB)

The other heavyweight contender in the Enterprise Data Center Networking battle is Shortest Path Bridging (SPB), the IEEE's answer to TRILL.[7] The IEEE introduced it as 802.1aq after rejecting TRILL. It is a replacement for STP similar to TRILL in that it provides for multipath routing. The biggest differences are that it is built on other IEEE standards, uses tree structures (like STP does), and its routes are symmetric (they use the same paths coming as they do going). In this configuration the switches talk to each other to collectively compute the optimal paths.[8] See figure 5 below.

fig. 5: http://de.wikipedia.org/wiki/Datei:802d1aqECMP16.gif (split apart)
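The "work out the routes in advance" behavior can be sketched with a standard shortest-path computation. The topology, link costs, and tie-breaking rule here are invented for illustration, but every switch running the same deterministic computation over the same link-state data, so that all of them (and both directions) agree on the path, is the core SPB idea:

```python
import heapq

# Shared link-state view of the network: switch -> {neighbor: link cost}
topology = {
    "A": {"B": 1, "C": 1},
    "B": {"A": 1, "C": 1, "D": 1},
    "C": {"A": 1, "B": 1, "D": 1},
    "D": {"B": 1, "C": 1},
}

def shortest_path(src, dst):
    """Dijkstra with a deterministic tie-break so that every switch
    computing this path independently arrives at the same answer."""
    queue = [(0, [src])]
    visited = set()
    while queue:
        cost, path = heapq.heappop(queue)   # cheapest (then lexicographic)
        node = path[-1]
        if node == dst:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, link_cost in sorted(topology[node].items()):
            if neighbor not in visited:
                heapq.heappush(queue, (cost + link_cost, path + [neighbor]))
    return None

print(shortest_path("A", "D"))          # ['A', 'B', 'D']
# Symmetric, as SPB requires: the reverse direction uses the same links.
print(shortest_path("D", "A")[::-1])    # ['A', 'B', 'D']
```

Because the computation is deterministic, no per-destination flooding or learning delay is needed once the topology is known.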

The Third option

Software Defined Networking (SDN)

The last entry in the Enterprise Data Center Networking arena is Software

Defined Networking (SDN). This method allows a user to manage the network, but

the user has to have the smarts to manage it.


SDN works by splitting the brains of a regular switch (see figure 6 below) into a separate control plane and a separate forwarding plane.[9] The control plane is removed from the bridges and run, centralized, on regular servers (see figure 7 below). The network can dynamically react to changes in demand and availability with pre-programmed responses. Policies can be automated, like giving priority to voice over IP (VoIP) traffic. It is hardware independent, meaning a multi-vendor data center is not a problem and cheaper switches can be purchased.[10]

fig. 6: http://www.ixiacom.com/solutions/sdn-openflow-test/


fig. 7: http://www.ixiacom.com/solutions/sdn-openflow-test/
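The split described above can be sketched with two toy classes. The class names, rule format, and actions are all invented for illustration; real SDN deployments use protocols such as OpenFlow between the controller and the switches:

```python
# Toy sketch of the SDN split: switches keep only a dumb match-action
# table, while one central controller program (running on an ordinary
# server) decides what goes into every table.
class DumbSwitch:
    """Forwarding plane only: match a flow, apply the stored action."""
    def __init__(self, name):
        self.name = name
        self.flow_table = {}            # match -> action

    def handle(self, flow):
        # Unknown traffic is punted to the controller for a decision.
        return self.flow_table.get(flow, "send-to-controller")

class Controller:
    """Control plane for the whole network, in one place."""
    def __init__(self, switches):
        self.switches = switches

    def install_policy(self, flow, action):
        for switch in self.switches:    # push the same rule everywhere
            switch.flow_table[flow] = action

switches = [DumbSwitch("edge-1"), DumbSwitch("edge-2")]
controller = Controller(switches)
# Automating a policy, e.g. prioritizing voice traffic network-wide:
controller.install_policy("voip", "forward-high-priority")

print(switches[0].handle("voip"))     # forward-high-priority
print(switches[1].handle("bulk"))     # send-to-controller
```

Because the switches only look up table entries, they can be cheap and vendor-mixed; all the intelligence, and all the reconfiguration work, lives in the controller software.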

Conclusion

In this paper we have reviewed the history of Enterprise Data Center

Networking from its inception to its present operation and challenges. It is

apparent that the technology will continue to develop and will answer the problems

of the present, while at the same time it will bring new challenges to the industry.

TRILL was developed by Radia Perlman, the mind that guided the last 20

years of networking. If she has anticipated and addressed the current needs as well

as she has the previous ones, TRILL may be the future of Enterprise Data Center

Networking.


If the switches in SPB can be taught what to do with multiple links (perhaps

some type of automatic link aggregation), while bringing the switch costs down,

this could be the future of Enterprise Data Center Networking.

If SDN supporters can get enough bridge vendors on board with the project (the goal of SDN is to use inexpensive, mindless hardware, which is why the big companies are backing TRILL and SPB instead), and if the control systems can be made simple to develop and run, then SDN could be to networking what Linux is to operating systems: free, and therefore popular. Becoming as reliable as the products with big-company support behind them is what it will ultimately take to establish SDN as the future of Enterprise Data Center Networking.

At this point it is anybody’s game. The decisions of early adopters could drive the outcome for the whole industry. Which ones will be the 8-track, the Betamax, and the HD DVD? Which one will reign supreme, like the cassette, VHS, and Blu-ray?


Bibliography

Eastlake, Donald, Peter Ashwood-Smith, Srikanth Keesara and Paul Unbehagen. "The Great

Debate: TRILL Versus 802.1aq." Address, NANOG50 from NANOG, Atlanta, October 4, 2010.

Ferro, Greg. "Tech Notes: What is Shortest Path Bridging IEEE 802.1aq - Brief - EtherealMind."

EtherealMind. http://etherealmind.com/tech-notes-what-is-shortest-path-bridging-ieee-802-1aq-brief/ (accessed April 16, 2014).

Fratto, Mike. "When MLAG Is Good Enough - Network Computing." Network Computing.

http://www.networkcomputing.com/data-networking-management/when-mlag-is-good-enough/229500378 (accessed April 15, 2014).

Open Networking Foundation. "Software-Defined Networking (SDN) Definition." - Open

Networking Foundation. https://www.opennetworking.org/sdn-resources/sdn-definition (accessed March 11, 2014).

Perlman, Radia. Interconnections: bridges, routers, switches, and internetworking protocols.

2nd ed. Reading, Mass.: Addison Wesley, 2000.

Perlman, Radia. "RFC 5556 - Transparent Interconnection of Lots of Links (TRILL): Problem

and Applicability Statement." RFC 5556 - Transparent Interconnection of Lots of Links (TRILL): Problem and Applicability Statement. http://tools.ietf.org/html/rfc5556 (accessed April 16, 2014).

Rouse, Margaret. "Transparent Interconnection of Lots of Links (TRILL)." What is ?. http://searchnetworking.techtarget.com/definition/Transparent-Interconnection-of-Lots-of-Links-TRILL (accessed April 16, 2014).

"Software-Defined Networking: The New Norm for Networks." Open Networking Foundation.

https://www.opennetworking.org/sdn-resources/sdn-library/whitepapers (accessed April 16, 2014).

Thomas, Gary. "Software Defined Networking (SDN) for the Non-technical CXO." Logicalis CXO

Unplugged. http://cxounplugged.com/2013/10/software-defined-networking-sdn/ (accessed March 11, 2014).

Thomas, Jajish. "Difference between Spanning Tree Protocol (STP) and Rapid Spanning Tree Protocol (RSTP)." Omnisecu. http://www.omnisecu.com/cisco-certified-network-associate-ccna/difference-between-stp-and-rstp.php (accessed April 15, 2014).

Weissberger, Alan. "Infonetics: SDN to Play Big Role in Data Centers/Enterprise Networking;

IEEE ComSocSCV Jan 8, 2014 Meeting: "Open Networking"" ComSoc Community. http://community.comsoc.org/blogs/alanweissberger/infonetics-sdn-play-big-role-data-centersenterprise-networking-ieee-comsocscv- (accessed March 11, 2014).


[1] Radia Perlman. Interconnections: bridges, routers, switches, and internetworking protocols. 2nd ed. Reading, Mass.: Addison Wesley, 2000.

[2] Jajish Thomas. "Difference between Spanning Tree Protocol (STP) and Rapid Spanning Tree Protocol (RSTP)." Omnisecu. http://www.omnisecu.com/cisco-certified-network-associate-ccna/difference-between-stp-and-rstp.php (accessed April 15, 2014).

[3] Mike Fratto. "When MLAG Is Good Enough." Network Computing. http://www.networkcomputing.com/data-networking-management/when-mlag-is-good-enough/229500378 (accessed April 15, 2014).

[4] Margaret Rouse. "Transparent Interconnection of Lots of Links (TRILL)." http://searchnetworking.techtarget.com/definition/Transparent-Interconnection-of-Lots-of-Links-TRILL (accessed April 16, 2014).

[5] Radia Perlman. "RFC 5556 - Transparent Interconnection of Lots of Links (TRILL): Problem and Applicability Statement." http://tools.ietf.org/html/rfc5556 (accessed April 16, 2014).

[6] Donald Eastlake et al. "The Great Debate: TRILL Versus 802.1aq." Address, NANOG50, Atlanta, October 4, 2010.

[7] Donald Eastlake et al. "The Great Debate: TRILL Versus 802.1aq." Address, NANOG50, Atlanta, October 4, 2010.

[8] Greg Ferro. "Tech Notes: What is Shortest Path Bridging IEEE 802.1aq - Brief." EtherealMind. http://etherealmind.com/tech-notes-what-is-shortest-path-bridging-ieee-802-1aq-brief/ (accessed April 16, 2014).

[9] "Software-Defined Networking: The New Norm for Networks." Open Networking Foundation. https://www.opennetworking.org/sdn-resources/sdn-library/whitepapers (accessed April 16, 2014).

[10] Gary Thomas. "Software Defined Networking (SDN) for the Non-technical CXO." Logicalis CXO Unplugged. http://cxounplugged.com/2013/10/software-defined-networking-sdn/ (accessed March 11, 2014).