
Software-Defined Networking

in the Datacenter

Introduction

Facebook Tech Director Reveals Open Networking Plan

Extreme Networks Rolls Out High-End Switch, SDN Framework

Google Lifts Veil On “Andromeda” Virtual Networking

Lucera Opens Door On High Frequency Trading Cloud

Regional Datacenter Bracing for 40GbE Demand

Blue Cross Blue Shield Streamlines Networking, Virtualization

OpenDaylight Lifts the Veil on ‘Hydrogen’ SDN Software Stack

Cisco Counters OpenFlow SDN with OpFlex, Updates Nexus Switches

Dell Bares Switch Metal to Other Network Operating Systems

Extreme Networks Takes the Open Road to SDN

Extreme Networks - Taming the Networking Tiger with Open SDN

Report fee and download underwritten by: www.extremenetworks.com

Welcome to SpotlightON Software-Defined Networking in the Datacenter

Automation comes to every industry, and that includes, however belatedly or ironically, the IT sector. Over the past several decades, much work has been done to virtualize servers and automate their provisioning and management, and much has also been done to bring similar virtualization and management capabilities to storage. Now, it is time for the network to be made ethereal, breaking it out of the confines of physical devices and human-driven command line interfaces.

Datacenters need automated network configuration, along with intelligence that will allow switches and routers (both physical devices and virtual ones running on virtual machines inside of hypervisors) to be reconfigured on the fly, reacting to current network conditions. This, at its heart, is what software-defined networking is all about.

This is also a tall order, and as much of a challenge as server virtualization seemed to be more than a decade and a half ago, when it moved beyond mainframes and went mainstream. Every maker of switches and routers has its own management tools, its own business to protect, and its own ideas about how best to do SDN. As is the case with any big idea in IT, there are arguments over who is more or less open and what is the right technical approach to making networks more malleable. We’re still in the early days of SDN, and it is important to keep an open mind in making a choice about what direction to take. One thing IT shops know for sure is that they can’t keep doing things the same way. They want Layers 2 through 7 in the network stack to be malleable and to be programmatically changed as conditions in applications, on internal networks, and from the Internet change.

SDN is already happening at many large organizations, hyperscale datacenter operators, and cloud builders. There is more than one way to virtualize and automate the network, just as there is more than one way to virtualize a server and automate the provisioning and management of virtual machines. The trick will be to find the approach best suited to a particular customer, and that is tough given that SDN technology is still new. As always, those that can’t wait wade in first.

Learn more about some of the SDN developments already taking place in this SpotlightON Software-Defined Networking in the Datacenter, sponsored by Extreme Networks.

Timothy Prickett Morgan, Editor-in-Chief, EnterpriseTech

Facebook Tech Director Reveals Open Networking Plan (Originally published October 2013 in EnterpriseTech)

Facebook is a classic example of an exponentially growing business with extreme scale IT needs that cannot do things in the data center the way traditional large enterprises do. It would go broke if it did. Or more precisely, it might have never gotten outside of Mark Zuckerberg’s Harvard dorm room.

Facebook’s software and the infrastructure on which it runs are literally the business. The company has been relentless in hacking all parts of its stack, from PHP compilers down to servers and out to data center designs. And as EnterpriseTech reports elsewhere, the social media giant has shared what it has learned about building vanity-free servers and advanced data centers through the Open Compute Project, and it has also open sourced its enhancements to PHP. Now, Facebook wants to pry open the network and hack it to make it better.

Through the Open Compute Networking project established earlier this year, Facebook is working with all of the major switch and networking chip makers as well as startups in the software-defined networking arena to bring switching into the modern era, as Najam Ahmad, director of technical operations at Facebook, explained it to EnterpriseTech this week at the company’s office in New York. Facebook wants switches to be built differently so networks are easier to build and manage at the scale its business requires.

Timothy Prickett Morgan: I suspect that this Open Compute Networking effort is about more than having vanity-free switching in Facebook’s data centers. What is the problem you are trying to address with this project?

Najam Ahmad: If you look at networks today, the fundamental building construct we have is an appliance. It doesn’t matter whose appliance it is, you get hardware and a set of features that are vertically integrated by a vendor. So you pick the speeds and feeds and a set of protocols and you get a command line interface to manage it. If the protocols do what you need, you are good. But if you need a change in the protocol, then you get into a little bit of a fix. Do you go to IETF or IEEE to get a protocol spec modified? Or do you work with the vendor and work with their product managers and maybe six months or a year later you can get that feature that you want.

TPM: Or, you have to buy a completely different switch because vendors have increasingly broad feature sets as they move up their product lines.

Najam Ahmad: I don’t want to pick on any single vendor, but that is how the whole industry is. To keep track of all of the features and protocol sets in a product line is a problem, but you also get into that rip-and-replace conversation a lot. Any time you have to do physical work, it is expensive and it takes a lot more time. It is a simple physics problem at that point.

What we want to do is bring networking to the modern age. I will use a mobile handset as an example. Ten or fifteen years ago, you used to buy a phone from Nokia or Motorola, and you had to pick between the features they had. And when you picked a phone, that was it, that was the phone you had. If you wanted another feature, you had to buy a different phone. That whole ecosystem has changed. Now we have a bunch of hardware suppliers – HTC, Samsung, LG, Apple – and you have operating systems on top – several versions of Android, iOS, Windows – and then you have a bunch of apps on top of that. With smartphones, if you don’t like an app, you get a new one. And if you don’t like any of the apps, you can write your own.

That is where the network needs to go. To do that, what we really have to do is disaggregate the appliance into its components, which at the top level are hardware and software.

TPM: Hence, the Open Compute Networking project.

Najam Ahmad: The specification that we are working on is essentially a switch that behaves like compute. It starts up, it has a BIOS environment to do its diagnostics and testing, and then it will look for an executable and go find an operating system. You point it to an operating system and that tells it how it will behave and what it is going to run.

In that model, you can run traditional network operating systems, or you can run Linux-style implementations, you can run OpenFlow if you want. And on top of that, you can build your protocol sets and applications.

TPM: Does the Open Compute Network project assume that you will have custom ASICs as we have in all of these switches today, or will it be based on an X86 engine with a bunch of ports hanging off it?

Najam Ahmad: Certain things are specialized. In your phone, for example, you have a GPS function. You can’t expect the general-purpose ARM chip to do GPS. Switching is like that. When you want 32 ports running at 40 Gb/sec, and you need MPLS [Multiprotocol Label Switching] as well, there is no way for a general purpose X86 chip to keep up.

The idea is to use commodity, network-specific chipsets, but to make them open. All of the network semiconductor guys have attended Networking project meetings. We need those ASICs, and we can marry them to X86 processors to do network functions on top of that. The spec is not tied to a particular ASIC or operating system.

That is how OCP wants to do switches and that ties into how Facebook wants to do networks.

TPM: What is your plan for adopting Open Compute networking gear? How long is this going to take?

Najam Ahmad: I don’t write a spec without the idea of getting it into production. I like to use the car analogy. Every major car manufacturer has a concept vehicle, which has all of the bells and whistles they can think of in it. . . .

TPM: And nobody ever gets to buy it. . . .

Najam Ahmad: True. [Laughter] But at the same time, a bunch of the features on the concept car make it into production. We want to shoot for all of the things we want in open switches, but you can’t boil the ocean. We want hardware disaggregated from software, and we want to deploy that hardware in production. We may do it with a traditional network operating system, or we may write our own. It depends on the pieces and the timing of when we want to go into production.

TPM: I know the spec is not even done yet, but when do you expect to deploy open switches?

Najam Ahmad: We haven’t officially set a date. But what I can tell you is that we are far enough along that at the Open Compute Project workshop hosted by Goldman Sachs this week in New York, we had a box that we booted up and demoed passing traffic. It is a contribution from one of the big guys, and I am not allowed to say who because we are still working through the contracts for them to contribute their IP.

It is still a work in progress. But we passed packets through it and we are further along than I expected at this stage. We still have a lot of work to do.

TPM: All of this open switch work is separate from the silicon photonics work that OCP and Intel were showing off earlier this year at the Open Compute Summit. How does this all fit together?

Najam Ahmad: In some sense it is orthogonal, and in some sense it is not. The plan for the open networking project is to get that disaggregation going in the switch. When we prove that model works, we can take any box in the data center and show that you can do this.

Then you say the rack is now different, it is disaggregated. What do we need in that? That is where the silicon photonics comes in. That is a little further away because it is even more of a concept to disaggregate a server into its components.

TPM: Some people take Intel’s talk about Rack Scale and the related work with OCP in the area of silicon photonics and they might walk away with the impression that this is right around the corner. Others point out that Intel has never done 100 Gb/sec networking before and that this might take a bit more time than they are expecting.

And I think it will take a particularly long time to disaggregate memory from processors in this exploded server, but everyone seems to agree that it needs to be done. Sometimes people have to buy a heavy server just because they need more memory and they don’t need the compute, and conversely, if they need a lot of compute but not much memory, you sit around with a bigger physical box with a lot of empty memory slots.

Najam Ahmad: That’s a really hard thing to do, but we are working on it. All of the components in the server have very different lifecycles, and they need to be replaced on their own lifecycle, not on the lifecycle of some box. There are smarter people than me working on this problem.

TPM: Let’s take a step back and talk about the networks at Facebook. Many of us know how Facebook has evolved from using plain-vanilla servers to custom gear made by Dell’s Data Center Solutions division to creating its own servers and open sourcing them through the OCP. But what have you done with switches?

Najam Ahmad: We have all sorts of flavors. We have a bunch of OEM stuff in there, and we are experimenting with whiteboxes as well.

Our data centers have tens of thousands of machines in them, and we have been building our networks to scale that far. For the past year and a half, we have been deploying 10 Gb/sec ports on the servers, and the network fabric is all 10 Gb/sec. We have also taken Layer 2 completely out of the environment. We are an all-Layer 3 network now. We don’t do traditional VLANs, spanning tree – things of that nature – because they don’t scale to our size. So we run BGP [Border Gateway Protocol] all the way to the top of the rack. A lot of the big guys are doing that now – Microsoft, Google.
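To make the all-Layer 3 design a little more concrete: running BGP to every top-of-rack switch implies that per-rack configuration is generated by software rather than typed into a command line. Facebook has not published its configurations, so the Quagga/FRR-style syntax, the ASN numbering, and the addresses in this Python sketch are invented purely for illustration.

```python
# Hypothetical illustration only; Facebook has not published its configs.
# Render a minimal Quagga/FRR-style BGP stanza for one top-of-rack switch,
# the sort of per-rack automation a BGP-to-the-rack fabric implies.
RACK_TEMPLATE = """\
router bgp {rack_asn}
 bgp router-id {router_id}
 neighbor {spine_ip} remote-as {spine_asn}
 neighbor {spine_ip} description uplink-to-spine
 redistribute connected
"""

def tor_config(rack_number: int) -> str:
    """Derive a rack's BGP stanza from its rack number (invented numbering scheme)."""
    return RACK_TEMPLATE.format(
        rack_asn=65000 + rack_number,        # one private ASN per rack
        router_id=f"10.0.{rack_number}.1",   # loopback address of the ToR switch
        spine_ip=f"10.255.{rack_number}.0",  # point-to-point link toward the spine
        spine_asn=64512,                     # shared ASN for the spine layer
    )

if __name__ == "__main__":
    print(tor_config(rack_number=42))
```

The point is less the particular syntax than that the configuration is derived programmatically from a rack number, which is what makes tens of thousands of machines manageable.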

The core is built in two layers, and we have an optical network that we are building underneath. It connects to all of our data centers and connects to all of our point of presence centers that we serve traffic out of. It is not hundreds or thousands of POPs, but tens to get closer to the end user and reduce the latency.

TPM: So you have 10 Gb/sec in the servers, and you have 10 Gb/sec in Layer 3 switching. What does the backbone run at?

Najam Ahmad: We have 40 Gb/sec in the data center core and in the backbone it is 100 Gb/sec. We have a mixed model there. In some cases we are still leasing, but we are buying dark fiber and moving away from that.

TPM: What about InfiniBand? Does Facebook have any InfiniBand in its networks?

Najam Ahmad: When Ethernet did not have its current capabilities, InfiniBand was too expensive and cumbersome, too. Ethernet has kept growing and improving, and Ethernet is good enough for us. At scale, we can do the same things with Ethernet and it doesn’t make sense for us to change it. At a network interface level, we are not busting out of 10 Gb/sec Ethernet. There are some applications, like Hadoop, where we are pushing it.

Latency is always important, but we don’t try to shave off microseconds. Milliseconds we try to shave off, but not microseconds. We are not after the ultra-low-latency stuff, where InfiniBand can help.

TPM: So now let’s talk about what software-defined networking means to Facebook. You don’t have virtual machines, you don’t use virtual switches. All of your applications run on bare metal. You have different networking issues compared to a typical enterprise with a lot of virtualized applications.

Najam Ahmad: The central problem we have is agility. The pace at which our applications, and other things like storage or Hadoop, move is much faster than the network can move, primarily because of the environment that I described of an appliance with a very closed system and the only interface you have into the system is a command line interface.

TPM: You can’t hack it, so you can’t break it, and then fix it.

Najam Ahmad: You can’t hack it at all, which is what we do and what we do best.

We like the work to be done by robots, and the people to build the robots, which is essentially software. So we want to build software. We don’t want to have people sitting in front of large monitors watching alerts, and clicking and fixing things. If we see a problem a couple of times, we automate it. If you don’t have the hooks in these boxes to do that – if someone has to log into a box, do a bunch of commands, and reboot it – we are going to need an army of people at our scale. And it will be slow. And it will cause outages. Software does things much faster and more reliably.

TPM: How do you manage your network now? Did you create your own tools because no one has all this magical SDN stuff?

Najam Ahmad: Yeah, we created our own tools. We still manage the network through software, but it is much harder to do than it needs to be. Let me give you a concrete example.

A few months ago, we were seeing some issues with Memcached in our environment, with transactions taking longer and there being a lot of retransmissions. So we were trying to debug it to find out what the heck was going on. And we just couldn’t find it. And this went on for a couple of weeks. And then our switch vendor and one of its developers came out to help us troubleshoot. And the developer said, “Wait, hang on, let me log into the ASIC.” This is a custom ASIC. There was a hidden command, and he could see that the ASIC was dropping packets. And we had just wasted three weeks looking for where the packets were going. They had a secret command, and the developer knew it, but the support staff didn’t and it wasn’t documented.

We figured this out at 5:30 in the evening, and we had to log into every damned ASIC on hundreds of boxes – and most of them have multiple ASICs per box – and you run this command, it throws out text, and you screen-scrape it, you get the relevant piece of data out, and then you push it into our automated systems, and the next morning there were alerts everywhere. We had packet loss everywhere. And we had no clue.

And you just shake your head and ask, “How did I get here?” This is not going to work, this is not going to scale.

This is the kind of thing I want to get rid of. We want complete access to what is going on, and I don’t want to fix things this way. We want to run agents on the boxes that are doing a lot of health checking, aggregate their data, and send it off to an alert management system. SNMP is dead. •
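The automation Ahmad describes, an agent that runs a diagnostic, screen-scrapes the text it emits, and forwards anything alarming to an alert system, might look roughly like the Python sketch below. The diagnostic command, its output format, and the alert endpoint are all invented for the example; real switch diagnostics and alerting pipelines differ.

```python
# Rough sketch of a per-box health-check agent of the kind described above.
# The diagnostic command, output format, and alert endpoint are invented.
import json
import re
import subprocess
import urllib.request

DROP_PATTERN = re.compile(r"asic(\d+)\s+drops\s+(\d+)")

def check_asic_drops(threshold=0):
    """Run the (hypothetical) diagnostic command and pull out per-ASIC drop counts."""
    try:
        result = subprocess.run(["show-asic-counters"],   # placeholder vendor command
                                capture_output=True, text=True, check=False)
    except FileNotFoundError:
        return []   # no such diagnostic on this machine; nothing to report
    alerts = []
    for asic, drops in DROP_PATTERN.findall(result.stdout):
        if int(drops) > threshold:
            alerts.append({"asic": int(asic), "drops": int(drops)})
    return alerts

def send_alerts(alerts, endpoint="http://alerts.example/api/v1/events"):
    """Push anything found to a (hypothetical) alert-management endpoint."""
    if not alerts:
        return
    req = urllib.request.Request(endpoint,
                                 data=json.dumps(alerts).encode(),
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)

if __name__ == "__main__":
    send_alerts(check_asic_drops())
```

Scaled out across hundreds of boxes and wired into an alert manager, this is the “robots” model Ahmad is after, instead of people watching monitors.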

Extreme Networks Rolls Out High-End Switch, SDN Framework (Originally published April 2014 in EnterpriseTech)

Upstart Extreme Networks rolled out a new software-defined network architecture and a high-end switch during Interop’s 2014 conference in Las Vegas.

San Jose-based Extreme Networks claimed its BlackDiamond X8 offers “cloud-scale switching” using a new four-port 100 Gb/sec line card for the modular switch. Overall switching capacity for the X8 modular switch ranges from 10.24 Tb/sec to 20.48 Tb/sec, allowing customers to scale up their networks as traffic grows. The new switch blade for the X8 has four ports running at 100 Gb/sec and uses CFP2 connectors, just like a number of other 100 Gb/sec switch modules announced this week. Using SR10 transceivers, switches can be up to 100 meters apart from devices, and using LR4 transceivers, distances can span up to 10 kilometers. Extreme Networks says it will support ER4 transceivers when they become available, allowing distances of up to 40 kilometers between the switches and the devices they connect to. The new switch typifies a key trend in the networking sector toward scalable components that enable a highly virtualized network infrastructure.

Another trend is the adoption of software-defined network (SDN) architectures that help optimize network performance through increased virtualization. Extreme Networks said its X8 switch supports standard OpenFlow to work with third party or open source SDN controllers.
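For readers who have not seen what working with an open source SDN controller looks like in practice, here is the canonical skeleton of an application for the open-source Ryu controller, which installs a table-miss rule on any switch that connects. OpenFlow 1.3 is assumed here; the article does not say which OpenFlow versions the X8 supports, so treat this strictly as a generic illustration rather than an Extreme-specific recipe.

```python
# Minimal Ryu controller app: when an OpenFlow 1.3 switch connects, install a
# lowest-priority "table miss" rule that punts unmatched packets to the controller.
# Generic illustration; not specific to the BlackDiamond X8.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

class TableMissToController(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def on_switch_connect(self, ev):
        dp = ev.msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser
        match = parser.OFPMatch()            # match everything
        actions = [parser.OFPActionOutput(ofp.OFPP_CONTROLLER, ofp.OFPCML_NO_BUFFER)]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=0,
                                      match=match, instructions=inst))
```

Run it under ryu-manager and point the switch’s OpenFlow client at the controller’s address; the same app works against a software switch such as Open vSwitch for testing.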

The company also touts the X8 as being capable of leveraging hybrid networks to handle simultaneous SDN and non-SDN deployments. The result is said to be faster connections between datacenters, through the network core, and out to the access edge.

The X8 switch also has Intel’s dual-core 2 GHz Core i7 processor in it, allowing the switch to run software that performs other network functions alongside the ExtremeXOS network operating system, the company said.

Meanwhile, Extreme Networks’ latest SDN architecture seeks to define and control networks while distributing network smarts throughout the network infrastructure. On the wireless side, this network intelligence is designed to provide visibility and awareness of mobile devices and applications attached to the network. Features include real-time Wi-Fi reporting, location, and diagnostics. The company also claims its SDN framework speeds the provisioning and management of new network services.

This week’s hardware and software rollouts seek to bolster Extreme Networks’ strategy of combining high-end wired and wireless hardware operating within an intelligent software-defined architecture. Together, these network components could be used either as building blocks for intelligent datacenters or to allow mobile users to connect using the device of their choice.

Companies like Extreme Networks are also hoping to compete with market leaders like Cisco Systems by targeting the exploding mobile device market. For example, market researcher IDC estimates that global smartphone sales topped 1 billion in 2013.

Meanwhile, the BYOD (bring your own device) movement may be creating new opportunities as network managers are being forced to cope with complex user and device provisioning. One result has been the emergence of enterprise mobile device management systems required when employees are away from corporate networks.

To that end, Extreme Networks said its SDN architecture seeks to deliver unified management and analytics software running on top of, for example, 100 Gb/sec Ethernet or 802.11ac wireless networks.

Extreme Networks said its NetSight 6.0 management software along with OneFabric SDN Connect 2.0 software both ship in April. Also shipping in April are the 100 Gb/sec BlackDiamond X8 blade and 3800 series 802.11ac wireless access points. •


Google Lifts Veil On “Andromeda” Virtual Networking (Originally published April 2014 in HPCwire)

The biggest public cloud providers have adjacent businesses that actually fund the development of the infrastructure that starts out in their own operations and eventually makes its way into their public cloud. So it is with Google’s “Andromeda” software-defined networking stack, which the company was showing off at Interop.

The Andromeda network virtualization stack is part of the Google network, which includes a vast content distribution network that spans the globe as well as an OpenFlow-derived wide area network that has similarly been virtualized and which also spans the globe. Amin Vahdat, distinguished engineer and technical lead for networking at Google, revealed in a blog post that, after being used internally on Google’s homegrown switches, servers, and storage, the Andromeda SDN stack has been adopted as the underpinning of the Cloud Platform public cloud services such as Compute Engine (infrastructure) and App Engine (platform). The news was that Andromeda was the default networking stack in two of the several Cloud Platform regions – us-central1-b and europe-west1-a – and that the company was working to roll it out into the other regions as fast as was practical, to make its own life easier and to give customers higher-performing networking for their cloudy infrastructure.

A month ago, at the Open Network Summit, Vahdat gave the keynote address, and he talked quite a bit more about Andromeda than in the blog post and explained how it related to the global network at Google. But before all that, he also explained why it was important to have virtualized, high-bandwidth, low-latency networking available for workloads, and cautioned everyone that the introduction of new technologies that push scale up or out always causes issues. Even at Google, which has no shortage of uber-smart techies.

“In the cloud, you can gain access to the latest hardware infrastructure, at scale and with the expertise to make it work,” explained Vahdat. “So I think this is key. Any time you go from 1 Gb/sec to 10 Gb/sec to 40 Gb/sec to 100 Gb/sec, or from disk to flash to phase change memory or whatever your favorite next generation storage infrastructure is, things are going to break. Basically, there are going to be assumptions built throughout your infrastructure that will make it impossible to leverage that new technology. So, you will put in your 40 Gb/sec network, and nothing goes any faster. And in fact, maybe it goes slower. You put in your flash infrastructure, and nothing goes any faster. It takes a huge amount of work to leverage this new technology, and in the cloud you have the opportunity to do it once and reap the benefits across many, many services and customers.”

In short, Vahdat says that companies will be attracted to cloud computing because of the on-demand access to compute and storage and utility pricing, but they will stay for a lot of other reasons, chief among them the virtualized networks and related services to monitor, manage, protect, and isolate the links between servers, storage, and the outside world.

By his own definition, software-defined networking means splitting the control plane of the network from the data plane, which allows the independent evolution of both parts of the network stack. By doing so, you can put the control plane on commodity servers (as many networking vendors are starting to do with OpenFlow and other protocols) and use other gear in the data plane. In Google’s case, explained Vahdat, Andromeda splits the network virtualization functions between soft switches above and the fabric switches and commodity packet processors that shift the bits around the network.
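As a purely conceptual illustration of that split, not a description of Andromeda, the Python toy below puts all route computation in a controller object and leaves the switches with nothing but a dumb match-action table; every name and address in it is made up.

```python
# Toy model of control/data plane separation. Not Andromeda; names are invented.
class DataPlaneSwitch:
    """Data plane: forwards by looking up state it did not compute itself."""
    def __init__(self, name):
        self.name = name
        self.flow_table = {}                 # destination prefix -> output port

    def forward(self, dst):
        return self.flow_table.get(dst, "punt-to-controller")

class Controller:
    """Control plane: holds the global view and programs every switch."""
    def __init__(self, switches):
        self.switches = {sw.name: sw for sw in switches}

    def program(self, routes):
        # routes: switch name -> {destination prefix -> port}, computed centrally
        for name, table in routes.items():
            self.switches[name].flow_table = dict(table)

switches = [DataPlaneSwitch("tor1"), DataPlaneSwitch("tor2")]
Controller(switches).program({"tor1": {"10.2.0.0/16": "uplink1"},
                              "tor2": {"10.1.0.0/16": "uplink2"}})
print(switches[0].forward("10.2.0.0/16"))    # -> uplink1
```

Because the controller is just software on commodity servers, it can evolve independently of the boxes that move packets, which is the point Vahdat is making.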

Andromeda also hooks into cluster routers that link networks to each other within a datacenter or into the B4 wide area network that connects Google’s regions together into what is, in effect, a massive virtual Layer 2 network with all of its 1 million plus servers attached. (Yes, Google did this several years ahead of the rest of the IT industry, as it often does with technologies. Those who scale break things first and therefore have to fix things first.)

The B4 WAN is based on homemade routers that have hundreds of ports running at 10 Gb/sec speeds (one of them is shown in the image at the top of this story); they are based on merchant silicon (Google does not say which ASICs it is using) and presumably an X86 processor as well and run Google’s own tweaked version of Linux that has been hardened for use in network gear. These G-Scale switches use the open source Quagga BGP stack for WAN connectivity, and have ISIS/IBGP for linking internally to the networks inside Google’s datacenters. They have support for the OpenFlow protocol as well. The G-Scale machines have hardware fault tolerance and have multiple terabits per second of switching bandwidth. The WAN has two backbones: the outward facing one that links into the Internet and content distribution networks and the internal one that is used for Google’s own workloads, which includes a slew of things aside from its search engine and Cloud Platform.

Google has done away with a slew of boxes in the middle that provide load balancing, access control, firewalls, network address translation, denial of service attack mitigation, and other services. All of this specialized hardware complicates the topology of the network, said Vahdat, and it also makes maintenance and monitoring of the network difficult. Importantly, storage is not something that hangs off a server in the Google network, but is rather a service that is exposed as storage right on the network itself, and systems at any Google region can access it over the LAN or WAN links. (Well, if they have permission, that is.)

Like many of the massive services that Google has created, the Andromeda network has centralized control. By the way, so did the Google File System and the MapReduce scheduler that gave rise to Hadoop when it was mimicked, so did the BigTable NoSQL data store that has spawned a number of quasi-clones, and even the B4 WAN and the Spanner distributed file system that have yet to be cloned.

“What we have seen is that a logically centralized, hierarchical control plane with a peer-to-peer data plane beats full decentralization,” explained Vahdat in his keynote. “All of these flew in the face of conventional wisdom,” he continued, referring to all of those projects above, and added that everyone was shocked back in 2002 that Google would, for instance, build a large-scale storage system like GFS with centralized control. “We are actually pretty confident in the design pattern at this point. We can build a fundamentally more efficient system by prudently leveraging centralization rather than trying to manage things in a peer-to-peer, decentralized manner.”

Having talked about the theory of network virtualization and SDN, Vahdat explained in a very concrete way why virtualizing the network along with compute and storage was key, particularly for Cloud Platform, which will have workloads that Google cannot control or predict so easily.

Not too far in the future, this is what the underpinnings of a cloud are going to look like:

A compute node with two sockets will have 32 cores running at around 2.5 GHz or so, and using the other Amdahl Law – you should have 1 Mb/sec of I/O bandwidth for every 1 MHz of computation for a balanced system – that puts you on the order of 100 Gb/sec coming out of that server node. The storage disaggregated from the server will be terabytes of flash, with 100,000s of I/O operations per second and 100 microseconds of access time. A cluster will have perhaps 1,000 virtual machines and to get balance between the systems and the storage, that means you will need a virtual network between them that delivers around 100 Tb/sec of bisection bandwidth and 10 microseconds of latency. Moreover, these bandwidth needs will change as the workloads on the systems change – they may be more or less intensive when it comes to CPU, memory, storage, or I/O capacities.
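Those numbers hang together if you run Vahdat’s rule of thumb through a quick back-of-the-envelope check, sketched here in Python; the assumption in the second step that each of the 1,000 virtual machines wants roughly the full per-node figure is ours, not something Vahdat spelled out.

```python
# Back-of-the-envelope check of the figures above, using the "other Amdahl Law"
# rule of thumb of 1 Mbit/sec of I/O for every 1 MHz of compute.
cores_per_node = 32
clock_mhz = 2500                                  # roughly 2.5 GHz
node_io_gbps = cores_per_node * clock_mhz / 1000  # 80 -> "on the order of 100 Gb/sec"
print(node_io_gbps, "Gb/sec out of one two-socket node")

vms_per_cluster = 1000
per_vm_gbps = 100                                 # our assumption: ~the per-node figure
print(vms_per_cluster * per_vm_gbps / 1000, "Tb/sec of bisection bandwidth")  # 100.0
```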

“We are going to need a fundamental transformation in virtual networking,” Vahdat explained. “We are going to need 10X more bandwidth, 10X lower latency, 10X faster provisioning, and 10X the flexibility in being able to program the infrastructure to support such a programming model.”

Now you know why Google has long since taken control of its infrastructure, from every piece of hardware on up to the most abstract layers of software. It is always facing such scalability issues, and it has to squeeze out all the performance it can to stay ahead of the competition.

So how does Andromeda perform? Pretty well, according to Google’s own benchmark tests. The blog doesn’t have very much performance data, but the presentation from the Open Network Summit has a bit more. Before Andromeda was rolled out, Google had an earlier rendition of SDN with network function virtualization, which Vahdat said was roughly equivalent to the state-of-the-art available commercially today from network vendors. After analyzing these numbers, you might be wishing Google would sell you Andromeda and a stack of its IT gear. Here is how network performance stacks up at Google:

In this chart, the network throughput between two virtual machines on Cloud Platform is shown for three scenarios. The blue bar is for the pre-Andromeda SDN/NFV setup, and it shows a relatively modest datapath throughput between virtual machines on distinct servers on one TCP stream. The Andromeda network stack is shown in red, and the gold bars show what happens when Andromeda is transmitting data between virtual machines on the same host. Vahdat says that Google will be trying to close the gap between the red and gold bars. If you run more TCP streams between the VMs, Andromeda actually does quite a bit better, as you can see. So it scales well.

Raw speed is interesting of course, but if attaining it eats up all of the CPU capacity in the box, then no work gets done by the server. So Google is also measuring how much CPU the Andromeda stack chews up. Take a look:

To express this, the Google benchmarks normalize against the pre-Andromeda network stack and show how many bytes per CPU cycle can be pushed. In this case, as you can see, Andromeda pushes roughly five times as many bytes per cycle as its predecessor. You can also see that adding TCP streams to the test eats into the CPU capacity, but that is to be expected. Shifting bits is not free.

The question now, as always, is what impact Google’s revelations will have on the industry at large. Much of the software that Google started with is based on open source code, but it has been tailored so tightly to Google’s homegrown hardware and software that putting out a set of APIs or even source code would probably not be all that helpful. (Unless you were competing with Google, of course.) The kernel data path extensions that Google has come up with as part of Andromeda have been contributed to the Linux community, so that is something.

No matter what, Google has shown very clearly what the practical benefits of SDN are and now large enterprises will be more comfortable wading in – perhaps starting first by firing up instances on Compute Engine and App Engine to run some of their applications. •

Lucera Opens Door On High Frequency Trading Cloud (Originally published October 2013 in EnterpriseTech)

If you think supercomputing is challenging, you should try high frequency trading.

High frequency trading and traditional supercomputing simulation have plenty in common, but there are big differences, too. They both require extreme systems, with HFT systems focusing on latency and speed while supercomputer clusters are designed for large scale and capacity; both have employed coprocessors to help boost the overall performance of underlying systems.

Supercomputers try to model something in the physical world – how a protein folds or how gasoline burns in an engine – while HFT systems are trying to model something that only exists in the electronic world of finance. And importantly, HFT systems are looking for patterns in the buying and selling of assets occurring at the nanosecond level that are, by and large, being generated by other HFT systems. When HFT systems change their behavior to try to make money, the behavior of the entire system starts changing and everyone has to go back to the drawing board and recreate their models.

Here’s how Jacob Loveless, the CEO of Lucera, a new cloud dedicated to high frequency trading, liquidity matching, and foreign exchange, explains the underlying frustration of this business. “HFT model development is like you discover gravity one day and codify the natural law, and the next day it stops working and you have to start all over again.”

The Lucera cloud is a spinout from New York-based financial services firm Cantor Fitzgerald, one of the early innovators in high frequency trading that actually left the field a few years back. Loveless is a former Department of Defense contractor who is an expert in data heuristics (and who cannot talk about the classified work he did for the US government), and he came back to Wall Street to start Cantor Lab, a research and development group that, among other things, built HFT systems for the financial firm and initially focused on Treasury bonds, not equities, back in the early 2000s.

“What we found is we needed to go down to this really low level of the data, that you couldn’t aggregate it,” explains Loveless. “You needed the data raw in order for any of the patterns in the data to actually be meaningful or dependable. So you could not, for example, look at bond movements from one day to the next, but you had to look at bond movements from one minute to the next minute. When you get to a small enough timescale, all of a sudden all of these patterns get to be somewhat dependable.”

Meaningful patterns emerged at the millisecond level in the financial data back in 2003 and 2004, says Loveless. “Making a system react in a couple of milliseconds was not that difficult. It was difficult enough that you couldn’t write garbage code, but it wasn’t impossible.”

But then everybody in the financial industry figured out how to do high frequency trading, and it became a systems and networking arms race. So if you wanted to trade in equities, for example, you had to move your systems into co-location facilities next to the exchanges, and it got so contentious that the New York Stock Exchange, which operates its data center in Mahwah, New Jersey, had to give all HFT customers wires that were exactly the same length as they hooked into their systems so they would have the same latency.

“By 2008 and 2009, this was crazy,” says Loveless. “Systems had gotten to the point of absurdity where these opportunities only existed in the microsecond range. We were building systems that could react to patterns – take in information and look at it against a hash table and do something – in under 50 microseconds. To give you some perspective, 50 microseconds is the access time for a solid state drive.”

Networking between exchanges similarly got crazy. About this same time, some traders figured out that the latency on microwave communications was lower than for signals going through fiber optic cables, and suddenly there were microwave links between New York and Chicago, and soon there were links connecting financial centers up and down the Eastern seaboard and across Western Europe. And these days, people are using millimeter band communications links, says Loveless, because microwave links are too slow.

The servers underneath high-frequency trading systems kept getting beefier, and the use of field programmable gate array (FPGA) coprocessors proliferated inside of systems, inside of switches (particularly those from Arista Networks), and inside of network adapter cards (from Solarflare). Loveless knows traders who run full-on trading systems on Arista 7124FX switches, using 24 inbound ports to get data from the exchanges, have a model coded in the FPGA, and do trading from the switch instead of from servers. But, oddly enough, using FPGAs has fallen out of favor because the models in high frequency trading are changing too fast for FPGA programmers to keep up.

“The reason is that you need to change the models too often,” says Loveless. “The development cycle working in Verilog or VHDL is too long. Even if you get the greatest Verilog programmer ever, you are still talking about turning models around in weeks, not days.”

As this HFT escalation was reaching a fever pitch, Cantor Fitzgerald took a step back three years ago, says Loveless, and projected that the money the firm would be making on a daily basis and what it would be spending on infrastructure was not going to work five years out. And so it decided to build a utility computing environment that is differentiated with software, such as its homegrown variant of the Solaris Unix environment, and services, like market data streams, and then sell raw infrastructure to high frequency traders who did not want to do all of this work themselves. Or could not.

And thus Lucera was born. By spinning off Lucera, Cantor Fitzgerald is emulating, a bit, online retailer Amazon and its Amazon Web Services subsidiary. There are plenty of companies that want to do high frequency trading, and Lucera has expertise in building HFT systems and networks. And moreover, there are applications that Cantor Fitzgerald does run on the Lucera cloud, such as its equity wholesale desk and foreign exchange trading operations. Just as AWS customers help subsidize the cost of Amazon’s IT operations, Lucera does the same for Cantor Fitzgerald.

“We decided to be absurd, but not absurd absurd and do so many things in hardware,” says Loveless with a laugh. “We are going to stay on an Intel-standard chip. We are going to have extreme systems, but it is not going to be as fun as it was. It is going to be really, really fast, but not stupid, stupid fast. There is still going to be somebody somewhere who is faster than us at some things, but we are going to be able to do things at a price point that they can’t match.”

The basic business model is to buy iron in much higher bulk than any high frequency trading firm typically does and to leverage that economic might to not only get aggressive pricing on systems and storage, but also to get other things it needs. For instance, the Lucera systems employ 10 Gb/sec Ethernet adapter cards from Chelsio Communications, and Lucera got access to the source code for the drivers for these adapters because a lot of what the company does to goose performance and reduce latency is to hack drivers for peripherals.

Most of the time HFT applications are written in C, but sometimes you need even more performance and you have to get even closer to the iron.

“Most of the code is in C, and you have to do nasty bits in assembler,” says Loveless. “It sucks, but that’s reality. Every high frequency trader on the planet writes stuff in assembler because any sufficiently advanced compiler is still not going to get the job done. The beauty of writing code for high frequency – and Donald Knuth would be horrified to hear this – is that there is no such thing as premature optimization. All optimization is necessary optimization. If you go through code and change it so it will drop five microseconds out of the runtime of that piece of code, you do that. It is totally worth it.”

The Lucera cloud has not given up on FPGA accelerators completely. The company has created what is called a ticker plant in the lingo, which is a box that consolidates the market data feeds from a dozen exchanges and publishes them in various formats for HFT applications. These ticker plants cost on the order of $250,000 a pop, and you need two of them for redundancy.

Under normal circumstances, an HFT company would have systems of its own to do this, and generally they would be equipped with FPGAs to accelerate this feed consolidation. (The applications use this data to find the best buy and sell prices for an equity across those exchanges.) The reason this part of the HFT stack can stay in FPGAs is that the exchanges do not change their data formats all that often – perhaps once or twice a year – and so you can code your feed consolidation in hardware. Coding market models in hardware such as in FPGAs is no longer practical, as mentioned above, because these are changing constantly.
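What that consolidation step does can be shown with a greatly simplified, software-only sketch in Python: track the latest quote from each feed and report the best bid and offer for a symbol across exchanges. Real ticker plants parse binary exchange protocols, often in FPGAs, and the exchange names and quote format here are invented.

```python
# Greatly simplified stand-in for a ticker plant's consolidation step.
# Exchange names, symbols, and the quote format are invented.
from dataclasses import dataclass

@dataclass
class Quote:
    exchange: str
    symbol: str
    bid: float    # best price someone will pay
    ask: float    # best price someone will sell at

class ConsolidatedBook:
    def __init__(self):
        self.latest = {}            # symbol -> {exchange -> most recent quote}

    def on_quote(self, q):
        self.latest.setdefault(q.symbol, {})[q.exchange] = q

    def best_bid_offer(self, symbol):
        quotes = self.latest[symbol].values()
        return (max(quotes, key=lambda q: q.bid),   # highest bid across exchanges
                min(quotes, key=lambda q: q.ask))   # lowest offer across exchanges

book = ConsolidatedBook()
book.on_quote(Quote("EXCH-A", "XYZ", bid=10.01, ask=10.03))
book.on_quote(Quote("EXCH-B", "XYZ", bid=10.02, ask=10.04))
bid, ask = book.best_bid_offer("XYZ")
print(bid.exchange, bid.bid, "/", ask.exchange, ask.ask)   # EXCH-B 10.02 / EXCH-A 10.03
```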

The applications that codify those models run on very fast Xeon machines in the Lucera cloud, as it turns out. These machines are customized versions of systems designed by Scalable Informatics, a server and storage array maker based in Plymouth, Michigan that caters to financial services, oil and gas, and media and entertainment companies that need screaming performance.


Lucera is using a variant of Scalable Informatics’ JackRabbit server line for the compute portion of the cloud. The servers running in the Lucera cloud are based on the latest “Ivy Bridge-EP” processors from Intel, and specifically, they have two of the Xeon E5-2687W v2 chips designed for workstations, which run at 3.4 GHz. (You can read more about the new Xeon E5-2600 v2 processors at this link.) There are 44 servers in a rack, and each server has sixteen cores for a total of 704 cores per rack. The Lucera cloud has facilities in Equinix data centers in New York and London up and running today with 22 racks of machines in each facility, and another 22 racks are being put into a Chicago facility. That will be a total of 46,464 cores. Lucera chose the workstation versions of the Ivy Bridge chips because HFT workloads do not have a lot of heavily multithreaded code, and it turns off all of the power saving features in the chip to push clock speeds up to 3.6 GHz or 3.7 GHz; sometimes, it can get a machine to behave dependably and predictably at 3.8 GHz. The machines also have high-end DDR3 memory that can be overclocked to 2.1 GHz instead of 1.67 GHz or 1.87 GHz. And it tunes up all of the caches in the machine, too.

Each Lucera server has a dozen flash-based solid state drives configured in a RAID 1+0 setup, which is two mirrors of five drives plus two hot spares. There are two disk controllers for redundancy. The machines also have four 10 Gb/sec Ethernet ports, with two of them coming from Chelsio for very low latency work and two being on the motherboard and not quite as zippy.

And the funny bit is that this hardware will all be tossed out in a year and a half or less.

“In high frequency trading, you have got to be on the edge,” Loveless explains. “You amortize your hardware costs over 18 months, and if you actually get 18 months out of something, well, that’s just awesome.”

The Lucera cloud is based on a custom variant of the open source SmartOS operating system, which was created by Joyent for its public cloud. Joyent took the open source variant of the Solaris operating system controlled by Oracle and added the KVM hypervisor to it. This KVM layer allows for Windows or Linux to be run on the cloud if necessary, but Loveless says most customers run in bare-metal mode atop SmartOS for performance reasons.

The Lucera SmartOS has its own orchestration engine to manage workloads on its cloud, and it uses Solaris zones to isolate workloads from each other on the cloud. Because HFT applications are so latency sensitive, slices of the machines are pinned to specific processors and the memory that hangs off those processors is pegged to those processors. Network interrupts for a network card are tied specifically to a socket as well. The underlying file system for the cloud is ZFS, also created by Sun and also open sourced before Oracle acquired Sun more than three years ago.
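Lucera does this pinning with SmartOS zones and Solaris processor binding, which have no direct Python equivalent, but the idea is easy to illustrate on Linux: restrict a process to the cores of a single socket so it never migrates off that NUMA node. The core numbering below is invented, and steering NIC interrupts to the same socket is a separate, OS-level step.

```python
# Rough Linux illustration of the pinning idea; Lucera's actual mechanism is
# SmartOS zones and processor sets, not this. Core numbers are invented.
import os

SOCKET0_CORES = {0, 1, 2, 3, 4, 5, 6, 7}    # pretend these are socket 0's cores

def pin_to_socket0():
    os.sched_setaffinity(0, SOCKET0_CORES)   # 0 means "the calling process"
    print("now restricted to cores:", sorted(os.sched_getaffinity(0)))

if __name__ == "__main__":
    pin_to_socket0()
```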

Networking is probably the most challenging part of running a high frequency trading cloud, and Lucera has worked with Scalable Informatics to create a custom router to link market data feeds to its clouds and has a homegrown software-defined networking stack to make use of it, too.

“There isn’t just one source of information and that makes it hard,” says Loveless. “Take foreign exchange, for example, where you have to be connected to hundreds of networks in order to make a decision. And so if you are talking about building a utility computing environment, you are going to have to be able to support at the edge of the utility environment hundreds and hundreds of private networks. This is not like Amazon Web Services where you have the Internet and maybe one or two private networks. Here, you have literally hundreds of pieces of fiber terminating at the edge and you need to manage that. So we wrote our own software-defined network that does that, and it runs on custom routers that are based on X86 processors.”

Not many enterprise customers build hot-rod servers and custom routers and tweak up their own variants of an open source operating system, of course. But in businesses where speed and low latency matter, this practice could become more common in the future. •

Regional Datacenter Bracing for 40GbE Demand (Originally published November 2013 in EnterpriseTech)

Datachambers, a datacenter operator that offers co-location and hosting services across North Carolina, has tapped Extreme Networks to be its Ethernet switch provider as part of a build out of its operations. The move displaces gear by Cisco Systems. Thanks to the partnership between Datachambers and Extreme, the datacenter operator is also getting its hands on two of the new Extreme Summit X770 40 Gb/sec Ethernet switches, announced today, to give them a test run.

Not that Datachambers is expecting to move to 40 Gb/sec switching as part of its current datacenter build out. EJ Schwartz, director of solution engineering, tells EnterpriseTech that the company can offer 1 Gb/sec or 10 Gb/sec to customers who co-locate their servers in its facilities and that demand is not yet there for 40 Gb/sec pipes. But Datachambers is getting ready. “Six months from now, someone could write a killer app that requires that,” says Schwartz.

Datachambers is a twelve-year-old datacenter operator that was founded by North State Telecommunications as it expanded from its telecommunications business out into datacenter services. North State is a hundred years older than that and provides television, voice, and data services in the central part of North Carolina.

Datachambers started out with a 15,000 square foot datacenter in Winston-Salem, and doubled that again three years ago when it was operating at 85 percent capacity, says Schwartz. At the same time as the Winston-Salem facility was running out of space, customers from that area wanted to have remote disaster recovery sites. So Datachambers built a 30,000 square foot datacenter in Raleigh (shown above) after retrofitting a 50,000 square foot building.

Demand for services has compelled Datachambers to build its first datacenter from the ground up rather than retrofit an existing building to house servers, storage, and switches, and it will be located in Charlotte, near the Charlotte Douglas Airport. The facility will weigh in at 50,000 square feet and is expected to open in the third quarter of next year. The design is being created now, but Schwartz says that it will be modular so it can be expanded easily. This has not been the case with the retrofitted buildings Datachambers has used in Winston-Salem and Raleigh.

Once Datachambers committed to building a new datacenter, the company figured it was time to look at multiple vendors for the gear inside of the datacenter. The company’s value-added reseller, Blue Door Networks, strongly suggested that Extreme be brought in to bid against Cisco, and Schwartz tells EnterpriseTech that this had not occurred to the company because “we thought that they were out of our price range.” Extreme competed hard to win the deal, and not only is Extreme gear going into the new Charlotte datacenter, but the core and edge switches in the Winston-Salem and Raleigh datacenters have also been replaced with Extreme gear.

Specifically, each facility has two BlackDiamond 8806 modular core switches, which have 10 Gb/sec and 40 Gb/sec line cards. Each switch has 1.95 Tb/sec of aggregate switching bandwidth and can handle 1.42 million packets per second of forwarding at Layer 2 and Layer 3 of the network stack. For the edge, Datachambers has an inner edge that its customers see and an outer edge that links to the data services from carriers Verizon, Level 3, DukeNet, Time Warner Cable, and AT&T. The outside edge is comprised of Summit X480 switches from Extreme, which have 48 Gigabit Ethernet ports and six 10 Gb/sec ports. These switches can be stacked and managed as a single unit with up to 384 ports. The inside edge that reaches out to the servers in each facility is made up of Summit X460 switches, which cram 52 1 Gb/sec Ethernet ports into a 1U enclosure and up to 416 ports in a stacked switch. The BlackDiamond 8806s sit between the inside edge and the outside edge, linking the two.

This week, concurrent with the launch of the Summit X770 switches, Datachambers is getting two of the devices from Extreme for testing purposes, which its network admins can play with. The Summit X770 is designed explicitly to allow companies to support 10 Gb/sec connectivity today but to switch to 40 Gb/sec in the future while leaving the switch in place, and that is why it is interesting for Datachambers and similar customers who want to bridge between the two bandwidths.

“The 10GE port is really starting to take off on servers,” says Todd Acree, director of product management at Extreme. Most of the tier-one server makers have a 10 Gb/sec LAN-on-motherboard option, either a direct port or one that snaps in via a mezzanine card, and 10 Gb/sec ports are already mainstream in converged systems as well as in the infrastructure underpinning public clouds. Perhaps more significantly, 40 Gb/sec is starting to get some ramp, too, with Mellanox Technologies shipping ConnectX-3 adapters. So the time is right to offer a 40 Gb/sec switch, according to Acree, but it has to be one that can also support 10 Gb/sec ports as a stopgap.

Rather than have two different switches using the same ASIC and offering different port types and counts, as many of its competitors do, Extreme has one switch with 40 Gb/sec ports and is encouraging customers who want 10 Gb/sec ports to use cable splitters, which impose no performance penalty in terms of port-to-port latency. This is, in fact, the way Extreme expects most customers to use the new X770, which can support up to 104 10 Gb/sec ports with splitters. That is the limit of the Broadcom “Trident-II” switch ASIC used in the switch, which grabs 24 ports for its own internal use. Pull the splitter cables out, and you have a 32-port 40 Gb/sec switch, and that is a large number of ports for a 1U enclosure.
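The 104-port figure follows from the port budget as the article describes it; a quick check of the arithmetic in Python:

```python
# Quick check of the Summit X770 port math as described above.
qsfp_ports_40g = 32                  # 40 Gb/sec ports on the 1U box
lanes_10g = qsfp_ports_40g * 4       # each 40G port splits into four 10G lanes = 128
reserved_by_asic = 24                # lanes the Trident-II keeps for internal use
print(lanes_10g - reserved_by_asic, "usable 10 Gb/sec ports with splitters")   # 104
```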

The Trident-II offers around 600 nanoseconds of latency on port hops, which is 30 percent lower than the X670 switch the X770 replaces. The ASIC has support for the VMware VXLAN and Microsoft NVGRE overlays for Layer 3 networks to convert them into giant virtual Layer 2 nets.

The base X770 switch is expected to sell for between $40,000 and $45,000. An additional set of software capabilities such as OpenFlow, TRILL, MPLS, and IEEE 1588 time stamping adds around 20 percent more to the cost of the switch. The X770 will start shipping in January.

Acree says that about 70 percent of Extreme’s business comes from selling into enterprise datacenters, with the remaining 30 percent coming from service providers like Datachambers as well as supercomputing centers in government and academia. •

Blue Cross Blue Shield Streamlines Networking, Virtualization (Originally published May 2014 in EnterpriseTech)

You want your healthcare insurance provider to run a lean and mean IT shop, and Blue Cross Blue Shield of Alabama is always looking at new technologies to make its operations more efficient. The latest ones being adopted by the healthcare company are new networking gear for Hewlett-Packard’s BladeSystem blade servers and a shift from VMware’s ESXi server virtualization hypervisor to Microsoft’s Hyper-V alternative.

The Blue Cross Blue Shield association provides healthcare coverage to over 100 million people in the United States, and there are 37 different organizations that administer their services, generally at the state level. Each BCBS associate runs independently of the others, but they get common services – for which they collectively pay hundreds of millions of dollars – from the association, such as governance and networking for cross-state coverage. The BCBS associates tend to have backend systems running on IBM mainframes, but beyond that, the associates tend to pick and choose their own platforms.

Blue Cross Blue Shield of Alabama was an early and enthusiastic adopter of blade servers and their integrated virtual networking, and to this day it still keeps on the leading edge of technologies developed by Hewlett-Packard. The organization received serial numbers 1 and 2 of the c-Class blade enclosures from HP, Russ Stringer, server engineer and virtual architect at BCBS of Alabama, tells EnterpriseTech. These days the organization has 24 blade enclosures packed full of blades that run various software that wraps around the mainframe systems that do claims processing.

Like other BCBS associates, the one in Alabama is chartered by the state to provide healthcare services to 2.1 million residents as well as another 900,000 people who live outside of the state. The organization is not designed to make a profit, but rather to use as much as possible of the funds it gets from premiums to provide healthcare services. Stringer says that IT is a big part of lowering healthcare costs in Alabama, and he is proud of the fact that the automation the organization has created allows more than 90 percent of claims to be processed accurately and reliably without any human intervention. The organization, which has in excess of $4 billion in revenues, employs about 4,000 people, and roughly 400 of them are developers who maintain the homegrown applications that make this possible. The vast majority of those applications are coded in Java, as is the case in many large enterprises.

Like many mainframe shops, BCBS of Alabama long ago opted to use IBM’s WebSphere Application Server as its Java middleware platform on the mainframe, but over the years the vast majority of its WebSphere instances have moved to a much larger complex of outboard X86 servers. This is one way to lower costs. So is having a very low turnover rate in the IT department, says Stringer, who has been there since 2003 and says he is still one of the newbies.

Another way to cut costs is to move to converged infrastructure and to take a “virtual first” attitude to middleware and applications, strategies that BCBS of Alabama implemented in 2003 because, when Stringer joined the organization, “there was zero U of space in the datacenter and I could not draw one more watt of power out of it or put one more BTU of heat into it.” Outside of the IBM mainframes, BCBS of Alabama was a Compaq server shop before HP acquired Compaq in 2001, and because of its power, cooling, and space constraints, it jumped to the front of the line with the BladeSystem c-Class. Those first two blade enclosures, by the way, are still used in application testing even though they are eight years old.

Today, BCBS of Alabama has a total of 384 blade servers running in its 24 BladeSystem enclosures, all of them running Windows Server. One third of the nodes in these enclosures get upgraded each year. At the point three years ago when the organization built a new datacenter in Birmingham (shown in the opening image at the top of this article), about 65 percent of the nodes were equipped with VMware’s ESXi hypervisor and its vSphere management tools, and the remaining ones were configured with Microsoft’s Hyper-V and System Center analogs.

The reason there was any Hyper-V at all in the stack was that WebSphere didn’t like ESXi. “Any time we tried to VMotion live migrate it, the WebSphere that we were running would just throw up and everything would die,” explains Stringer. “We learned a hard lesson and we decided to put WebSphere on Hyper-V and keep everything else on ESXi. But we were also, in 2012, looking at using VMware’s Site Recovery Manager, and for us, the licensing costs were going to be too expensive. We had to buy the licensing for Windows Server 2012 Datacenter Edition anyway, so we did some testing, and we told VMware you’re a great partner but we found somebody new.”

Instead of paying for ESXi, vSphere, and Site Recovery Manager, BCBS of Alabama is paying for Windows Server Datacenter Edition and System Center, which it was going to buy anyway, once again demonstrating the power (both technical and financial) of software bundling. “I can buy a lot of memory with that money,” says Stringer, referring to the money the organization saved.

Stringer says that Hyper-V and ESXi deliver about the same number of virtual machines per physical server, so that was not a reason to move. With the ProLiant Gen7 blade servers, BCBS of Alabama had nodes with two six-core Xeon E5-2650 v1 processors and 256 GB of main memory, and these nodes supported somewhere between 10 and 15 virtual machines. With the ProLiant Gen8 machines, the organization shifted up to eight-core Xeon E5-2670 v2 chips and put 384 GB of memory on the nodes, yielding somewhere between 30 and 50 VMs per node. In the fourth quarter of this year, when Intel is expected to get a “Haswell” Xeon E5 v3 into the field and HP is expected to get its ProLiant Gen9 nodes out, Stringer says he will do a refresh on a third of the nodes and probably put 512 GB of memory on each one, allowing him to push the VM count up even higher in the same physical footprint and, perhaps, even buy fewer servers if the workloads do not demand it.
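
For readers who want to run the same kind of capacity math, here is a minimal, hypothetical sketch of the memory-driven VM-density estimate implied by those node configurations. The per-VM memory figures and the hypervisor reserve are assumptions chosen only so the output lands near the ranges Stringer reports; they are not numbers from BCBS of Alabama or HP.

# Memory-limited VM-density estimate for the node configurations in the article.
# The per-VM memory figures are hypothetical planning assumptions; in practice
# CPU cores, I/O, and workload mix constrain density as much as memory does.

def vms_per_node(node_memory_gb, avg_vm_memory_gb, hypervisor_reserve_gb=16):
    """How many VMs fit on a node if memory is the only limiting resource."""
    return (node_memory_gb - hypervisor_reserve_gb) // avg_vm_memory_gb

scenarios = [
    ("Gen7", 256, 20),           # ~20 GB/VM assumption -> ~12 VMs (article: 10 to 15)
    ("Gen8", 384, 9),            # ~9 GB/VM assumption  -> ~40 VMs (article: 30 to 50)
    ("Gen9, planned", 512, 9),   # same assumption on 512 GB -> ~55 VMs
]
for gen, memory_gb, vm_gb in scenarios:
    print(f"ProLiant {gen}: {memory_gb} GB at ~{vm_gb} GB/VM -> ~{vms_per_node(memory_gb, vm_gb)} VMs per node")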

Don’t think for a minute that BCBS of Alabama doesn’t look at every component in its datacenter this way. It does, and it has some Unified Computing System blades in the datacenter, running its call center software, just to keep HP on its toes. It also uses Cisco’s Nexus switches at the top of its racks, linking the BladeSystem enclosures to the mainframes and to each other. Every year, Stringer takes a look at AMD alternatives to Intel processors as well.

Like many enterprises, BCBS of Alabama has been a Cisco networking shop for a very long time. The first two BladeSystem enclosures had Cisco MDS storage area network switches, and the next three had Cisco switches as well. Seven of the enclosures are the new “platinum” variants, which have enough internal networking to drive 40 Gb/sec links to nodes but probably cannot, in Stringer’s estimation, drive 100 Gb/sec links.

Since implementing the ProLiant Gen7 blades, the organization has put the Virtual Connect virtual switching into the blade enclosures. Specifically, the top-of-rack Nexus switches reach down into the enclosures using Fibre Channel over Ethernet (FCoE) to hook to external storage arrays. This means that BCBS of Alabama will be able to get rid of the MDS switches that are used to link out to storage.

“We are trying to simplify and get everything as clean as possible,” says Stringer. “I want as few different wires as possible.” One of the key tools is the Virtual Connect Enterprise Manager, which is used to set up the networking for both physical nodes and virtual machines across the multiple blade enclosures, all from the Holy Grail of a single pane of glass.

At the moment, BCBS of Alabama is beta testing the new FlexFabric 20/40 F8 module, which is a networking device that plugs into the Virtual Connect hardware and that HP just announced in a blog post this week. The FlexFabric 20/40 F8 modules are installed in redundant pairs in the BladeSystem enclosures and provide a mix of downlinks to server nodes that are adjustable. The ports can be set up as eight Ethernet ports, six Ethernet and two Fibre Channel ports, or six Ethernet and two iSCSI ports. The module has twelve uplinks – that’s eight Flexports and four QSFP+ ports, and with splitter cables you can double up the port count. This module has 1.2 Tb/sec of bridging fabric capacity and allows up to 255 virtual machines on the same physical node to access different storage arrays over the Ethernet fabric. The FlexFabric modules can be stacked and run as a single virtual switch across up to four BladeSystem enclosures, allowing any server in those enclosures to access any uplink in the FlexFabric stack.

In conjunction with the new FlexFabric module, HP has launched two new adapters for the server nodes. These include the FlexFabric 630FLB, which has two ports running at 20 Gb/sec and which can be subdivided into four 10 Gb/sec ports on the node. At the moment, this is only available for the ProLiant BL460c, BL465c, and BL660c blades in the Gen8 family. The FlexFabric 630M is a mezzanine adapter that also provides two ports running at 20 Gb/sec and can be subdivided into four ports. There is enough bandwidth to stream 10 Gb/sec Ethernet and 8 Gb/sec Fibre Channel over a single port, and HP says the new FlexFabric devices have 73 percent lower latency than prior Virtual Connect devices, at around 1 microsecond for an Ethernet port and 1.8 microseconds for a combined Ethernet/Fibre Channel port across the FlexFabric 20/40 F8 module. •

OpenDaylight Lifts the Veil on ‘Hydrogen’ SDN Software Stack (Originally published September 2013 in EnterpriseTech)

The OpenDaylight consortium is trying to do for software-defined networking what Linux did for operating systems. And that is to provide an open source set of tools for virtualizing networks that is created under a collaborative, and yet organized, development effort with as many industry luminaries and IT vendors behind it as possible. This week, OpenDaylight gave the first glimpse of the first release of its SDN stack, code-named “Hydrogen” and comprised of bits of software donated by various members of the consortium.

Here’s the problem that SDN is trying to solve and why so many people are making noise about it. Switches and routers are like the mainframes of days gone by, relying on command-line wizards with deep knowledge of esoteric software to configure the settings that allow devices to talk to each other in a secure fashion. Configuring these devices is labor-intensive, requires specialized knowledge, and cannot happen fast enough to deal with rapidly changing network traffic. So you often end up overprovisioning your network to deal with peaks.

So SDN wants to take Layers 2 through 7 in the network stack – from switching all the way through routing and on up to several application layers – and virtualize them. By doing this, these functions become programmable and automatable, just like a virtual machine running on a hypervisor on a physical server. The SDN stack has an out-of-band controller that aggregates the control planes of switches and routers (in both the physical and virtual varieties) and can reconfigure them on the fly.
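
To make that control plane and data plane split concrete, here is a minimal toy sketch in Python; it is not modeled on OpenFlow or on any real controller, it just shows a central brain computing forwarding state and pushing it into switches that only do table lookups.

# Toy illustration of the SDN split: the controller holds the global view and
# computes forwarding rules; each switch just matches traffic against the flow
# table that was pushed down to it. Purely illustrative, not a real protocol.

class Switch:
    def __init__(self, name):
        self.name = name
        self.flow_table = {}   # destination prefix -> output port

    def install_flow(self, dst_prefix, out_port):
        self.flow_table[dst_prefix] = out_port   # programmed by the controller

    def forward(self, dst_ip):
        # Data plane: a naive prefix match, with no local routing logic at all.
        for prefix, port in self.flow_table.items():
            if dst_ip.startswith(prefix):
                return port
        return "drop"

class Controller:
    """Out-of-band controller that can reprogram every device when conditions change."""
    def __init__(self, switches):
        self.switches = switches

    def push_policy(self, dst_prefix, port_map):
        for sw in self.switches:
            sw.install_flow(dst_prefix, port_map[sw.name])

leaf1, leaf2 = Switch("leaf1"), Switch("leaf2")
controller = Controller([leaf1, leaf2])
controller.push_policy("10.1.", {"leaf1": "uplink-a", "leaf2": "uplink-b"})
print(leaf1.forward("10.1.0.5"), leaf2.forward("10.1.0.5"))   # uplink-a uplink-b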

In many cases, the OpenFlow protocol developed by researchers at Stanford University is at the heart of the SDN stack, but Cisco Systems and Hewlett-Packard, two of the three biggest suppliers of switches and routers in the world, are taking their existing network controllers and opening them up with APIs to create SDN controllers. Juniper Networks, the third, bought Contrail Systems to get its own SDN stack. Contrail, founded in early 2012, was a startup that was working on its own SDN controller – one based not on OpenFlow but on a number of existing network protocols – and that is why Juniper was willing to spend $176 million to acquire the company on the day it was set to uncloak from stealth mode last December. As we report elsewhere in EnterpriseTech, Juniper has just released a commercial-grade version of its Contrail Controller as well as an open source version that is separate from the OpenDaylight project, called OpenContrail.

Ironically, Cisco, Juniper, and HP are members of the OpenDaylight project, which is running out of the Linux Foundation just like the Linux operating system kernel. Network equipment makers Brocade Communications, IBM, NEC, Arista Networks, and Ericsson have joined the effort. VMware, Citrix Systems, Red Hat, and Microsoft, which all have stakes in extending server virtualization with network virtualization, are also part of the project, and SDN startups PlumGrid and Nuage Networks joined early. Intel and Huawei Technologies, which have big stakes in networking hardware, also joined up after the project was founded back in April. Big Switch Networks, which had initially joined, was irritated that Cisco’s controller code was picked over its own and left the group in June.

Back in April, when the project was launched, OpenDaylight said that its SDN framework and the plug-ins for it would be open source and developed under the Eclipse Public License v1.0. To make the controller absolutely portable, it would have hooks into specialized hardware and would be written in Java. Python was chosen for graphical elements of the stack.

Importantly, the OpenDaylight framework includes a Service Abstraction Layer, which is akin to a hypervisor on a server and allows other interfaces and protocols to be plugged into the OpenDaylight controller. Above the controller is a set of REST APIs that in turn reach up into network orchestration and abstraction applications.
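
As a rough illustration of what a set of REST APIs above the controller looks like to a network application, here is a minimal Python sketch that queries a controller’s northbound interface. The controller address, URL path, and credentials are placeholders invented for this example; real endpoints differ between controllers and between OpenDaylight releases, so treat this as a pattern rather than documented API calls.

# Minimal northbound REST query against an SDN controller. The address, path,
# and credentials are placeholders; consult the controller's own documentation
# for the real endpoints, which vary by product and release.

import json
import urllib.request

CONTROLLER = "http://sdn-controller.example.com:8080"   # hypothetical address
ENDPOINT = "/controller/nb/v2/topology/default"         # hypothetical path

def get_topology():
    request = urllib.request.Request(CONTROLLER + ENDPOINT)
    # Most controllers front their REST APIs with HTTP basic auth or a token.
    request.add_header("Authorization", "Basic YWRtaW46YWRtaW4=")   # "admin:admin", placeholder
    request.add_header("Accept", "application/json")
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read().decode("utf-8"))

if __name__ == "__main__":
    print(json.dumps(get_topology(), indent=2))

An orchestration or abstraction application would sit on top of calls like this one, turning the topology and flow data it gets back into higher-level decisions.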

OpenDaylight has named the first release of its SDN stack “Hydrogen,” after the first atom the universe created. The software will support physical switches through a variety of protocols as well as Open vSwitch virtual switches, which were created by Nicira a few years back but are now controlled by VMware, which paid $1.26 billion for Nicira back in July 2012 to get its virtual switch as well as its NVP Controller. Other virtual switches can plug into the OpenDaylight controller as well.

The SDN stack that was previewed last week also includes two network services that run above the controller, and many more will no doubt follow. The first is called the Affinity Metadata Service, which helps with network policy management. The second is called Defense4All, and it is a network application that provides protection against distributed denial of service attacks on networks.

The plan originally was to get the OpenDaylight code out in the third quarter, but now the target date is December. It will be interesting to watch how the various network giants all dance around this project, trying to commit code and influence it. •

Cisco Counters OpenFlow SDN with OpFlex, Updates Nexus Switches (Originally published April 2014 in EnterpriseTech)

Cisco Systems has taken a different approach to software-defined networking, by baking some of its features into its Nexus switching hardware to combat OpenFlow and more generic switches with what the company contends is better engineering. And, as you might expect, with that new switching hardware comes a new protocol, called OpFlex, which Cisco divulged at the Interop conference this week.

The company also previewed new modular and top-of-rack switches in the Nexus line to push 40 Gb/sec networking deeper into the datacenter.

In a blog post, Shashi Kiran, senior director of datacenter, cloud, and open networking at Cisco, explained that the OpFlex protocol for SDN was meant to parallel the distributed control that Cisco has put into its Application Centric Infrastructure (ACI) architecture for switches. With ACI, the idea is to have the Application Policy Infrastructure Controller (APIC) embedded in the devices and then have a policy manager talk to physical and virtual switches, routers, and network applications running up in Layers 4 through 7 of the network stack to determine their bandwidth needs and get them from a pool of bandwidth available on the network.

Instead of having data forwarding controlled by a central controller, as is done with OpenFlow setups and, indeed, with Google’s “Andromeda” SDN stack, which it uses internally and which it has just exposed in two regions of its Cloud Platform public cloud, the APIC approach puts intelligence in all of the network devices and makes them aware of the application-level policies that are managed by APIC. The policies are centralized, but the control plane, in essence, is not.

Cisco says that with other SDN approaches, whether they are based on OpenFlow or on proprietary methods, the network devices are dumbed down and all control is done centrally. That means that the network is bottlenecked by the capacity of that controller to update forwarding tables to shift traffic on all of the devices on the network.

While Cisco may have 65 percent market share or so in datacenter switching, the company knows that it needs to interoperate with other switching and routing gear and, perhaps more importantly, with a slew of network software providers, cloud controllers, hypervisors, virtual switches, and so on that are spread around the datacenter. That is what the OpFlex protocol is all about. Technically speaking, this is a southbound protocol, which will link the APIC controller built into Cisco’s Nexus switches to physical and virtual switches, routers, and network services that are not made by Cisco, and allow it to provide policy control for them.
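
The distinction Cisco is drawing between declarative policy and imperative flow programming can be sketched in a few lines of Python. The data structures below are made-up stand-ins for illustration, not OpFlex or OpenFlow message formats.

# Illustrative contrast between the two models, using invented data structures.

# Imperative, controller-centric model: the controller computes an explicit
# forwarding entry and pushes it to a device, which simply obeys.
flow_entry = {
    "switch": "leaf-7",
    "match": {"dst_ip": "10.20.0.0/16", "vlan": 120},
    "action": {"output_port": 48},
}

# Declarative, policy-centric model (the OpFlex idea): the controller publishes
# intent about applications, and each device resolves that intent into its own
# forwarding state using local intelligence.
policy = {
    "app_tier": "web-frontend",
    "talks_to": "app-middleware",
    "contract": {"ports": [443], "qos": "gold", "security": "permit"},
}

def render_locally(policy, device):
    """Stand-in for the vendor-specific logic an agent on the device would run
    to turn abstract intent into concrete configuration."""
    port = policy["contract"]["ports"][0]
    return {"device": device, "acl": f"permit tcp any any eq {port}"}

print(render_locally(policy, "leaf-7"))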

To that end, Cisco is submitting the OpFlex protocol to the IETF standardization process, and it is working on an open source OpFlex agent that vendors can embed into their devices and software so they can take their marching orders from an APIC-enabled box. Microsoft, IBM, Citrix Systems, and SunGard Availability Services have all been working with Cisco on the OpFlex standard. Microsoft, Citrix, Red Hat, and Canonical plan to add support for OpFlex into their virtual switches (which get tucked up inside of hypervisors), and IBM, F5 Networks, Embrane, and Avi Networks are all currently planning to embed this OpFlex agent into their various products, too. Cisco is also working with the OpenDaylight open source SDN project to get OpFlex embedded in the future “Helium” release of that stack.

The OpFlex protocol is currently supported on the Nexus 1000V virtual switch, the Nexus 7000 and 9000 switches, and the ASR 9000 routers from Cisco.

In related Nexus switching news, Cisco has rolled out two new modular switch enclosures and one new line card in the Nexus 9000 line. The Nexus 9000, you will recall, was the first switch to have support for the APIC built in when it was announced last November. At the time, Cisco was shipping an eight-slot chassis, the Nexus 9508.

The top-end Nexus 9516 is a 21U rack chassis that has room for sixteen line cards. The Nexus 9516 uses a mix of Cisco’s homegrown ACI Leaf Engine (ALE) and ACI Spine Engine (ASE) custom ASICs as well as Trident-II ASICs from Broadcom. Each line card has two or four of these ASICs, and Cisco is being cagey about the mix; each card has 36 ports running at 40 Gb/sec. Cisco wants to make it easier to go to 40 Gb/sec at the aggregation layer without having to use oversubscription, and that is why this line card only costs $45,000. That is less than 1.5X the cost of a 10 Gb/sec line card. (That may say more about Cisco’s 10 Gb/sec pricing than it does about its 40 Gb/sec pricing, of course.) Loaded up, the Nexus 9516 has 60 Tb/sec of aggregate switching bandwidth and a total of 576 ports running at 40 Gb/sec. With cable splitters, you can convert that to 2,304 ports running at 10 Gb/sec speeds. The switch consumes 11 watts per 40 Gb/sec port, which is a little bit on the warm side. The Nexus 9516 will ship in the middle of the year; pricing for the chassis has not been set yet.

At the low end, Cisco has delivered the expected Nexus 9504 modular switch, which uses the existing 10 Gb/sec line card as well as the new 40 Gb/sec one mentioned above. As the name suggests, this 7U enclosure has room for four line cards, which gives either 144 ports running at 40 Gb/sec or 576 ports running at 10 Gb/sec. In this device, a 40 Gb/sec port will average 14 watts of power. The line cards use the same mix of Cisco and Broadcom ASICs. The 9504 will be available this month.

But wait. There’s one more thing. Around the middle of this year, Cisco will ship the Nexus 3164Q, a new switch that will cram 64 ports into a 2U fixed form factor. Those ports will run at 40 Gb/sec, and with cable splitters you can have 256 ports running at 10 Gb/sec speeds.

The Nexus 3164Q is based on Broadcom’s Trident-II ASIC and delivers 5.12 Tb/sec of aggregate switching bandwidth; it also runs the streamlined variant of the NX-OS network operating system that debuted last fall with the Nexus 9000, and it supports Linux containers for network function virtualization (NFV), which is just a funky way of saying running network services that are normally on an external appliance inside of a virtual machine or container on the switch. The switch has a 48 MB buffer and customers will be able to start ordering it soon. Cisco has not yet set prices for it. The Nexus 3164Q does not have support for ACI and cannot be used with an APIC in the network. Presumably, however, Cisco will have support for its own OpFlex agent in the device at some point. •

Dell Bares Switch Metal to Other Network Operating Systems (Originally published January 2014 in EnterpriseTech)

When you look at a switch, you are looking at the vestiges of a proprietary past that many of the largest companies in the world want to do away with. Intel and Broadcom supply most of the Ethernet switch ASICs used for top-of-rack switches today, and hyperscale datacenter operators and large enterprises have been pushing for switch makers to let their hardware loose and allow it to run different – and maybe even open source – network operating systems.

Dell is the first of the major switch vendors to move in that direction, and it takes a certain amount of bravery to do so. IBM ceded the PC operating system to Microsoft on the original PC, and look at how that turned out for Big Blue. IBM is out of the PC and its related server business, and Microsoft is minting billions with its Windows Server operating system.

Dell is obviously hoping that by giving customers a choice of network operating systems it can goose its hardware sales against rivals Cisco Systems, Juniper Networks, Hewlett-Packard, and others who will not necessarily be keen on opening up their network gear to rival software. Until they absolutely have to, of course.

Open networking, as its champion Facebook has made clear, is about more than having multiple options for operating systems on a switch or a router. It is also about being able to make changes to these software environments and to work collaboratively with vendors and other users to implement changes more rapidly.

“As these silos start to break down, Dell is in a position to provide a single point of contact without necessarily locking customers in,” Arpit Joshipura, vice president of product management and marketing for Dell’s networking division, explains to EnterpriseTech. “This will allow for very rapid innovation based on standard APIs. You don’t have to wait for a black-box vendor to come up with features. If they code to the Linux APIs, they can add things like support for Chef and Puppet. We find that the early adopters in this space – the hyperscale cloud operators, the big banks – with skilled IT staffs are ready to do this and carry forward this model.”

Force 10, the networking company that Dell bought for an undisclosed sum back in July 2011, was one of the first switch makers to go to merchant silicon rather than using homegrown chips. (That was around five years ago.) So it is not a surprise that Dell’s networking division is the first of the tier one switch players to embrace open networking. As part of the Open Compute Summit extravaganza in San Jose, Dell announced that it was partnering with Cumulus Networks to put the company’s Open Network Install Environment (ONIE) on its switches and also would allow Cumulus Linux, an alternative to Dell’s own Force 10 OS, to be installed on two of its most popular switches.

These include the S4810, which has 48 10 Gb/sec ports and four 40 Gb/sec uplinks, and the S6000, which has 32 ports running at 40 Gb/sec. The ability to use the ONIE loader and other network operating systems will follow, says Joshipura, adding that he fully expects some of them to be homegrown. (Incidentally, the Force 10 operating system is over a decade old and is not based on Linux, but rather on a kernel and stack that was inspired by the IOS network operating system from Cisco Systems. It was initially designed for supporting large-scale web applications and now has all of the REST APIs that network admins and application developers want.)

“This is the way the megascale datacenters have been behaving for quite some time now,” explains JR Rivers, co-founder and CEO at Cumulus Networks. “Two of the major ones are at the point where they swap out software on top of hardware at will, and the other two have big programs in place to get to that same point. It is almost a foregone conclusion there. Big enterprises are getting to the point where they want the same things to occur, but they are looking for a viable supply chain.”

Cumulus Networks was founded in December 2010. Nolan Leake is one of the founders, and he has worked on early server virtualization projects at VMware, on server clustering at 3Leaf Systems, and on research at Nuova Systems, which was acquired by Cisco Systems in May 2008 as a building block for its “California” Unified Computing System converged systems. Rivers, the company’s other founder, worked on the UCS platform at Cisco as well, and he also designed network interface cards at 3Com back in the day and worked on Google’s custom networks for its massive datacenters between those jobs. Diane Greene, Ed Bugnion, and Mendel Rosenblum, the founders of server virtualization juggernaut VMware, were early investors in Cumulus Networks, and venture capitalists Andreessen Horowitz, Battery Ventures, and Sequoia Capital have kicked in money as well. The company raised $15 million in its first round in August 2012 and $36 million in its second round just this month.

Cumulus came out of stealth mode in June 2013, but actually had customers using the Cumulus Linux 1.0 network operating system back in October 2012. Release 1.5 of its net OS came out in July 2013, and last fall, as part of the Open Networking initiative at the Open Compute Project, Cumulus open sourced the ONIE installer so switch makers could have a consistent way to deploy different operating systems on switches. By November 2013, Cumulus had over 10,000 switches running Cumulus Linux in production.

Release 2.0, which just came out this month, has support for Broadcom’s Trident-II network ASIC and its on-chip VXLAN virtual overlay for Layer 3 networks. (VXLAN is, like NVGRE, a means of making collections of Layer 2 networks linked by Layer 3 switches or routers look like one big virtual network. By doing this, you can live migrate virtual machines across that virtual network.) Cumulus Linux supports earlier Trident and Trident+ chipsets from Broadcom as well as a number of other Broadcom ASICs such as Helix, Triumph, and Apollo.
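
For a sense of what a VXLAN overlay looks like at the Linux level, where a network operating system like Cumulus Linux ultimately exposes it, here is a minimal sketch that drives standard iproute2 commands from Python. The interface names, VXLAN network identifier, multicast group, and address are placeholders, the commands need root privileges, and Cumulus Linux wraps the same kernel machinery in its own configuration tooling.

# Minimal sketch: create a VXLAN tunnel endpoint with iproute2 (run as root).
# vxlan100, eth0, the VNI, the multicast group, and the address are placeholders.

import subprocess

def run(cmd):
    print("+ " + cmd)
    subprocess.run(cmd.split(), check=True)

# VXLAN network identifier (VNI) 100, flood learning over multicast, and the
# IANA-assigned VXLAN UDP port 4789.
run("ip link add vxlan100 type vxlan id 100 dev eth0 group 239.1.1.1 dstport 4789")
run("ip addr add 10.100.0.1/24 dev vxlan100")   # address inside the overlay
run("ip link set vxlan100 up")

# VMs or containers attached to vxlan100 (typically through a bridge) now share
# one logical Layer 2 segment even when their hosts sit on different Layer 3 subnets.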

Intel’s Fulcrum network ASICs are not yet supported by Cumulus Linux, and with Intel being a key supplier of merchant network chips, this is important. It would be fun to see Cumulus Linux hacked onto Cisco or Juniper or HP iron as well, much as Linux was ported to RISC and mainframe machines in the late 1990s, often without the blessing of the system vendors.

“Intel is driving hard in the market – clearly the type of partner we need to work with,” says Rivers. But he would not confirm that any such partnership is imminent.

Cumulus is, he said, working with a bunch of different ASIC partners to try to expand support for its network operating system. Switches made by Quanta, Penguin Computing, Accton, and Agema have been certified to run Cumulus Linux. •

Extreme Networks Takes the Open Road to SDN (Originally published January 2014 in EnterpriseTech)

Having announced that it was joining the OpenDaylight Project a month ago, Extreme Networks has come out with its own software-defined networking strategy that will heavily leverage that open source project and that will also make use of the many assets developed by Extreme and Enterasys, which it acquired last September for $180 million.

That acquisition essentially doubled the size of Extreme Networks and made it one of several contenders trying to take bites out of industry juggernaut Cisco Systems and, to a lesser extent, Hewlett-Packard and Juniper Networks, who together have been the dominant suppliers of datacenter switching gear. Given its substantial presence in both datacenter and campus switching, Extreme Networks is putting together an SDN stack that takes both into account.

SDN is still in its infancy at this point, but it seems to be inevitable given the desire of all datacenters to break the control plane from the data plane in their networks and to make traffic flow more programmable and the network more resilient because of this malleability. Extreme Networks is relying on the OpenDaylight SDN framework as a means of making networks programmable and as a platform on which to create network applications that will be portable across multiple stacks. This last bit is as important as the first bit.

Like other networking hardware and software suppliers, Extreme Networks could have just opened up its APIs and created a software development kit that allows programmers to link into network functions directly. This is, for example, what Cisco is doing with its Open Network Environment Platform Kit (onePK). Cisco has tied its Application Centric Infrastructure (ACI) controller very tightly to its switching hardware as well, and lots of software and appliance providers higher up in Layers 4 through 7 in the network stack are lining up to hook into ACI. Not everyone will go with the ACI approach, of course, and so Cisco is hedging its bets and has created the Extensible Network Controller (XNC) that is based on the OpenDaylight controller, with some extra features added in (including support for onePK).

Extreme Networks has been shipping switches that support the OpenFlow protocol at the heart of the OpenDaylight controller for the past two years, explains Markus Nispel, vice president of solutions architecture and innovation at the networking company. The company has shipped switches with an aggregate of over 10 million network ports that can be part of its SDN stack. The XOS network operating system (a hardened variant of Linux designed to control switches) has OpenFlow embedded in it as a southbound API, and OneFabric Connect extends XOS with a northbound API that already supports network applications from over 40 software partners.

The issue that Extreme Networks is trying to solve is one that has hounded the IT industry time and again: application portability. And an analogy from the server market that we discussed with Nispel is appropriate. Before the various Unix operating systems came along, systems had their own runtimes and APIs, and applications were definitely not portable. An effort was made to equip Unixes with compatible APIs and file systems so that applications could more easily be compiled on various platforms, but there was never true application portability. Then along came Linux, an open source operating system that was ported to all of the major server platforms, and application porting became just a little bit easier. OpenDaylight, as we have said many times before, wants to be like Linux for the network stack, allowing the software to control many different kinds of networking gear and allowing network applications to run in conjunction with any ODL-compatible controller.

“We have seen an evolution of different SDN architectures,” explains Nispel. “The initial controllers in the market were centered around the OpenFlow protocol from the Open Networking Foundation, and while the protocol itself is a key component in our architecture, it is not the only component. A few years ago when the ONF came up with OpenFlow, they really focused on the protocol specifically, but they really left it wide open on how network applications can talk to the OpenFlow controller. And this has led to a market that is pretty fractionalized, and applications running atop OpenFlow controllers today are not really portable across those controllers.”

The SDN strategy at Extreme Networks is to participate in the OpenDaylight Project and bring its wired and wireless networking experience to the project to help shape the ODL stack. The company is also unifying its Xkit and OneFabric Connect API development efforts, which come from the two halves of the company, into one community called Connect Central, with the ultimate goal of providing an application marketplace for applications to run along with the ODL stack. Extreme Networks is also going to stretch its SDN outside of datacenter switching to include branch office and campus switching, and wireless as well as wired LANs. Because Extreme Networks is supporting OpenFlow as the southbound API into the switches, its hardened ODL stack will be able to work both with its own gear and with third-party gear that also supports the OpenFlow protocol.

Generally speaking, any Extreme Networks switch sold since 2009 should be able to hook into the ODL stack. Nispel says that a typical datacenter customer upgrade cycle for switches has been on the order of five to seven years, and in some cases customers even hold out as long as eight years. But the advent of SDN and the need for more bandwidth by all kinds of applications might be accelerating the switch replacement cycle. OpenDaylight still has some maturing to do, as did the Linux operating system and as still does the OpenStack cloud controller, but with so many heavy hitters in the networking space lining up behind OpenDaylight, momentum is building for this SDN stack.

And customers are responding as well. “Networking teams at customers in some cases have not been as innovative as they should be, but obviously on the server side virtualization has changed the game in terms of the speed at which new services get deployed. The network teams had a hard time following, and they are coming to the realization that they have to do something or the server administration teams are going to eat their lunch.”

Every case will be different and pricing for SDN software is evolving, but Nispel provides some guidance that suggests customers should expect to pay a premium of between 10 and 25 percent over their network hardware costs for SDN functionality. To come up with this figure, Nispel cites market statistics that suggest the market for SDN-capable switches will reach between $2.7 billion and $3 billion by 2017, and this represents about 10 percent of the aggregate revenue in the switching market. Of that SDN-capable slice, somewhere between $400 million and $600 million in revenues will be for the SDN controllers, network applications that ride atop the controllers, and services. If a networking vendor is trying to charge more than that for SDN functionality, you are probably paying too much. •
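
A quick bit of arithmetic shows how that guidance hangs together; the inputs below come straight from the figures Nispel cites in the paragraph above, and nothing here is new data.

# Sanity-check the SDN premium guidance against the cited market figures.
sdn_capable_switch_revenue_2017 = (2.7e9, 3.0e9)   # projected market, low and high
sdn_software_and_services = (0.4e9, 0.6e9)         # controllers, apps, and services

low = sdn_software_and_services[0] / sdn_capable_switch_revenue_2017[1]
high = sdn_software_and_services[1] / sdn_capable_switch_revenue_2017[0]

print(f"Implied SDN software premium over hardware: {low:.0%} to {high:.0%}")
# Prints roughly 13% to 22%, which sits inside the 10 to 25 percent range
# Nispel suggests customers should expect to pay.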

Thank you for reading the Software-Defined Networking in the Datacenter compendium of features. The following article is a paper from the underwriter of this compendium, Extreme Networks, titled Taming the Networking Tiger with Open SDN, which addresses the question of how best to comb through the often-tangled network web. We invite you to read it and feel free to contact Extreme Networks for any additional insight into how their technology can help solve persistent technical computing problems related to software-defined networking in the datacenter.

Taming the Networking Tiger with Open SDN

Networks are complex beasts. Efforts to tame this snarl of equipment, protocols, software, and services have been underway for more than four decades, with varying results.

Traditional network architectures, which got us through the client/server era, have become too complex and rigid to meet the needs of today’s IT environment. Virtualization, virtual machine (VM) migration, dynamic and unpredictable traffic patterns, and the demands of Big Data are just a few of the forces driving the need for an updated networking paradigm.

One of the most promising solutions is Software Defined Networking (SDN), which evolved from research work done at UC Berkeley and Stanford University about six years ago. In the interim, SDN has become an established architecture for computer networking that allows network administrators to manage networks through abstractions of lower level functionality. It provides a dynamic, manageable, and flexible platform that can handle today’s complex, compute-intensive, high bandwidth environments.

Although SDN addresses many of the problems faced by today’s networks, the challenge has been to implement the technology in a cost-effective way. What enterprises have been looking for is a realistic approach to SDN that can solve their networking problems without breaking the bank or requiring major additions to their IT organizations and the purchase of all-new networking equipment. Another priority is to achieve better control of the myriad network functions by automating as many as possible.

Despite flat or reduced budgets, IT departments are drawn to SDN because it promises:

• Enhanced management – Allows organizations to evolve their networks to keep pace with today’s data-intensive, high bandwidth distributed compute and storage models

• Agility – Because SDN-based networks can automatically modify themselves to meet changing demands, they are far more flexible than networks that are controlled manually

• Reduced costs – Automating network functions also helps IT departments reduce OPEX and improve efficiency

There are a number of other reasons why SDN is taking center stage among enterprise networking solutions. It deals directly with shortcomings in today’s legacy networks that no longer meet the needs of the enterprise. For example:

• Upgrading network capabilities using device-level management tools and manual processes is costly, time consuming, and complicated.

• Networks are overly complex, difficult to manage, and unable to scale, particularly in today’s virtualized data centers with their dynamic and unpredictable traffic patterns.

• The BYOD movement is now an entrenched part of the corporate network, mandating that IT accommodate personal smartphones, tablets, and notebooks while maintaining control over the network and meeting security requirements.

• Enterprises are faced with the challenge of attempting to manage a complex network made up of disparate elements without resorting to ripping out their existing infrastructure and replacing it with a vendor-specific, locked-in solution.

• Incorporating cloud services into the IT infrastructure is on the rise, but it brings with it a host of requirements regarding security, compliance, and the elastic scaling of compute, storage, and networking resources.

• Meeting today’s Big Data requirements means massively parallel processing of datasets running into the petabytes on hundreds, thousands, or even millions of servers, all interconnected.

Coping with this level of complexity has made legacy solutions not only obsolete, but also a major liability in this era of distributed, hyperscale computing environments. It’s no wonder that SDN is making inroads into once-closed networking infrastructures.

SDN and Extreme Networks

SDN, based on open standards, is a dynamic, cost-effective, manageable, and adaptable solution designed to deal with today’s dynamic and compute-intensive applications.

It allows network administrators to manage today’s complex networks by abstracting lower level functionality. This is accomplished by decoupling the network control and forwarding functions, allowing network control to be directly programmable and the underlying infrastructure to be abstracted for applications and network services.

Extreme Networks is an example of one of those innovative companies that have turned to flexible and open SDN solutions to meet many of the challenges described above. They have developed evolutionary, open SDN platforms that promote community-led innovation while avoiding solutions that only work with “greenfield” networking implementations or contain proprietary elements that promote vendor lock-in.

The Extreme SDN solution is an open platform based on current standards. It uses a comprehensive, hardened version of the OpenDaylight (ODL) controller that includes network management, network access control, application analytics, and wireless controller technology. This approach preserves the open API provided by ODL while extending data center network management, automation, and provisioning using a single pane of glass.

Extreme’s SDN controller not only works with Extreme legacy switches, but also with third-party switches and those that support the open source OpenFlow protocol. This approach allows customers to migrate their existing “brownfield” networks to Extreme SDN without expensive forklift upgrades. And, as an added bonus, Extreme has the well-earned reputation of providing the best service in the industry. It is also lending support to an extensive user and developer community.

In short, the Extreme SDN solution can be summed up as:

• Simple – Provides end-to-end network automation that simplifies deployment and management and lowers OPEX

• Fast – Provides all the speed and flexibility needed to provision the network for any application – including Big Data

• Smart – Offers investment protection by providing backward compatibility as well as the capacity to add new capabilities developed by Extreme to meet the demands of today’s applications

The Extreme SDN is part of the Extreme Networks Software Defined Architecture, which also includes:

• NetSight, a simple network management visibility solution

• Purview, which provides deep, application-level awareness for network and business analytics

• EXOS, a unified OS for all Extreme hardware that makes it easy for all products to work together, and to be managed and upgraded as required

SDN and the Enterprise

Enterprises deploying Extreme’s approach to SDN are working with a platform that is truly open, not proprietary; one that is based on current standards like the OpenDaylight controller and the open source OpenFlow protocol. It can be implemented by anyone on any equipment that accommodates the open source standards. Any standards-compliant third-party SDN app will run on Extreme equipment, so there is no need to become locked in to one vendor or to write unique SDN apps for each solution. Any equipment that meets the ODL standard is compatible with the Extreme SDN platform.

Also high up on the benefits list is Extreme’s broad partner and developer ecosystem, which includes more than 40 technical partners. The company is also fostering an online development forum and ecosystem to create new apps, including offering prizes (and IP rights) to encourage developers to innovate using the SDN platform.

Other benefits associated with the Extreme SDN platform include:

• Improved application performance, faster workload provisioning, and network optimization

• Enhanced security for network devices and data – in flight and at rest

• A single pane of glass that provides a simplified and consistent user experience and reduced OPEX

• Fine-grained, pervasive, and actionable visibility into the network in order to make fast and intelligent decisions

• Compatibility that supports brownfield, multi-vendor migration from on-premises to cloud solutions, yielding higher ROI

• Exceptional support from a trusted vendor with a long history of SDN platform development

Taming the Network Tiger

Given the rise of Big Data, along with the complex, high-speed networks supporting today’s IT infrastructures, SDN is a solution whose time has come.

Extreme Networks is at the forefront of providing SDN services, with more than four dozen solution partners and over 10 million SDN-ready ports shipped to customers around the world. The company is leading the transition to open, standards-based, and brownfield-capable SDN. The tiger is being tamed. •

[Figure: Extreme Networks’ Wireless Dashboard in action]