Network Evolution: Building the Infrastructure to Enable the Changing Face of IT
February 2012 / Vol. 3 / No. 1

Next-Generation Network Management Techniques
New tools and techniques are giving IT organizations better visibility and control over their networks.




Network Evolution e-zine • February 2012

In this issue: Idea Lab • Don't Bid Farewell to NetFlow • Q&A: DevOps for Networking? • Upgrading to 40 Gigabit Ethernet: Testing Required

By Rivka Gewirtz Little

Top 2011 Network Blogger Opinions: Do You Agree?
Everyone's got an opinion, but nobody expresses them quite like network bloggers. They spend their time poking holes in just about every new vendor technology. For networking pros, those opinions can be priceless, shedding light on which technologies are viable and which yield little return.

In 2011, our SearchNetworking.com Fast Packet bloggers tackled data center and cloud networking, virtualization management and the network, OpenFlow and software-defined networking, 10 GbE optimization and, of course, the sacred cow of networking: Cisco, with its Catalyst 6500 upgrade and its new Cius tablet.

Check out SearchNetworking.com's top network blogger opinions from 2011:

A Data Center Networking Standard That's Not So TRILLing
Last year we were supposed to bid farewell to the Spanning Tree Protocol and say hello to the flat data center network. To do that, some said we should use the TRILL protocol, a data center networking standard that allows Layer 3 routing among Layer 2 devices.

This would replace spanning tree and free up more Layer 2 paths, creating a friendlier environment for massive VM migration.

But Juniper Networks engineer and Fast Packet blogger Anjan Venkatramani takes issue with this theory. In his Fast Packet blog “Why TRILL won't work for data center network architecture,” Venkatramani said TRILL “ignores important trends, including the need for varying types of VLANs


http://searchnetworking.techtarget.com/news/2240036749/Why-TRILL-wont-work-for-data-center-network-architecture


for segmentation in cloud networks.” In fact, Venkatramani goes so far as to say that TRILL offers no answer for Layer 3 multipathing and actually fails at the multi-tenancy necessary for cloud networking.

Network Fabrics May Not Be the End-All
TRILL is just one example of how networking pros have gone data center network fabric crazy. Data center fabrics aim to enable better management of network traffic in a virtual environment, allowing engineers to manage multiple physical and virtual network components as one. So who wouldn't love a fabric?

OpenFlow developer and Nicira co-founder Martin Casado isn't necessarily a fan. In his Fast Packet blog “With edge software overlays, is network fabric just for raw bandwidth?” Casado said that network fabrics free up bandwidth, but added that “in a world of edge software overlays, it's possible that network fabrics don't necessarily need to be so feature rich.” Instead, he said, network fabrics should offer “dumb—but unified—bandwidth,” along with support for packet replication hardware, multicast management and QoS to avoid congestion. Other features, like VLAN isolation and support for mobility, should be left to edge software.

OpenFlow Controllers Will Only Take Shape if Vendors Play Nice
OpenFlow controllers and software-defined networking were the center of conversation this year in networking, and for good reason. Software-defined networking will eventually allow engineers to decouple the control plane of a network from the physical infrastructure, offering a real-time, holistic view of the network and the ability to control how network path flows are distributed to individual switches and routers. It will also let engineers spin up instances of virtual network components on command.

But there's a problem: physical network vendors have to support this movement, and it's unclear what their motivation will be to do so. After all, why support a new generation of software-based switches, load balancers and firewalls that rely only on commoditized hardware? Where's the profit?

OpenFlow developer Big Switch Networks told SearchNetworking.com editor Rivka Gewirtz Little, in her blog “OpenFlow controllers could change networking forever...or not,” that user demand for flexibility will force vendors to comply, but that remains to be seen.

Can Virtual Extensible LAN Solve the VLAN Problem?
VLANs have proven to be the toughest engineering challenge in data center and cloud networks, but Cisco and VMware say they have an answer in the new Virtual Extensible LAN (VXLAN) protocol.

With VXLAN, engineers can create thousands more VLANs that can

http://searchnetworking.techtarget.com/news/2240111907/With-edge-software-overlays-is-network-fabric-just-for-raw-bandwidth
http://searchnetworking.techtarget.com/news/2240110376/OpenFlow-controllers-could-change-networking-foreveror-not


stretch across geographically dispersed data centers. In a traditional setting, the IEEE 802.1Q VLAN specification provides only 4,094 VLAN identifiers, while a top-of-rack switch may be connected to more than 40 servers supporting multiple VMs. Using VXLAN, engineers can group VLANs together by application using a 24-bit identifier. In that case, as many as 16 million VNIs may be defined in any administrative domain, and each VNI may contain up to 4,094 VLANs.
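Those ID-space figures follow directly from the field widths; as a quick back-of-the-envelope check (arithmetic only, no vendor code):

```python
# 802.1Q carries a 12-bit VLAN ID; VXLAN carries a 24-bit VNI.
vlan_id_bits = 12
vni_bits = 24

usable_vlan_ids = 2**vlan_id_bits - 2  # IDs 0 and 4095 are reserved
total_vnis = 2**vni_bits

print(usable_vlan_ids)  # 4094
print(total_vnis)       # 16777216, the "as many as 16 million" above
```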

But Fast Packet blogger Ivan Pepelnjak said VXLAN poses its own set of problems. In his Fast Packet blog “VXLAN: Awesome or braindead?” Pepelnjak said there is still no method for VXLAN-encapsulated traffic to communicate with physical devices, such as switches, load balancers and firewalls.

vCloud and VEPA: Neither Answer Is a Winner
Lots of vendors say they have the answer for managing network traffic between virtual machines, but users don't seem to think there is one solid strategy.

VMware released vCloud Director, which implements switching, routing, firewalling, NAT and even DHCP servers within the VMware framework. Cisco took a stab at virtualization networking with its virtual switch, the Nexus 1000V. Other vendors are going with Virtual Ethernet Port Aggregators (VEPA), which pull all the traffic out of the hypervisor and let the first-hop switch handle all the necessary aspects of bridging, VLANs, routing, QoS and security.

According to Fast Packet blogger Ivan Pepelnjak, vCloud and Cisco's VN-Tag approach are both proprietary, which limits their use, and VEPA still needs hypervisor support. These limitations make all of the above solutions “half-baked.”

The Answer to the Top Five Virtualization Problems
No one knows virtualization issues like a network manager. So Fast Packet blogger Josh Stephens, a network management guru at SolarWinds, spent his year blogging about the top five virtualization problems and how to solve them.

Among his list of problems are virtualization backup and recovery, VM sprawl, virtual capacity planning, VM stall and building a private cloud.

But Stephens wins the SearchNetworking.com “Sunshine of the Year Award” because he never suggests that these problems are insurmountable. In fact, he insists that with the right network management strategies, they can all be, well, managed.

Need 10 GbE Optimization? Hardware Is Not the Solution
Just because 10 GbE promises speed, that doesn't mean it doesn't pose latency problems. Until now, the solution to latency was to add more bandwidth, which meant throwing hardware

http://searchnetworking.techtarget.com/news/1280091200/Virtual-Extensible-LAN-Awesome-or-braindead
http://searchnetworking.techtarget.co.uk/news/2240102262/Top-5-server-virtualisation-problems


at the problem. Fast Packet blogger Michael J. Martin said there's a better way to minimize 10 GbE latency.

In Martin's opinion, the latency problem can be solved by dealing with application performance, and the first step to that solution is properly documenting applications and their server environments. An application dictionary would document server/service dependencies and transactional processes, for example.
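As a sketch, one entry in such an application dictionary could be as simple as a structured record per application; every name below is hypothetical, not from Martin's post:

```python
# One invented entry: the servers an app runs on, the services it depends on,
# and its transactions, so a latency hunt starts from documented dependencies.
app_dictionary = {
    "order-entry": {
        "servers": ["app01", "app02"],
        "depends_on": ["db01:3306 (MySQL)", "auth01:389 (LDAP)"],
        "transactions": ["submit_order", "check_inventory"],
    },
}

entry = app_dictionary["order-entry"]
print(len(entry["depends_on"]))  # 2
```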

When Physical Security Breaches Are the Problem
Network engineers are always talking intrusion prevention and firewalling, but Fast Packet blogger Ethan Banks wonders if they've taken enough time to create a solid physical breach policy. In fact, he's guessing not.

In his blog “Are your physical security breach policies enough?” Banks warns about two policy-related problems: the lack of good rules and weak enforcement. Both are easily addressable, but only if network managers are willing to train staff and create processes specifically around physical security breaches.

Cisco Upgrades the Catalyst 6500...but Some Wonder Why
Cisco served up comfort food for the networking masses at Cisco Live in 2011, announcing a major upgrade to the Catalyst 6500. The upgrade included the Catalyst 6500 Series Supervisor Engine 2T, a 2-terabit card that nearly triples the throughput of the 6500 switch, from 720 Gbps to 2 Tbps, and adds virtualization segmentation.

It's not that Cat users weren't excited about the upgrade, but many wondered what this would mean for Cisco's Nexus switching line. Fast Packet blogger Greg Ferro criticized Cisco for pushing the Nexus switching line over the years and then turning back to a Catalyst upgrade. He said Cisco realized that selling Nexus would mean a full rip-and-replace of the chassis, which would lead customers to consider less expensive competitors.

Cius vs. the iPad: Why Bother Comparing?
Even though Cisco promised it would use this year to focus on its core business of network switching and routing, the company went ahead with the launch of its Cius tablet. Cisco frames the Cius as an enterprise applications tool that integrates virtual desktop infrastructure and unified communications, but Fast Packet blogger Greg Ferro said he'd rather have an iPad. After all, most enterprise apps are now available for the iPad, including network management and desktop virtualization.


http://searchnetworking.techtarget.com/feature/10-GbE-optimization-Hardware-alone-wont-help
http://searchnetworking.techtarget.com/news/2240084606/Are-your-physical-security-breach-policies-enough
http://searchnetworking.techtarget.com/news/2240079252/Catalyst-6500-Supervisor-2T-may-not-be-your-upgrade-answer
http://searchconsumerization.techtarget.com/feature/Cius-vs-iPad-Can-Cisco-take-on-Apple


A blog by Rivka Gewirtz Little

Kicking IT Shop Sexism in the Ass (Men Can Do It Too!)
A conversation in an online forum during the Women in Tech session at the Large Installation System Administration 2011 conference in Boston in December may have summed up the reason for ongoing sexism in IT shops.

User 1: “Are you going to the Women in Tech session?”
User 2: “No, I am not a woman.”

When this was read out loud to the mostly female audience, there was a collective groan. Why? By that point in the conversation, many women had agreed that men often make things uncomfortable for women in the workplace and don't even realize it's an issue.

But almost every woman in the workshop had a war story to share. One had been told she couldn't be a sys admin because she was too physically weak to carry equipment. Another feared backlash for taking maternity leave. One mentioned the all-time standard: cat calls on the job. Wow.

It was suggested that IT shop sexism may be inherent to “IT engineer culture.” Engineers, after all, are known for their snark, their need to outsmart each other, and what one woman dubbed “pub humor.” Turning that snark toward women may just be par for the course and shouldn't be taken personally, another suggested.

The problem with accepting this as part of the “culture” is that it implies women have to learn to live with it, and that men don't have to change. But that didn't sit well with most in the room.

“If we decide one of our organizational goals is to have more women and more diversity, but we want to be jerks, those are two irreconcilable goals,” said one panelist. Another added, “Pub humor should stay in the pub.”

The bottom line is that organizations must take specific steps to stem sexism in IT in order to diversify the workplace. They've got to do that by changing the way they talk to and about women. They must create meeting space that is conducive to equal sharing. And most importantly, they've got to offer mentoring to women, if for nothing else than to increase the dismally low number of women in IT organizations.


http://www.usenix.org/events/lisa11/index.html


A blog post by Shamus McGillicuddy

Expert Advice from Tom Kunath
When military radar causes fixed broadband wireless interference

Q: We established an RF link to connect two locations about 40 km apart with fixed broadband wireless. It was working fine for two weeks, then suddenly it started going out for two to three hours at a time and coming back on automatically. The ISP is telling us this is because of military radar operating in that area. If it is because of a radar issue, what are the possible solutions?

A: It is quite possible that your ISP has landed on the root cause of your problem. Military radar and commercial Wi-Fi systems have been known to interfere with each other, as the military C-band occupies frequencies in the range of 5290 to 5925 MHz, overlapping with the IEEE 802.11a wireless LAN allocations of 5150 to 5725 MHz. Likewise, the military E, F and G bands occupy frequencies from 2 to 6 GHz, which overlap with the radio frequencies allocated to many Fixed Broadband Wireless Access (FBWA) solutions such as the InfiNet Wireless InfiLINK products. The InfiLINK R5000-Omx, in particular, operates in the 2.3-2.6 and 4.9-6.4 GHz ranges, leaving the possibility of co-interference with military systems that occupy frequencies in this range.

The only way to be certain that interference with military radar is causing the problems is to scan the radio environment over an extended period of time to determine working frequencies so that collisions can be avoided. This can be accomplished with a dedicated spectrum analyzer, or by using the built-in spectrum analyzer that is included with most FBWA products, such as the InfiNet Wireless R5000-Omz.

To address co-interference concerns, the Dynamic Frequency Selection (DFS) specification was created to define a set of procedures to detect and avoid interference with radar systems operating in the 5 GHz range. If radar is detected, the Wi-Fi device must move off the channel it is operating on, and ideally notify associated stations which channel it will be moving to.
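As a toy illustration of that rule (the function, channel numbers and return values below are invented for this sketch, not taken from the DFS specification or any vendor implementation):

```python
# Toy model of the DFS behavior described above: if radar appears on the
# current channel, move to the first clear allowed channel and announce it.
def dfs_step(current, radar_channels, allowed_channels):
    if current not in radar_channels:
        return current, None  # no radar: keep operating, nothing to announce
    clear = [ch for ch in allowed_channels if ch not in radar_channels]
    new_channel = clear[0]  # real gear also honors per-channel wait timers
    return new_channel, "switching to channel %d" % new_channel

# Radar detected on channels 52 and 56; the link moves to channel 36.
channel, notice = dfs_step(52, {52, 56}, [36, 40, 52, 56, 100])
print(channel, notice)  # 36 switching to channel 36
```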

The following note from the technical discussion forum at http://forum.infinetwireless.com/ suggests that DFS must be enabled through configuration. I would suggest verifying that this has been done on all devices as a first step.

Radar detection is available with the following DFS configuration:

    dfs rf5.0 dfsradar
    dfs rf5.0 freq auto
    dfs rf5.0 cot 00:00

Tom Kunath, CCIE No. 1679, is a solutions architect in Cisco's Advanced Services Performance and Validation Test group.



Don't Bid Farewell to NetFlow

Now that Blue Cross Blue Shield of Minnesota relies on Software as a Service (SaaS) for core business processes like claims processing, network performance engineer Barry Pieper relies on deep packet analysis to tap inbound and outbound Internet traffic in order to ensure his providers are delivering on their service-level agreements (SLAs).

But it wouldn't be worth using costly deep packet inspection for all of his network monitoring needs, so Pieper still turns to good old-fashioned NetFlow analysis for a broader view of what's happening on the network.

Combined, Pieper uses a Network Instruments GigaStor appliance for packet capture, Compuware's Vantage network monitoring product (recently rebranded as Gomez Network Performance Monitoring) for analysis of that packet information, and Fluke Networks' OptiView NetFlow Tracker for NetFlow.

“I use NetFlow a lot on our wide area network mainly because it works so well there,” he said. “Our branch offices are T1 and T3 links, so we would do software distributions with Altiris or Tivoli and that would cause problems for people using our in-house applications or web apps. NetFlow could quickly tell us this was Altiris traffic, and we could find out if people were streaming radio and things like that.”

Packet analysis may provide a deeper look into the network, but NetFlow can offer a broader view. In fact, they work best together. By Shamus McGillicuddy



Pieper is not alone in including NetFlow in a next-generation combination of monitoring tools. NetFlow monitoring may not always get the respect it deserves from the network management community, but it can alert engineers to bandwidth hogs or anomalous behavior, and NetFlow v9 allows users to pull even more data from flow records.

What's more, while packet stream monitoring tools go deeper, allowing network engineers to dig into exactly what is happening across the wire, continuous packet monitoring and analysis is not cheap. Probes and taps are expensive, and storing the collected data can be pricey, particularly for larger companies that produce many terabytes a day. Therefore, most enterprises can typically only monitor packet streams in select, critical locations on the network, offering only a narrow view at a time when many enterprises are clamoring for more and more visibility.

“The amount of visibility organizations need to totally quantify how their applications and infrastructure are running continues to increase,” said Brad Reinboldt, senior product manager for network monitoring and analysis vendor Network Instruments. “There can never be too much information.”

For that reason, Reinboldt has seen increased use of NetFlow monitoring by his customers.

    “Based on what we talk to our customers about, 25% to 50% of them do at least some level of flow monitoring as part of their overall monitoring solution,” he said. “What flow technology can offer you is a broader perspective.”

NetFlow for Broader Visibility
For many network teams, NetFlow offers enough information to handle about 90% of their problems, and then they turn to deeper tools for the other 10%, said Jim Frey, research director for Enterprise Management Associates.

    “I have talked to a lot of folks who use packet instrumentation in important parts of their network. Then they use NetFlow to get a sense of what’s going on in remote sites,” said Frey.

Everett McArthur, a tier-three enterprise network support engineer at Texas Tech University Health Sciences Center, monitors his network with a combination of



NetFlow and packet monitoring. While his packet capture technology is instrumented to collect traffic in specific areas of the network, he can turn on NetFlow in any location at any time when he needs to troubleshoot something.

Recently, staff at a remote clinic 400 miles away from McArthur's Lubbock, Texas-area location complained of bandwidth saturation. McArthur turned on NetFlow on the remote clinic's router and pointed it at his nearest NetFlow collector.

“We found out very quickly that the inbound link to this clinic was receiving update traffic from Microsoft, but outbound it was saturated because the clinicians were all hitting a particular electronic medical records server,” he said.

“So we had two different problems. It was saturated one way from the updates being run and the other way by people dealing with medical records. We were able to make some decisions on what to do immediately about the issue, and then they increased their bandwidth for the long term. Without NetFlow, I would have had to go out with a portable analyzer and put a tap on the line.”
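On a classic Cisco IOS router, “turning on NetFlow and pointing it at a collector” looks roughly like the following; the interface name and collector address are placeholders, not details from the article:

```
! Capture flows on the WAN interface (interface and addresses are examples)
interface Serial0/0
 ip flow ingress
 ip flow egress
!
! Export version 9 flow records to the nearest collector
ip flow-export version 9
ip flow-export destination 192.0.2.10 2055
```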

Find Problems with NetFlow, Dig Deeper with Packet Analysis
Most of the engineers who use NetFlow get a lot of value out of it for higher-level monitoring, Frey said. “Then they use packet analysis for the difficult problems.”

At Integra Telecom, a network communications and cloud services provider based in Vancouver, Wash., network support manager Jeff Willard uses CA Technologies NetQoS NetFlow for visibility across his broader network, particularly at the peering transit edge, so that he can detect network threats coming from customer locations. To increase visibility, Willard is in the process of adding NetFlow at the aggregation points of his network too.

“That will allow us to have a better understanding of our customers' networks and their usage and improve our ability to assist them with any problems or issues they have.”

NetFlow is useful for adding context to better understand the information obtained through packet capture.

    “Having a raw pcap file to sort through with no idea of what you’re



looking for can be daunting,” Willard said. “Leveraging NetFlow data to give you a better understanding of what is traversing the link…gives you a frame of reference for where to look within a packet capture.

“Having NetFlow for the visibility and graphical representation of the network and using that for trending and alerting can shed light on hotspots or conditions that we need to investigate further. Then we can sniff the wire for traffic at this particular link or aggregation point.”
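That division of labor (flows point the way, packets prove the case) can be sketched in a few lines; the flow records below are invented for illustration:

```python
# Summarize invented NetFlow-style records to pick the conversation worth
# chasing in a full packet capture.
from collections import Counter

flows = [
    {"src": "10.1.1.5", "dst": "10.2.0.9", "dport": 443, "bytes": 9_400_000},
    {"src": "10.1.1.7", "dst": "10.2.0.9", "dport": 443, "bytes": 8_100_000},
    {"src": "10.1.1.5", "dst": "10.3.0.4", "dport": 80, "bytes": 120_000},
]

bytes_by_conversation = Counter()
for f in flows:
    bytes_by_conversation[(f["src"], f["dst"], f["dport"])] += f["bytes"]

# The top entry is the (src, dst, port) to filter on in the pcap.
top = bytes_by_conversation.most_common(1)[0]
print(top)  # (('10.1.1.5', '10.2.0.9', 443), 9400000)
```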

Integration of NetFlow and Packet-Monitoring Tools Needed

As network operations teams increasingly use both NetFlow and packet monitoring technologies together for broader visibility, they will need tools that can offer a common view of both sets of data, but there is no easy solution available.

“If you're trying to use a combined set of [packet capture and NetFlow] tools for monitoring, you need some method for bringing this data together in a common console. There is some work still to be done to bring these together.”

    McArthur of Texas Tech relies on Network Instruments for both his NetFlow and packet monitoring.

“Since it's the same interface, it makes it a lot easier to do our analysis,” McArthur said. “You're not having to relearn a different way of doing things.”

As more network engineers combine these methods, it's likely that a set of integration tools will emerge.




Q&A: DevOps for Networking?

There's a good chance you've never heard of DevOps, but if you're a network engineer you'll have to learn about it soon enough. DevOps, a composite of “development” and “operations,” is an IT industry movement driven by the software development community to integrate software development organizations with IT operations.

As with so many areas of IT innovation, the cloud is the catalyst for change when it comes to DevOps and the network. Since both system administration and development can now take place in the cloud, the future will require developers to know system administration and system administrators to know programming.

In fact, Steve Shah, director of product management at Citrix Systems, sees DevOps as a new wave in system administration. Five years from now, the new systems administrator will be programming APIs to replace the old-school tasks of managing physical infrastructure, he said. Networks will be a part of that, and network engineers will find themselves in meetings with DevOps teams asking them about the compatibility of their infrastructure with this new technology.

DevOps for Networking? Software Teams Lead Data Center Orchestration
System admins and software developers are tying applications and infrastructure together for data center network automation. By Lisa Sampson



    In this Q&A, Shah offers some background on DevOps:

Why do networking pros need to know about DevOps?
Let me start by providing a little bit of context about how we all came into the DevOps ecosystem. Citrix has a special kind of reverse proxy server that is designed to make web applications go faster. It is used as a platform by the largest websites in the world, eBay and Amazon [Web Services (AWS)].

As traffic flows into the web-based application, we're able to do a lot of optimization on that traffic. [This includes] everything from cleaning up TCP/IP and speeding up processing to actually changing the way that applications are accessed, so that there's quality-based access and controls for designating locations for that data. An image, for example, might be moved to one set of servers, while an application [could be sent] to a better set of servers. Managing all of this cost-effectively could cause network problems, not to mention financial and managerial issues.

    Where did the DevOps movement come from?
    System administrators started looking at things like automation when they found themselves dealing with tens of thousands of servers as early as 2003-04. They would come to us and say, 'Someone has written a script that will change configuration, and a bunch of software engineers changed some of the commands we were using to automate.' So they'd have to go back, change commands, adjust how it worked and so on. Around 2005, we started developing APIs for SILK-based access, and XML was all the rage.

    Citrix provided a SILK-based interface that people liked because it didn’t change. Even if the way that information was displayed did change, you could still count on the configuration. From the developer’s perspective, changing configuration on a version-by-version basis was cumbersome, so each API did a nice job of managing that problem.

    Secondly, programmatically accessing data made network automation a lot cleaner. That kind of developed into a loop. And what made it really click for a lot of people, and what became the genesis of DevOps, is that many of the early admins in this space were network administrators who knew cables, routers and switches, and a lot of them knew how to do basic-level programming. They started turning to these tools to help them churn through their network issues. Take somebody who has a hundred infrastructure devices out there just doing network load balancing and who wants to aggregate that data: What were all these systems doing at a given moment, and how would servers react as a result? They would poll all of these devices for information and then churn through that data with a script, written in a language like Perl, to reach a logical conclusion and come up with a solution to a problem. That way, managers could report back to say, for example, 'Based on what I've learned, you need to change your policy this way. I want you to redistribute the load in the network according to this new information.'

    That really drove the start of the idea that we shouldn't be using general system management tools, that we should be using programming languages as our primary interface for managing our infrastructure.
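The poll-aggregate-act loop Shah describes can be sketched in a few lines of Python. Everything here is hypothetical: the device records stand in for what a load balancer's management API might return, and the field names and the 50% threshold are invented for illustration.

```python
# Sketch of the polling-and-aggregation loop described above. In a real
# deployment, device_stats would be built by polling each device's API.

def aggregate_load(device_stats):
    """Summarize per-device load records into fleet-wide totals."""
    total = sum(d["active_connections"] for d in device_stats)
    busiest = max(device_stats, key=lambda d: d["active_connections"])
    return {"total_connections": total, "busiest_device": busiest["name"]}

def rebalance_advice(device_stats, threshold=0.5):
    """Flag any device carrying more than `threshold` of all connections."""
    summary = aggregate_load(device_stats)
    top = max(d["active_connections"] for d in device_stats)
    if top / summary["total_connections"] > threshold:
        return f"redistribute load away from {summary['busiest_device']}"
    return "load distribution acceptable"

stats = [
    {"name": "lb-01", "active_connections": 900},
    {"name": "lb-02", "active_connections": 100},
]
print(rebalance_advice(stats))  # lb-01 carries 90% of the connections
```

The output of such a loop is exactly the kind of "change your policy this way" report Shah mentions.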

    This idea has created a whole new kind of system administration. Where system administrators before were valued for their expertise with devices and infrastructure, the DevOps administrator is valued for his programming skills and his ability to understand infrastructure. In some places, it has almost become the case that the ability to understand infrastructure is secondary to the programming ability. So with all of that motion happening, the next phase of DevOps was formalized around the availability of RESTful (Representational State Transfer) interfaces.

    Why did they feel the need to create new interfaces?
    As it turned out, more people started using automation, and the deeper they got into it, they found that even though automation was great and they were getting nice, well-structured data back, a simple piece of data generated a lot of bulky XML as input and more bulky XML as output. There was a smarter way to do that. Web developers started using JSON (JavaScript Object Notation) because JavaScript programmers wanted to put a call in and get back a block of data that was literally parsed by the browser's own JavaScript engine. They didn't even have to do any additional parsing. It paved the way for people on the server side, like my engineers, who were having to write interfaces and support them. They could use the URL input or output to see the needed data using Perl. A simplified URL was easier to write, and tools were a lot more readily available, so that really broadened the pool of available programmers. That's the kind of shift that has to happen for DevOps to become a much bigger movement.
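A small illustration of the payload-bulk point above, using only Python's standard library: the same record serialized as JSON and as XML. The record's field names are made up for the example.

```python
# The same data, serialized both ways. JSON parses in one call straight
# into a native dict; XML is bulkier and its values come back as strings.
import json
import xml.etree.ElementTree as ET

record = {"device": "lb-01", "state": "up", "connections": 412}

# JSON round trip: one call each way.
payload = json.dumps(record)
parsed = json.loads(payload)

# Equivalent XML document for the same record.
root = ET.Element("record")
for key, value in record.items():
    ET.SubElement(root, key).text = str(value)
xml_payload = ET.tostring(root, encoding="unicode")

print(len(payload), len(xml_payload))  # the JSON payload is the shorter one
```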

    How will network engineers work within a DevOps movement?
    First and foremost, you've got to get comfortable with writing scripts and automating basic tasks, even before you get fancy and talk about a lot of automation. That capability has always been there, but not many network engineers can manage it, so for adoption to really become commonplace, you want to get into DevOps, understand it and leverage it. Once you've gotten that foundation, you want to get comfortable with the interfaces your devices offer you. A lot of the companies that offer APIs are much like Citrix; we've been doing it for a couple of years already. The APIs are mature and documented, so you pick up the documentation and start. It's easier than it's ever been.

    From there, [rather than] go through how to add a piece of configuration, change a policy and things like that...I need to know my end-to-end workflow. I need to go and get, for example, two racks of servers up and running. And then I can really start to see the advantage of scripting. So I turn on the server and make sure it's responding to me before I put it into the traffic flow. By the time you're done, you might have a monster amount of code, but you're able to replicate it across the data center, so the time it took to write that script might have been the same amount of time it takes to roll out one rack of equipment. But now, out of the 10 to 20 racks I had to deploy, I can take that one rack's script and roll it out over and over again, and it takes minutes.
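The rack-rollout idea above can be sketched as a script that defines the workflow once and replays it per rack. The step names and rack inventory are invented; a real script would call device and server APIs at each step rather than append to a log.

```python
# Toy version of the rollout workflow: write it once, replay it per rack.

PROVISION_STEPS = [
    "power on servers",
    "verify each server responds",
    "apply switch port configuration",
    "register servers with the load balancer",
    "admit rack into traffic flow",
]

def provision_rack(rack_id, log):
    """Run every provisioning step for one rack, recording what was done."""
    for step in PROVISION_STEPS:
        # A real script would stop and alert on a failed step here.
        log.append(f"rack {rack_id}: {step}")

def provision_datacenter(rack_ids):
    """Replay the same workflow across every rack in the inventory."""
    log = []
    for rack_id in rack_ids:
        provision_rack(rack_id, log)
    return log

log = provision_datacenter(range(1, 4))
print(len(log))  # 3 racks x 5 steps = 15 actions
```

The time spent writing the workflow is paid once; every additional rack costs only the replay.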

    Do networking professionals need to open up their infrastructure to be manipulated by these scripting technologies?
    The short answer is yes. However, the devil's in the details: if you want role-based access, controls are as important as ever. Like any process that gets defined in a data center, you will write down steps. Make sure people follow those steps over and over again. You want someone looking over your shoulder to make sure the process works correctly and has no unintended side effects. Then, when you execute, you really have to leverage the API, and before you even learn how to use the API you should be asking: Who has access, and what can they do?
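A minimal sketch of the role-based control described above, with invented roles and operation names. The point is simply that the "who has access, what can they do" question is answered in code before any API call runs.

```python
# Hypothetical role-to-permission map checked ahead of every API call.

ROLE_PERMISSIONS = {
    "viewer": {"read_config"},
    "operator": {"read_config", "change_policy"},
    "admin": {"read_config", "change_policy", "redistribute_load"},
}

def authorize(role, operation):
    """Return True only if the role is allowed to perform the operation."""
    return operation in ROLE_PERMISSIONS.get(role, set())

print(authorize("operator", "change_policy"))    # True
print(authorize("viewer", "redistribute_load"))  # False
```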

    Do you think that networking vendors are going to be introducing new products into the market that will support DevOps?
    Absolutely. If you look at the broader motion of what's happening in networking, the big topic in all of this is fabric networking. Where you want to go with this is optimized routers and switches that make the network less complicated, so you have one big, flat network, though you still have a bunch of other problems related to virtualization. To play that out a couple of years forward, one of the interesting technologies you start seeing is called OpenFlow. It's all about being able to apply programmatic controls to all the traffic going through the network. OpenFlow really puts DevOps at the center of how networks get managed. Search around OpenFlow and you'll see links to all the products that have been created around it; it helps you automate all the scripts that you'll use and so forth. OpenFlow is still an extremely nascent technology. The fabric movement as a whole is still nascent, but it's happening. Application delivery controllers are transitioning to be integral to how the fabric operates; it's a key part of how we see networking evolve over time, with DevOps automating policy in the network.


    Upgrading to 40 Gigabit Ethernet—Testing Required
    When it comes to a 40 GbE upgrade, there's a lot to consider for testing, including the needs of an FCoE environment. By David Jacobs

    Upgrading from 10 GbE data center links to 40 GbE is the obvious answer to handling the ever-increasing network traffic that results from improved processor performance and virtualization. But faster backbone links alone will not necessarily improve performance. Upgrading to 40 GbE will require a new level of testing that takes a number of elements into consideration.

    Before you upgrade, some performance problems to consider

    Since the IEEE ratified 802.3ba for 40 and 100 GbE, networking vendors have launched a range of products aimed at improving link speed. Specifically, they hope to address the bottlenecks on 10 GbE links between racks caused by engineers replacing 1 GbE server network interfaces with 10 GbE interfaces, sending 10 GbE of traffic to the top of the rack from each server.

    Before upgrading backbone links, there are a host of other problem areas and influencing factors to consider:

    Backplane: In some cases, even though vendors have released higher-capacity interface cards, switch and router backplane throughput may not be sufficient to support all of the upgraded cards operating at full capacity. It's important to review product specifications carefully and consult independent product test results before simply replacing interface cards.

    The good news is that testing in this new environment is simplified by the fact that while data rates have increased, nothing affecting higher protocol layers has changed. All of the familiar switching and routing technologies continue to operate as in the past, with no extensive reconfiguration needed.
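The backplane caveat can be sanity-checked with simple arithmetic: compare what the upgraded cards can offer at full rate against what the chassis backplane can switch. The figures below are illustrative, not taken from any product datasheet.

```python
# Back-of-the-envelope oversubscription check for a chassis upgrade.

def oversubscription_ratio(cards, ports_per_card, port_gbps, backplane_gbps):
    """Total card capacity divided by backplane capacity (>1 = oversubscribed)."""
    offered = cards * ports_per_card * port_gbps
    return offered / backplane_gbps

# Eight cards of four 40 GbE ports against a 960 Gb/s backplane:
ratio = oversubscription_ratio(cards=8, ports_per_card=4,
                               port_gbps=40, backplane_gbps=960)
print(ratio)  # 1280/960: the cards can offer a third more than the backplane carries
```

A ratio above 1 means the cards can, in aggregate, offer more traffic than the backplane can switch, which is exactly the condition load testing should be designed to expose.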

    Upgrading to a higher-speed backbone may reveal problems elsewhere in the network. After installing the new hardware, perform a series of network tests to determine whether the maximum performance improvement has been achieved.

    Simultaneously upgrading to 10 GbE server interfaces and a 40 GbE backbone: It's best to upgrade the backbone links first, and then upgrade server interfaces after verifying that the backbone operates successfully. Finally, after upgrading server interfaces to enable higher per-server throughput, verify that the server virtual switches have not become a bottleneck.

    Upgrading network performance testing equipment, too: Many existing testing and monitoring products can't sufficiently handle the higher data rate. Products that generate load and monitor performance in 40/100 GbE environments are available from test equipment vendors such as Spirent Communications and Ixia.

    OK, you’ve upgraded. What do you need for testing?

    End-to-end throughput testing: This focuses on the primary reason for upgrading—the need to move greater volumes of data from server to server.

    Tests should include cases where traffic passes through a single backbone switch and where traffic passes through multiple switches.

    Flows from multiple servers must be generated, using either hardware load simulators or software running within virtual machines, to simulate operational patterns.

    Testing scenarios should include cases where load generators send streams of very short packets, as well as those with full-length packets and packet bursts. Use network monitoring hardware and software to spot locations where bottlenecks result in long packet queues or dropped packets.
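One way to organize those scenarios is as a table of frame sizes and rates, with the offered load of each stream computed up front so the generator can be driven toward the 40 GbE limit. A sketch with illustrative rates; 64 and 1518 bytes are the standard Ethernet minimum and maximum frame sizes.

```python
# Build load-generation scenarios and compute each stream's offered load.

def offered_load_gbps(frame_bytes, frames_per_sec):
    """Offered load of a packet stream, in gigabits per second."""
    return frame_bytes * 8 * frames_per_sec / 1e9

scenarios = [
    {"name": "short packets", "frame_bytes": 64, "frames_per_sec": 10_000_000},
    {"name": "full-length packets", "frame_bytes": 1518, "frames_per_sec": 1_000_000},
]

for s in scenarios:
    s["gbps"] = offered_load_gbps(s["frame_bytes"], s["frames_per_sec"])
    print(f'{s["name"]}: {s["gbps"]:.2f} Gb/s')
```

Stepping the per-scenario rates upward until queues grow or packets drop locates the bottleneck; note this counts payload bits only, ignoring preamble and inter-frame gap overhead.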


    It’s also important to test sensitive applications to make sure increased network throughput has not caused problems. For example, cluster heartbeat packets may arrive out of order after traveling through upgraded top-of-rack and backbone switches.

    TCP uses congestion-control algorithms to control and limit the rate at which it transmits packets. It may therefore be necessary to modify parameters, such as the send and receive window sizes, to achieve the full throughput the network supports.
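The window-size point reduces to the bandwidth-delay product: the TCP window must cover bandwidth times round-trip time, or throughput stalls while the sender waits for acknowledgments. A quick calculation for an illustrative 40 Gb/s path:

```python
# Bandwidth-delay product: the minimum TCP window needed to keep a path full.

def required_window_bytes(bandwidth_gbps, rtt_ms):
    """Bytes in flight needed to saturate a path of this bandwidth and RTT."""
    bits_in_flight = bandwidth_gbps * 1e9 * (rtt_ms / 1000)
    return int(bits_in_flight / 8)

# 40 Gb/s at a 0.5 ms data center round-trip time:
print(required_window_bytes(40, 0.5))  # 2,500,000 bytes, i.e. a 2.5 MB window
```

Default socket buffer sizes are often far smaller than this, which is why the article's advice to tune send and receive windows matters at 40 GbE speeds.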

    However, because each network and pattern of usage is different, it’s not possible to specify the series of tests required for a specific network. Since it is not feasible to test every possible combination of data flow that may occur in the operational network or the path taken by each flow, it’s advisable to create a more general set of tests with a large number of packet flows between racks.

    Testing for jitter: Variation in packet arrival rates can cause unacceptable pauses in voice and video transmissions. While other simulated applications place load on the network, it’s best to use hardware test generators to create streams of equally spaced packets. Measure the variation in arrival time to determine whether jitter remains within acceptable limits. If excessive jitter is detected, it may be necessary to modify VLAN priority configurations for packets that require constant delivery rates.
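Given arrival timestamps from such a test, the jitter computation itself is simple: compare each inter-arrival gap against the nominal sending interval. A sketch with illustrative timestamps; this takes the worst-case deviation, though other definitions (such as RFC 3550's smoothed estimator) are also common.

```python
# Worst-case jitter of a stream of packets sent at a fixed interval.

def jitter_ms(arrival_times_ms, nominal_interval_ms):
    """Largest deviation of any inter-arrival gap from the nominal interval."""
    gaps = [b - a for a, b in zip(arrival_times_ms, arrival_times_ms[1:])]
    return max(abs(g - nominal_interval_ms) for g in gaps)

# Packets sent every 20 ms; one arrives 3 ms late:
arrivals = [0.0, 20.1, 40.0, 63.0, 80.0]
print(jitter_ms(arrivals, 20.0))  # 3.0
```

Comparing this figure against the application's tolerance decides whether the VLAN priority changes the article mentions are needed.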

    Fibre Channel over Ethernet: This type of convergence requires zero packet loss. Disk traffic can be expected to grow along with network load as application performance increases. Verify that priority settings determined for lower-bandwidth network hardware are still appropriate for the upgraded network.

    Cable testing should not be the issue in a 40 GbE upgrade: Cable testing is not an issue in a 40 GbE upgrade, since the cables used for 10 GbE links must be replaced. Earlier Ethernet upgrades required testing existing cables to determine whether they could support the higher rate.

    Cables supporting 40 GbE consist of ten individual fiber strands, because vendors have not yet been able to transmit 40 GbE over a single fiber-optic strand. As a result, traffic in each direction is split across four strands, with each strand carrying 10 GbE. Two strands in the cable are unused. Cable replacement must therefore accompany the change in switch and router interfaces, and the integrity of the new cables should have been tested when they were installed.

    While the primary goal of testing is to verify that the maximum benefit was gained from upgrading network links, the time and effort expended can deliver a second benefit: tests done now can help estimate how soon the next upgrade will be required. By continuing to increase generated traffic levels beyond current or near-future expected operational levels, it is possible to determine the limits of the 40 GbE backbone. Undoubtedly, an upgrade to 100 GbE will be necessary at some future point.

    David B. Jacobs, founder of The Jacobs Group, has more than 20 years of networking industry experience. He has managed leading-edge software development projects and consulted for Fortune 500 companies as well as software startups.


    Network Evolution e-zine is produced by TechTarget Networking Media.

    Rivka Gewirtz Little, Executive Editor
    Shamus McGillicuddy, Director of News and Features
    Lisa Sampson, Feature Writer
    Kara Gattine, Senior Managing Editor
    Linda Koury, Director of Online Design
    Kate Gerwig, Editorial Director
    Tom Click, Senior Director of Sales, [email protected]

    TechTarget, 275 Grove Street, Newton, MA 02466
    www.techtarget.com

    © 2012 TechTarget Inc. No part of this publication may be transmitted or reproduced in any form or by any means without written permission from the publisher. TechTarget reprints are available through The YGS Group.

    About TechTarget: TechTarget publishes media for information technology professionals. More than 100 focused websites enable quick access to a deep store of news, advice and analysis about the technologies, products and processes crucial to your job. Our live and virtual events give you direct access to independent expert commentary and advice. At IT Knowledge Exchange, our social community, you can get advice and share solutions with peers and experts.
