Green Networking
Jennifer Rexford, Computer Science Department, Princeton University

Transcript

  • Slide 1
  • Green Networking Jennifer Rexford Computer Science Department Princeton University
  • Slide 2
  • Router Energy Consumption
  • Slide 3
  • Internet Infrastructure (figure: routers and links)
  • Slide 4
  • Router Energy Consumption. Millions of routers in the U.S. consume several terawatt-hours per year, a ~$2B/year electric bill. Line cards draw ~100 W; whole routers draw 200-400 W. (Source: National Technical Information Service, Department of Commerce, 2000; figures for 2005 & 2010 are projections.)
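The scale of those numbers can be sanity-checked with a couple of lines of arithmetic. The router count and average power draw below are placeholder assumptions (the slide only says "millions of routers" and cites 200-400 W), not figures from the talk:

```python
def annual_energy_twh(num_routers: int, avg_watts: float) -> float:
    """Annual electricity use in terawatt-hours (TWh)."""
    hours_per_year = 24 * 365
    return num_routers * avg_watts * hours_per_year / 1e12  # W·h -> TWh

# Assumed: 2 million routers averaging 350 W each -> ~6 TWh/year,
# consistent with the slide's "several terawatt-hours per year".
print(f"{annual_energy_twh(2_000_000, 350):.1f} TWh/year")
```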
  • Slide 5
  • Opportunities to Save Energy. Networks are over-provisioned with extra capacity, and traffic shifts diurnally with user behavior.
  • Slide 6
  • Powering Down the Network. Equipment is not energy proportional: energy draw is nearly independent of load. So turn off parts of the network (an entire router, or an individual interface card) while avoiding transient disruptions, since data traffic relies on the underlying network and failures lead to transient packet loss and delay. Goal: shut down routers and interfaces without disruptions.
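The lack of energy proportionality can be made concrete with a toy power model; the idle and peak wattages are made-up illustrative numbers, not measurements:

```python
def router_watts(load_fraction: float, idle_w: float = 250.0,
                 peak_w: float = 300.0) -> float:
    """Toy model: power rises only slightly from idle to full load."""
    return idle_w + (peak_w - idle_w) * load_fraction

# At 10% load the router still draws 255 W of its 300 W peak, so running
# it lightly loaded saves almost nothing; only powering it off recovers
# the large idle draw.
```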
  • Slide 7
  • Brief Background on Routers
  • Slide 8
  • Router Architecture (figure: a control-plane processor atop a data plane of line cards and a switching fabric)
  • Slide 9
  • Data, Control, and Management. Time-scale: data plane, per-packet (nsec); control plane, per-event (10 msec to sec); management plane, human (min to hours). Tasks: data plane does forwarding, buffering, filtering, and scheduling; control plane does routing and signaling; management does analysis and configuration. Location: data plane in line-card hardware; control plane in router software; management in humans or scripts.
  • Slide 10
  • Data Plane: Router Line Cards. Interfacing: physical link and switching fabric. Packet handling: packet forwarding (lookup), decrement time-to-live, buffer management, link scheduling, packet filtering, rate limiting.
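The per-packet tasks on a line card (lookup, TTL handling, forwarding) can be sketched in a few lines; the forwarding table and function below are illustrative, not a real router's code:

```python
import ipaddress

# Hypothetical mini forwarding table: prefix -> outgoing interface.
FIB = {
    ipaddress.ip_network("10.0.0.0/8"): "eth1",
    ipaddress.ip_network("10.1.0.0/16"): "eth2",
    ipaddress.ip_network("0.0.0.0/0"): "eth0",   # default route
}

def forward(dst: str, ttl: int):
    """Longest-prefix match, then decrement the time-to-live."""
    if ttl <= 1:
        return None  # TTL expired: drop (a real router also sends ICMP Time Exceeded)
    addr = ipaddress.ip_address(dst)
    best = max((net for net in FIB if addr in net), key=lambda net: net.prefixlen)
    return FIB[best], ttl - 1
```

For example, `forward("10.1.2.3", 64)` picks the more specific /16 route over the /8, mirroring the longest-prefix-match lookup a line card performs in hardware.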
  • Slide 11
  • Control Plane: Routing Protocols. Routers talk amongst themselves to compute paths through the network. Routing convergence: after a topology change there is a transient period of disagreement; packets are lost, delayed, or delivered out-of-order, causing major disruptions to application performance.
  • Slide 12
  • The Rest of the Talk: Two Ideas. Power down networking equipment to reduce energy consumption while minimizing disruption to applications. (1) Power down a router: virtual router migration, similar to virtual machine migration. (2) Power down an interface: shutting down cables in a bundled link, similar to dynamic voltage and frequency scaling.
  • Slide 13
  • VROOM: Virtual ROuters On the Move. Joint work with Yi Wang, Eric Keller, Brian Biskeborn, and Kobus van der Merwe (AT&T). http://www.cs.princeton.edu/~jrex/papers/vroom08.pdf
  • Slide 14
  • Virtual ROuters On the Move. Key idea: routers should be free to roam around. Useful for many different applications: reduce power consumption, simplify network maintenance, simplify service deployment and evolution. Feasible in practice: no performance impact on data traffic, no visible impact on routing protocols.
  • Slide 15
  • The Two Notions of Router: IP-layer logical functionality, and the physical equipment (figure: logical/IP-layer nodes mapped onto physical nodes).
  • Slide 16
  • Tight Coupling of Physical & Logical: the root of many network-management challenges (and point solutions).
  • Slide 17
  • VROOM: Breaking the Coupling. Re-mapping a logical node to another physical node: VROOM enables this re-mapping of logical to physical through virtual router migration.
  • Slide 18
  • Case 1: Power Savings. Contract and expand the physical network according to the traffic volume.
  • Slide 21
  • Case 2: Planned Maintenance. NO reconfiguration of VRs, NO reconvergence (figure: VR-1 moves between physical routers A and B).
  • Slide 24
  • Case 3: Service Deployment/Evolution. Move the (logical) router to more powerful hardware.
  • Slide 25
  • Case 3: Service Deployment/Evolution. VROOM guarantees seamless service to existing customers during the migration.
  • Slide 26
  • Virtual Router Migration: Challenges. 1. Migrate an entire virtual router instance: all control-plane & data-plane processes and state.
  • Slide 27
  • Virtual Router Migration: Challenges. 1. Migrate an entire virtual router instance. 2. Minimize disruption: the data plane forwards millions of packets/sec on a 10 Gbps link; the control plane is less strict (routing messages are retransmitted).
  • Slide 28
  • Virtual Router Migration: Challenges. 1. Migrate an entire virtual router instance. 2. Minimize disruption. 3. Link migration.
  • Slide 30
  • VROOM Architecture (figure: dynamic interface binding and a data-plane hypervisor)
  • Slide 31
  • VROOM's Migration Process. Key idea: separate the migration of the control and data planes. 1. Migrate the control plane. 2. Clone the data plane. 3. Migrate the links.
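The three steps can be sketched as a toy orchestration; the class and function names below are hypothetical, invented for illustration rather than taken from the actual VROOM code:

```python
from dataclasses import dataclass, field

@dataclass
class PhysicalRouter:
    name: str
    rib: dict = field(default_factory=dict)  # control-plane routing state
    fib: dict = field(default_factory=dict)  # data-plane forwarding table

def migrate(src: PhysicalRouter, dst: PhysicalRouter, links: list) -> list:
    """Sketch of the three-phase migration of one virtual router."""
    # Phase 1: migrate the control plane (routing processes and state).
    dst.rib, src.rib = src.rib, {}
    # Phase 2: clone the data plane by repopulating dst's FIB from the
    # migrated RIB; src's old data plane keeps forwarding meanwhile.
    dst.fib = dict(dst.rib)
    # Phase 3: migrate links one at a time; both data planes are live.
    return [(link, dst.name) for link in links]
```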
  • Slide 32
  • Control-Plane Migration. Leverage virtual server migration techniques: router image (binaries, configuration files, etc.).
  • Slide 33
  • Control-Plane Migration. Leverage virtual server migration techniques: router image and memory. Memory moves in two stages: 1st stage, iterative pre-copy; 2nd stage, stall-and-copy (while the control plane is frozen).
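The two-stage memory copy can be illustrated with a small simulation; the page names and dirtying schedule are invented for illustration:

```python
def precopy_migrate(memory: dict, dirty_schedule: list, max_rounds: int = 30):
    """Stage 1: iterative pre-copy while the router runs (pages re-dirty);
    stage 2: stall-and-copy of whatever is still dirty once frozen."""
    copied = {}
    pending = set(memory)  # every page is dirty before the first round
    rounds = 0
    while pending and rounds < max_rounds:
        for page in pending:
            copied[page] = memory[page]  # copy current page contents
        # pages dirtied during this round must be sent again next round
        pending = set(dirty_schedule[rounds]) if rounds < len(dirty_schedule) else set()
        rounds += 1
    for page in pending:                 # stage 2: control plane frozen
        copied[page] = memory[page]
    return copied, rounds
```

The stall-and-copy stage is short when pre-copy converges, which is why the control plane need only be frozen briefly.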
  • Slide 34
  • Control-Plane Migration (figure: the control plane (CP) migrates from physical router A to physical router B; the data plane (DP) remains on A).
  • Slide 35
  • Data-Plane Cloning. Clone the data plane by repopulation: this enables migration across different data planes and avoids copying duplicate information (figure: the CP on router B repopulates DP-new while DP-old remains on router A).
  • Slide 36
  • Remote Control Plane. Data-plane cloning takes time (installing 250k routes takes over 20 seconds), and the control plane & old data plane need to be kept online. Solution: redirect routing messages through tunnels.
  • Slide 39
  • Double Data Planes. At the end of data-plane cloning, both data planes are ready to forward traffic.
  • Slide 40
  • Asynchronous Link Migration. With the double data planes, links can be migrated independently.
  • Slide 41
  • Prototype Implementation. Virtualized operating system: OpenVZ, which supports VM migration. Routing protocols: the Quagga software suite. Packet forwarding: Linux kernel (software) and NetFPGA (hardware). Router hypervisor: our extensions for repopulating the data plane, the remote control plane, and the double data planes.
  • Slide 42
  • Experimental Evaluation. Experiments in Emulab on a realistic Abilene (Internet2) topology.
  • Slide 43
  • Experimental Results. Data traffic: Linux shows modest packet delay due to CPU load; NetFPGA shows no packet loss or extra delay. Routing-protocol messages: for core-router migration (OSPF only), injecting an unplanned link failure at another router caused at most one retransmission of an OSPF message; for edge-router migration (OSPF + BGP), control-plane downtime was 3.56 seconds, within reasonable keep-alive timer intervals, and all routing-protocol adjacencies stayed up.
  • Slide 44
  • Where To Migrate. Physical constraints: latency (e.g., NYC to Washington, D.C.: 2 msec) and link capacity (enough remaining capacity for the extra traffic). Platform compatibility: routers from different vendors. Router capability: e.g., the number of access control lists (ACLs) supported. These constraints simplify the placement problem by limiting the size of the search space.
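Filtering candidate physical routers by these constraints is straightforward; the dictionary fields and thresholds below are made-up illustrations of the four constraint classes, not part of VROOM:

```python
def feasible_targets(candidates, extra_gbps, max_latency_ms, needed_acls):
    """Keep only physical routers satisfying every migration constraint."""
    return [
        c for c in candidates
        if c["latency_ms"] <= max_latency_ms  # physical: latency bound
        and c["spare_gbps"] >= extra_gbps     # link capacity for extra traffic
        and c["acl_slots"] >= needed_acls     # router capability (e.g., ACLs)
        and c["compatible"]                   # platform compatibility
    ]
```

Because each constraint prunes the candidate set, the remaining placement search runs over a much smaller space, matching the slide's observation.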
  • Slide 45
  • Conclusions on VROOM. VROOM is a useful network-management primitive: it breaks the tight coupling between the physical and the logical, simplifying management and enabling new applications. Evaluation of the prototype: no disruption in packet forwarding, and no noticeable disruption in routing protocols. Future work: migration scheduling as an optimization problem, and extensions to the hypervisor for other applications.
  • Slide 46
  • Greening Backbone Networks: Shutting Off C
