
White Paper

26601 Agoura Road, Calabasas, CA 91302 | Tel: 818.871.1800 | Fax: 818.871.1805 | www.ixiacom.com | 915-2950-01 Rev A August 2011

Testing the Cloud: Definitions, Requirements, and Solutions

Contents

Cloud Services
Data Center Trends
Virtual Desktop Infrastructure
Testing the Cloud
Security Testing
Conclusion


What is the cloud? The details are still evolving, but for most enterprises the cloud is a set of services, data, resources, and networks located “elsewhere.” This contrasts with the historical centralized data center model – where enterprises purchased, configured, deployed, and maintained their own servers, storage, networks, and infrastructures.

Cloud Services

The resources of the cloud, while owned and maintained by a “cloud service provider,” are often “borrowed” by the enterprise. There are three acknowledged types of service offerings:

Software-as-a-Service – examples include Salesforce.com, Google Apps, SAP, Taleo, WebEx, and Facebook. These are full-service applications accessed from anywhere on the Internet. These services are implemented through the use of distributed data centers.

Platform-as-a-Service – examples include Windows Azure, Google AppEngine, Force.com, Heroku, and Sun/Oracle. These are distributed development platforms used to create applications, web pages, and services that run in cloud environments.

Infrastructure-as-a-Service – as offered by VMware, Citrix, Dell, HP, IBM, Cisco, F5, Juniper, and others. These companies offer the building blocks of cloud services that are available through a number of cloud hosting services such as Amazon’s Elastic Computing Cloud (EC2). They include a virtualization layer, database, web, and application servers, firewalls, server load balancers, WAN optimizers, routers, and switches.


Figure 1: Three Primary Types of Cloud Service Providers


Why have major applications and web sites moved to the cloud? One of the biggest reasons is the widespread availability of broadband networks such as 10 Gigabit Ethernet (GE) that connect the enterprise with cloud providers’ sites. Broadband to the home has created an expectation of flawless delivery for bandwidth-hungry, high resolution content – which is better served by distributed cloud providers that own higher bandwidth connections and more storage. The use of a cloud-based infrastructure means there is no local infrastructure to purchase, manage, secure, or upgrade. Rather than attempting to estimate peak and growth data center usage, enterprises can adopt a pay-as-you-go structure, paying for only what they use.

Cloud elasticity, scalability, and performance are perhaps the most compelling reasons to adopt a cloud strategy. Computing, storage, and network resources are easily and quickly deployed using cloud providers – allowing an enterprise’s internal applications and external web sites to adapt elastically to demand. This elasticity also provides the means to scale to any size desired, to match performance requirements, and to ensure that customer SLAs are maintained and the end-user experience is unaffected during peak utilization.

Some cloud service providers offer access control and encryption services that enable the safe storage of sensitive company and customer data. Such services are often more secure than those available with local IT staff and facilities.

Data Center Trends

The key technological advance that makes cloud computing financially viable is server virtualization – the ability to run many virtual machines, each with its own resources, on a single, powerful host. Host systems typically use powerful processing blades that contain, or are attached to, substantial storage and network resources.

Running multiple applications and operating systems on a single server greatly increases server utilization, averaging-out server load. This, in turn, leads to a significant reduction in the number of servers that enterprises must purchase, deploy, operate, maintain, power, cool, and house.

Once applications are configured to run on virtual machines that share server blades, they achieve a level of portability that enables flexibility, scalability, and performance guarantees. Better network responsiveness is achieved when applications that regularly communicate with each other run on the same server.

Throughout the world, private companies and government agencies are increasing the rate at which they use server virtualization. As reported by Network World, Gartner expects the share of server workloads being run on virtualized servers to grow from 18 percent in October of 2009 to 28 percent in 2010, and to nearly 50 percent by 2012. Gartner also observes that 48% of enterprises deploy new servers on virtual machines by default.

Server virtualization is further enhanced by server I/O consolidation, which merges the two major data center networks, Ethernet and fibre channel, onto a single “lossless” Ethernet. Substantial savings result from the elimination of duplicate server interfaces.

Server Consolidation

Recently, the U.S. federal government announced one of the largest data center consolidation projects in history, in which more than 1,100 data centers across the United States will be consolidated into a smaller number of much larger data centers. Leading corporations such as HP and Intel are also undertaking massive data center consolidation. For example, HP is consolidating 85 data centers into 6, and Intel, 133 data centers to 8 new, highly-dense data centers.

Server consolidation enables more flexible and efficient allocation of server resources, and reduces the need for floor space, electricity, and cooling. Efficiency is particularly important in light of imminent government regulation. According to SPECpower, a server must be loaded at 60% or more to be declared “green.”

In the process of virtualization, the number of users associated with an application can increase from perhaps 200 on a single physical server to over 10,000 when running on a shared virtual host. Each processor in a virtualized server can run 64 or more virtual machines (VMs) – each VM potentially running a different application. Surveys conducted by IDC routinely report customers standardizing on consolidation ratios of 10 to 1 or more, with leading-edge users deploying 25, 30, or even 40 VMs per physical server.

With each increase of VMs on a server, there is a corresponding increase in I/O requirements. While one application per server might typically drive a few hundred megabits per second of I/O bandwidth, running multiple applications on a single server using virtualization technologies can generate tens of gigabits per second of data. That means that the network infrastructure must be upgraded to accommodate a much higher level of I/O performance.
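The arithmetic behind that claim can be sketched quickly; the per-VM bandwidth figure below is an assumption for illustration, not a measured value.

```python
# Back-of-the-envelope model of per-server I/O demand as VM density grows.

def server_io_gbps(vms_per_server: int, mbps_per_vm: float) -> float:
    """Aggregate I/O demand in Gbps for one virtualized host."""
    return vms_per_server * mbps_per_vm / 1000.0

# One application per physical server: a few hundred Mbps.
light = server_io_gbps(1, 300)     # 0.3 Gbps
# 40 VMs per host (the leading-edge consolidation ratio cited above):
# tens of Gbps, more than a single 10GE uplink can carry.
dense = server_io_gbps(40, 300)    # 12.0 Gbps
```

Even at a modest 300 Mbps per VM, dense consolidation pushes a host past a single 10GE link, which is the upgrade pressure the text describes.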

Network Convergence

Hand-in-hand with server consolidation is network convergence. As 10GE networks became affordable, it became evident that separate Ethernet and fibre channel networks were not necessary. To a lesser extent, high speed inter-processor communications (IPC) used for high performance cluster computing could be merged as well. Recent surveys have shown that cabling for a single rack can run as much as $10,000. Therefore, the savings can be quite substantial if a single network and cabling are used for multiple technologies and functions.

The savings start at the server, where the need for four or six interfaces used as redundant pairs for Ethernet, fibre channel, and IPC networks can be consolidated to a pair of converged network adapters (CNAs). Little or no software modifications are needed, as the CNA offers the same internal interfaces to client applications as the three separate interfaces. The CNA applies the necessary protocols to combine the networks. This requires a number of data center bridging (DCB) protocols, of which fibre channel over Ethernet (FCoE) is a key component.

Much of the complexity stems from the disparate networking principles between Ethernet and fibre channel infrastructures. Ethernet is a best-efforts network, allowing for dropped and lost packets. Fibre channel, on the other hand, is an inherently lossless network. To reconcile this difference, providers must implement lossless Ethernet features such as priority-based flow control (PFC) and data center bridging exchange (DCBX). In this way, delay-sensitive traffic is recognized and given priority over data traffic.


Figure 2: CNAs Consolidate Ethernet, Fibre Channel, and IPC Interfaces

A new type of unified switch, called a fibre channel forwarder (FCF), is needed in the data center. This switch serves as the bridge between converged networks, fibre channel, FCoE, and iSCSI devices. In this way, legacy fibre channel devices are accommodated side by side with Ethernet-based FCoE and iSCSI devices.



Figure 3: Unified NIC Combining Network Elements to Create Enhanced Ethernet Fabric

Virtual Desktop Infrastructure

The virtual desktop infrastructure (VDI) brings virtualization to the end-user’s computer, and is one of the fastest-growing cloud services. It is a new form of server-side virtualization in which a virtual machine on a server hosts a single virtualized desktop. VDI is a popular means of desktop virtualization as it provides a fully customized user desktop, while still maintaining the security and simplicity of centralized management. Leading vendors in this area are Microsoft, VMware, and Citrix, employing technologies such as:

PCoIP – allows all enterprise desktops to be centrally located and managed in the data center, while providing a robust user experience for remote users.

Remote desktop protocol (RDP) – a Microsoft proprietary protocol which is an extension of the ITU-T T.128 application sharing protocol allowing a user to graphically interface with another computer.

Citrix independent computing architecture (ICA) – a Citrix proprietary protocol that is a platform-independent means of exchanging data between servers and clients.

Network performance is a key factor here. As described in a posting on the Citrix Blog, Table 1 estimates the amount of bandwidth that might be used by XenDesktop users. It doesn’t require many users to hit tens of Mbps.

Activity                XenDesktop Bandwidth
Office                  43 Kbps
Internet                85 Kbps
Printing                573 Kbps
Flash Video             174 Kbps
Standard WMV Video      464 Kbps
High Definition WMV     1812 Kbps

Table 1: Estimates of Virtual Desktop Bandwidth Usage (source: Citrix blog)
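The table’s figures make it easy to see how quickly a user population reaches tens of Mbps. A rough aggregation sketch, using the per-activity estimates from Table 1 (the user mix itself is hypothetical):

```python
# Per-activity XenDesktop bandwidth estimates from Table 1 (Citrix blog), in Kbps.
KBPS = {
    "office": 43,
    "internet": 85,
    "printing": 573,
    "flash_video": 174,
    "std_wmv": 464,
    "hd_wmv": 1812,
}

def aggregate_mbps(user_counts: dict) -> float:
    """Sum per-activity bandwidth across a user population, in Mbps."""
    return sum(KBPS[activity] * n for activity, n in user_counts.items()) / 1000.0

# 100 office users plus 20 watching HD video already exceed 40 Mbps.
mix = {"office": 100, "hd_wmv": 20}
print(round(aggregate_mbps(mix), 1))  # 40.5
```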

Testing the Cloud

The virtualized data center, whether within the enterprise or located at a cloud service provider, must be properly provisioned in order to provide the necessary functions and performance of cloud-based applications. Testing of cloud services has some familiar aspects and some new challenges. Even though they will be used in a cloud environment, the basic components that populate the data center need to be tested for functionality, performance, and security. This is complemented with testing of the data center and end-to-end services.

At the network interconnectivity infrastructure level, testing must validate:

• Routers

• Switches, including fibre channel forwarders

• Application delivery platforms

• Voice over IP (VoIP) gateways

At the server and storage infrastructure level, testing must validate:

• Data center capacity

• Data center networks

• Storage systems

• Converged network adapters



At the virtualization level, testing must validate:

• Virtual hosts

• Video head ends

• VM instantiation and movement

At the security infrastructure level, testing must validate:

• Firewalls

• Intrusion Prevention Systems (IPS)

• VPN gateways

Network Interconnectivity Infrastructure Level

• Routers

• Switches, including fibre channel forwarders

• Application delivery platforms

• Voice over IP (VoIP) gateways

Each of the networking components used within the data center must be thoroughly tested for conformance to standards, functionality, interoperability, performance, and security before deployment. This type of testing is the bread and butter of network testing companies such as Ixia.

Ixia’s test solutions cover the wide range of data center network testing. Ixia’s chassis house up to 12 interface cards, which include Ethernet speeds from 10Mbps to 100Gbps; high-density solutions for 1Gbps and 10Gbps are available. Direct fibre channel interfaces are used for storage area network (SAN) testing. Each test port is backed by substantial special-purpose traffic generation hardware, and substantial compute power and memory.

Ixia test ports are programmed and used for specific areas of testing by Ixia test applications, principally:

• IxNetwork – tests routers and switches and other layer 2/3 network devices. Routers are tested through the use of emulation; an environment of tens to thousands of routers can be created and attached to the device under test (DUT). Both switches and routers are tested through line-rate, complex traffic on multiple ports. IxNetwork has special support for DCB protocols associated with FCoE switches and CNAs that bridge the gap between fibre channel storage arrays and FCoE/Ethernet networks.

• IxLoad – tests application-layer devices such as web servers and application delivery controllers. These devices are likewise tested through emulation of end-users of web, data, voice, and video services. Subscriber communities in the tens of thousands are emulated, generating large volumes of requests against Internet services. Web servers and other services are emulated as well – allowing tests of the routers, switches, and other devices that transmit and shape traffic. Application delivery controllers are an important component of the modern cloud data center. Using deep packet inspection, they look at every bit of application layer traffic in order to classify and prioritize traffic. Full, stateful emulation of end-user application usage is required to test them.

• IxLoad-Attack – is an add-on to IxLoad that tests the effectiveness of network security appliances and software security mechanisms. Security testing is covered in a separate section of this white paper.
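The closed-loop user-emulation model these tools embody can be illustrated with a minimal sketch; the subscriber count and think time are assumed figures, not Ixia parameters.

```python
# Toy model of closed-loop load emulation: N emulated subscribers, each
# pausing a "think time" between requests, determine the steady-state
# transaction rate the device under test must sustain.

def offered_tps(users: int, think_time_s: float) -> float:
    """Steady-state transactions per second from a closed-loop user population."""
    return users / think_time_s

# 50,000 emulated subscribers clicking every 10 seconds.
print(offered_tps(50_000, 10.0))  # 5000.0
```

This is the basic reason subscriber communities "in the tens of thousands" translate into thousands of requests per second against the DUT.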

Server and Storage Infrastructure Level

• Data center capacity

• Data center networks

• Storage systems

• Converged network adapters

These system-level components are critical to the basic operation of the data center. Their performance and capacity must be carefully measured to determine the overall capacity of the cloud data center. Performance measurements of these application services are communicated through key performance indicators (KPIs). Some of the KPIs for standard services are shown in the following table.


Traffic Type: Data Services
Applications: HTTP, FTP, e-mail, streaming video, VoIP
Key Performance Indicators: number of users; connections per second; transactions per second; number of concurrent connections; throughput; end-user QoE; server utilization

Traffic Type: Database Services
Applications: Internet, Oracle, SQL, MAPI
Key Performance Indicators: queries per second; transactions per second; throughput; server utilization

Traffic Type: Storage Services
Applications: printing, CIFS, NFS, iSCSI, SCSI, FCoE
Key Performance Indicators: read/write rate; I/O rate; transaction latency; throughput; server utilization

Table 2: Sample Key Performance Indicators

Quality of experience (QoE), delivering services that are perceived as error-free by the customer, is the key goal for multiplay network devices. The individual attributes that contribute to QoE differ by service type:

• Voice traffic consumes fairly low bandwidth, but is highly-sensitive to jitter.

• Video services require a steady stream of high-bandwidth traffic, and are severely impacted by packet reordering or loss.

• Data services are often delivered with best-efforts, and relatively insensitive to network impairments, but can require large amounts of bandwidth.
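The voice point above can be made concrete with an interarrival-jitter estimator in the style of RFC 3550, the metric most often used for voice QoE; the packet timings below are illustrative.

```python
# Running jitter estimate from packet arrival times vs. media timestamps (ms),
# using the 1/16 smoothing factor from RFC 3550.

def jitter_series(arrivals, timestamps):
    """Return the running jitter estimate after each packet past the first."""
    j = 0.0
    out = []
    for i in range(1, len(arrivals)):
        d = abs((arrivals[i] - arrivals[i - 1]) - (timestamps[i] - timestamps[i - 1]))
        j += (d - j) / 16.0
        out.append(j)
    return out

# Voice packets sent every 20 ms; the network delivers with small variations.
ts = [0, 20, 40, 60]
rx = [0, 21, 39, 62]
print([round(x, 3) for x in jitter_series(rx, ts)])
```

Even millisecond-scale delivery variation accumulates into a nonzero jitter estimate, which is why voice is the first service to suffer on a congested link.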

Emulation is likewise the key to testing system components, with IxLoad as the principal Ixia product used in these measurements.

Figure 4: IxLoad Simulates Voice, Video, and Data Services to Verify Performance

Prior to virtualization, direct Ethernet connections to data/voice/video servers could be made to measure their performance. When those services are virtualized, however, there’s no easy location to make the necessary connection.


Figure 5: Virtual Test Interfaces

An additional plug-in for IxLoad is now available to test within a virtual environment:

• IxLoad-VM – a virtualized implementation of an Ixia test port is used to emulate subscribers and services. The full scope of IxLoad’s abilities to emulate client requests and server responses are available from a VM.

The virtualized environment moves the power of performance measurement from the pre-deployment lab to the live network. Virtual environments can be created for the express purpose of measuring the performance of a live, configured network without requiring any downtime. A number of critical measurements are possible when using IxLoad-VM:


• Measurement of KPIs when clients and servers are located on the same physical host. This eliminates the overhead associated with external networks and allows measurement of virtual switch latency and throughput.

• Measurement of KPIs when clients and servers reside on different virtual hosts in the live data center. This permits direct measurement of the latency, throughput, and responsiveness of the data center’s network.

The performance of storage systems – whether fibre channel systems directly connected to servers’ host bus adapters, or systems connected through CNAs to FCoE switches – is critical to cloud performance. An additional IxLoad plug-in is designed for this purpose:

• IxLoad-I/O – a capability of IxLoad-VM that generates I/O requests to storage arrays. This provides storage system characterization without the overhead of application performance.
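A hypothetical reduction of raw I/O measurements into the storage KPIs named in Table 2 (I/O rate, transaction latency, throughput); the latency figures and block size are invented for illustration.

```python
# Summarize one measurement window of storage I/O into standard KPIs.

def storage_kpis(latencies_ms, bytes_per_io, window_s):
    """Return IOPS, average latency, and throughput for a measurement window."""
    ios = len(latencies_ms)
    return {
        "iops": ios / window_s,
        "avg_latency_ms": sum(latencies_ms) / ios,
        "throughput_mbps": ios * bytes_per_io * 8 / window_s / 1e6,
    }

# Four 4 KB reads completed over a 1-second window (illustrative numbers).
k = storage_kpis([0.5, 0.7, 0.6, 0.8], 4096, 1.0)
```

Characterizing the array this way, independent of any application on top of it, is the "without the overhead of application performance" point made above.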

Virtual desktop servers require a different technique. The operation of such servers is determined by the tasks performed by the virtual desktop clients. In order to test the performance and capacity of such servers, an enterprise-specific set of functions must be generated. This type of testing can be performed using:

• IxLoad-VDI – a scalable VDI solution that is used to assess service delivery network performance and server capacity. IxLoad-VDI interfaces with target platforms from VMware, Microsoft, and Citrix. A set of customizable workload scripts are used to emulate users. During the testing process, end-user transaction and server-side latencies are measured along with server performance metrics that validate capacity: free memory, CPU utilization, and I/O counters.

Virtualization Infrastructure Level

• Virtual hosts

• Video head ends

• VM instantiation and movement

The critical cloud infrastructure measurements are:

• Service availability

• Elasticity and scalability

• Application QoE

• Security and access control

Performance measurement is accomplished through the use of end-user and traffic emulation, which exercises applications and data center infrastructures. The KPIs for these four categories are shown in the following table.

Service Availability: fail-over time; data/storage replication switch-over time; uptime and QoE impact during VM migration

Elasticity and Scalability: fail-over time; data/storage replication switch-over time; uptime and QoE impact during VM migration

Application QoE: transaction latency; transaction rate and throughput; I/O rate and latency

Security and Access Control: protection against vulnerabilities; protection against DDoS attacks; data integrity; data leakage and confidentiality

Table 3: Primary KPI Measurements for Cloud Infrastructure

Two types of traffic are needed to measure these KPIs:

• End-user to application. This requires realistic, stateful emulation of end-users’ interactions with the applications running on virtualized servers. This type of traffic is known as north-south, as shown in Figure 6: traffic flows between the users (north) and the servers (south). Such testing can be accomplished with:

• IxLoad – using traffic that originates at a customer’s site or within the data center. Using Ixia hardware interfaces, any volume of traffic can be generated and measured.

• IxLoad-VM – using traffic that originates within virtual machines in the data center.

• IxLoad-VDI – using traffic that originates within virtual desktop servers in the data center.

• Server-to-server and server-to-storage. Servers inter-communicate for several reasons: when an application is implemented in multiple tiers and in the process of VM migration – moving virtual machines from one physical host to another. Server-to-storage traffic is used to access user and application data in a converged data center. This type of traffic is known as east-west and comprises 80% of all data center traffic. Such testing can be accomplished with IxLoad, IxLoad-VM, and IxLoad-VDI as mentioned above, and with:

• IxLoad-I/O – generating direct server to storage traffic from within a server VM.

Emulated east-west traffic, in particular, when used with a diverse set of applications can be very effective in measurement of server capacity and cloud infrastructure scalability. It can also be used to validate the data integrity of transactions. Each type of application and infrastructure traffic comes with its own KPIs.
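One way to make the north-south/east-west distinction concrete is to classify each flow by whether both endpoints sit inside the data center; the subnet below is an assumed stand-in for a data center’s address space.

```python
# Classify a flow as east-west (both endpoints in the data center) or
# north-south (one endpoint outside). Subnets are illustrative assumptions.
import ipaddress

DC_NETS = [ipaddress.ip_network("10.0.0.0/8")]

def flow_direction(src: str, dst: str) -> str:
    """Return 'east-west' or 'north-south' for a flow's endpoint pair."""
    inside = lambda ip: any(ipaddress.ip_address(ip) in net for net in DC_NETS)
    return "east-west" if inside(src) and inside(dst) else "north-south"

assert flow_direction("10.1.2.3", "10.4.5.6") == "east-west"       # server-to-server
assert flow_direction("203.0.113.9", "10.1.2.3") == "north-south"  # client-to-server
```

A split like this, applied to captured or emulated flows, is how the 80% east-west share cited above would be measured in practice.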



Figure 6: North-South and East-West Traffic Flows in a Data Center

Security Testing

Network security in a cloud environment is particularly important. Classical data centers can secure their facilities at the “front door” that connects them to the Internet or other corporate sites. Not so in a cloud environment. Each cloud computing and storage component can be located at a different physical location and connected over the Internet or private networks. Each of these connections is a potential security risk.

A number of dedicated security appliances are in widespread use, protecting enterprises and data centers worldwide. The culmination of the development of these devices is the unified threat management (UTM) system that encompasses the roles of firewalls, intrusion prevention systems, anti-virus, anti-spam, and data loss prevention.

Virtual security applications are becoming widespread in the cloud environment. These software-only, VM-aware implementations of security functions are distributed between components of cloud applications. They serve to protect each component from other traffic on shared networks and other VMs on virtualized servers.

Regardless of whether they are physical or virtual and where they are placed in the data center, security mechanisms must be tested thoroughly in three dimensions:

• Effectiveness – do the security mechanisms effectively defend against the attacks they were designed to prevent?

• Accuracy – do the security mechanisms avoid false positives?

• Performance – do the security mechanisms pass an acceptable amount of traffic?

The last category is extremely important. Security devices have a difficult job to do: watching all traffic on high speed links, inspecting for malware, fending off denial of service attacks, etc. They must be able to find and prevent attacks when processing large amounts of traffic. Likewise, they must pass an acceptable amount of “normal” traffic when under heavy attack. A security device that cannot prevent penetration when under full load is easily defeated. A security device that blocks critical business applications when under attack has effectively been defeated.

Testing of network security devices requires a number of techniques, which will be discussed in the next few sections:

• Known vulnerability testing

• Distributed denial of service

• Line-rate multiplay traffic

• Encrypted traffic

• Data leakage testing

Known Vulnerability Testing

Known vulnerability testing is the cornerstone of network security device testing. Attacks are mounted against security mechanisms by using a large database of known malware, intrusions, and other attacks. A number of organizations exist to maintain such lists. One leading resource is the U.S. National Vulnerability Database maintained by the National Institute of Standards and Technology (NIST). The Mitre Corporation provides access to this database, called CVE (Common Vulnerabilities and Exposures). As of May 2010, more than 42,000 vulnerabilities were listed, with more than 15 added on a daily basis.

Proper security testing requires that a number of known vulnerabilities be applied to security devices at a significant percentage of line-rate. The device under test (DUT) should properly reject all such attacks, while maintaining a reasonable rate of transmission of “good” communications.

In addition, known vulnerabilities must be applied using a wide variety of evasion techniques. The combination of thousands of known vulnerabilities and dozens of evasion techniques requires that a subset of all possibilities be used for testing. Test tools offer representative samples, including special cases for newly published vulnerabilities.
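The sampling idea can be sketched as follows; the CVE identifiers and evasion names here are placeholders, not a real vulnerability feed.

```python
# Assemble a representative attack set: the full cross-product of
# vulnerabilities x evasion techniques is too large to run exhaustively,
# so a test run deterministically samples it.
import itertools
import random

def sample_attacks(cves, evasions, budget, seed=0):
    """Sample `budget` (cve, evasion) pairs, reproducibly for a given seed."""
    space = list(itertools.product(cves, evasions))
    rng = random.Random(seed)
    return rng.sample(space, min(budget, len(space)))

cves = [f"CVE-2010-{n:04d}" for n in range(1, 101)]      # placeholder IDs
evasions = ["ip-frag", "tcp-seg", "url-encode", "chunked"]
run = sample_attacks(cves, evasions, budget=25)
```

Fixing the seed makes the run repeatable, so a regression in the DUT can be confirmed against the identical attack subset.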

Distributed Denial of Service

Denial of service attacks often use large numbers of computers that have been taken over by hackers. Those computers, called “zombies”, use dozens of attack techniques designed to overload network and security devices. This type of testing requires test equipment capable of simulating thousands of computers.

Devices must be tested to ensure that none of the denial of service attacks, singly or in combination, is able to disable the device. In addition, the ability of the DUT to accept new connections and provide an acceptable level of performance must be measured.

Line-Rate Multiplay Traffic

Not only must security devices fend off attacks, but they must pass non-malicious traffic at the same time. To ensure this, testing for defense against attacks must be done with a background of real-world multiplay traffic. That is, a mix of voice, video, data, and other services that constitute normal traffic should be applied to the DUT such that the sum of the malicious and normal traffic is the maximum for the device’s interfaces.

The QoE for each of the normal services must be measured to ensure that end users’ satisfaction will not be sacrificed. For example, VoIP requires very little bandwidth, but latency and jitter impairments are immediately heard by the human ear.

Encrypted Traffic

As enterprises move to connect their multiple sites and mobile and remote users together into a corporate virtual private network (VPN), data encryption is becoming increasingly important. Data encryption ensures both privacy and authentication of the sending party through the use of certificates and other techniques.

The process of establishing an encrypted link, and then subsequent encryption and decryption can be a significant load for a security device. It is essential that a realistic mix of encrypted traffic be mixed with clear traffic during performance testing. The rate at which encrypted connections can be established is particularly important, representing how quickly a network can resume normal operation after an outage.

Data Leakage Testing

Data leakage testing involves transmission of data from the ‘inside-out’ to determine if data loss prevention devices will detect the leakage of proscribed information. All outbound means must be tested, including e-mail, e-mail attachments, Web-based mail, Web form data, FTP, and IM.

Enterprises must create test cases for each of the rules, keywords, and policies that they use in the security device, including tests that should not be flagged. Network equipment manufacturers (NEMs) have a more difficult job—requiring a more extensive set of test cases that exercise each type of rule and policy, along with a sampling of keywords.
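A minimal sketch of the rule-exercising idea, with purely illustrative rules; a real DLP policy set is far richer, and the point of the test cases is to cover payloads that must trigger and payloads that must not.

```python
# Exercise hypothetical DLP rules with positive and negative payloads,
# so both false negatives and false positives are caught.
import re

RULES = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),          # U.S. SSN pattern
    "confidential": re.compile(r"\bconfidential\b", re.I),  # keyword rule
}

def check_rules(payload: str) -> set:
    """Return the names of DLP rules the outbound payload triggers."""
    return {name for name, rx in RULES.items() if rx.search(payload)}

assert check_rules("SSN 123-45-6789 attached") == {"ssn"}
assert check_rules("CONFIDENTIAL draft") == {"confidential"}
assert check_rules("quarterly newsletter") == set()   # must NOT be flagged
```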

Conclusion

Testing of cloud components and systems requires a variety of techniques – some standard and some new. The very thing that makes cloud computing so attractive, virtualization, poses an interesting challenge for network and application testing. Virtualization of the test tools themselves provides the key to testing the cloud.


This material is for informational purposes only and subject to change without notice. It describes Ixia's present plans to develop and make available to its customers certain products, features, and functionality. Ixia is only obligated to provide those deliverables specifically included in a written agreement between Ixia and the customer.

Ixia Worldwide Headquarters
26601 Agoura Rd., Calabasas, CA 91302
(Toll Free North America) 1.877.367.4942
(Outside North America) +1.818.871.1800
(Fax) 818.871.1805
www.ixiacom.com

Other Ixia Contacts
Info: [email protected]
Investor Relations: [email protected]