
About Switches


Source: http://searchstorage.techtarget.com/news/1000886/SAN-zoning-resource-guide

SAN topologies, part 1: Know your switch

Most switch vendors sell switches with 8, 16, 32 and 64 ports. Sometimes you'll find other mid-size switches -- 24 or 48 ports -- and other times you'll find larger switches -- 128, 140 and 256 ports. So let's start by making some assumptions and posing some questions.

Assumptions

In most cases, the switch manufacturer makes small switches (32 ports and below) and large switches (64 ports and above). Small switches cost less per port than large switches; it is harder to design a big switch than a small one. Keep in mind that you lose ports when connecting switches together. Most SAN switches today are 2 Gbit/sec.

Questions

How large a switch makes sense? How large a SAN can or should I build from small switches? For a small SAN, should I use a single large switch rather than a collection of small switches? For a large SAN, why would I use a core-edge design rather than core switches only?

Many different topologies can be used for interconnecting switches. For this discussion, let's assume we have a SAN contained within one data center.

How big a switch?

Remember that SAN stands for storage area networking. If SANs make sense, then bigger SANs (up to a point) make more sense. Therefore, no matter what size box we buy, at some point we will start networking them together. This poses a challenge: a fatter pipe is needed to connect large SAN boxes.

Let's do some math. One basic assumption when designing SANs is that we are consolidating storage: we are sharing a disk array among multiple servers, and sharing each port of the disk array among multiple servers. Typically, three to six servers per disk-array port is considered reasonable. For most servers, I/Os matter more than bandwidth, and it is not often we exceed 20% to 30% of the bandwidth on a server HBA.

When connecting 1G switches together, the 8-, 16- and 32-port switches can be readily networked without hitting performance problems and without having to worry about which devices are talking to each other. When we get to 1G 64-port switches, it becomes very hard to design a SAN with significant amounts of data moving switch-to-switch. So I would suggest that at 1G, no more than 32 ports makes sense.

What about 2G switches? We have to start from a simple fact: a server with a 2G HBA does not give me twice the bandwidth of a 1G HBA. It will be a bit faster and deliver more I/Os per second, but that can be attributed to the HBA being newer and more intelligent. Having made this assumption, we find that we can interconnect switches of 64 and maybe 128 ports in a reasonable fashion using 2G ISLs, particularly if we have good load balancing or trunking. However, even at 128 ports, we have to start thinking about localizing traffic. Any larger and life gets very difficult.

Why not make my switches even bigger? Things start to get really tough. The bigger the switch, the more it will cost per port, assuming it is a true single switch with non-congesting performance for any-to-any port connectivity with all ports running at full speed. You can certainly argue that we do not need such a switch design. After all, servers do not and cannot actually use all the bandwidth of 1G, let alone 2G, ports. By definition, we are over-subscribing connections on the disk arrays, and so on. If you look at the IP network world, we know not all switches are equal. We choose whether or not to pay more for a switch that handles more I/Os per second.
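To put rough numbers on the fan-in and ISL arithmetic above, here is a minimal Python sketch. The port counts, utilization figures and helper names are illustrative assumptions drawn from the text, not measurements or vendor guidance.

```python
import math

def array_port_load(servers_per_port: int, hba_gbps: float,
                    hba_utilization: float) -> float:
    """Offered load (Gbit/s) on one disk-array port from its servers."""
    return servers_per_port * hba_gbps * hba_utilization

def isls_needed(cross_traffic_gbps: float, isl_gbps: float) -> int:
    """Minimum number of ISLs needed to carry a switch-to-switch load."""
    return math.ceil(cross_traffic_gbps / isl_gbps)

# Six servers with 1G HBAs at 30% utilization sharing one array port:
print(f"{array_port_load(6, 1.0, 0.3):.1f} Gbit/s per array port")  # 1.8

# If half the traffic of a 16-port 1G edge switch crosses to a peer
# switch (an assumed worst case, not a rule of thumb):
cross = 16 * 1.0 * 0.3 * 0.5          # 2.4 Gbit/s of inter-switch traffic
print(isls_needed(cross, isl_gbps=2.0))  # 2 ISLs at 2G
```

At these assumed utilizations a single array port comfortably serves six servers, and a couple of 2G ISLs cover an entire 1G edge switch -- which is why the small-switch designs above network together so readily.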
In addition, there is a limit to how large most people are comfortable making a single fabric. Without some way of splitting a SAN into separate subnets for manageability, big SANs can be challenging: they can be difficult to manage with the current state of management tools, they can raise scalability concerns, and so on. Is the limit 500 ports? 100 ports? It's hard to say.

My favorite point, though, is cabling. Unless you are lucky and have lots of structured optical cabling throughout your data center, having one big switch in the middle of the room can be a cabling nightmare. With a number of switches in different locations around the data center, we can consolidate the cabling, reduce cable complexity and have a more usable physical environment. And while we talk about heterogeneous SANs, there can be advantages to some level of homogeneous design, such as having all the Microsoft servers connected to one switch and all the UNIX servers to another.

In my next tip, I will discuss how to build a network using different SAN design topologies.

SAN topologies, part 2: How to design your SAN

Part 2 discusses how to design your network using different SAN topologies. So, you've determined how big a switch you need. Now to decide what topology to use when designing your SAN.

Mesh

If you've chosen just small switches, the full mesh is the simplest topology to use, understand and manage. In this design, every switch is connected to every other switch. In reality, while a network provides any-to-any connectivity, not every device in a network needs good connectivity to every other device. However, building a full mesh with good any-to-any connectivity does mean that you can pretty much ignore locality: you can plug any device in anywhere, knowing there is enough bandwidth. The downside is that a full mesh is only really practical for up to four or five switches; beyond that, ports are consumed by interswitch links rather than user ports. Using 16-port switches you could build a 17-switch full mesh that would look very impressive and have no free user ports at all. In practice, then, a full mesh tops out at around 40 to 50 user ports, which is fine, because 32- and 64-port switches are available if you need to go larger.

Core-edge

If you want or need to build a larger SAN with small switches, the most common design is core-edge. Typically, you would start with two core switches and connect edge switches to both core switches. Depending on your bandwidth requirements, using 16-port switches you can hang 16 edge switches off the core, creating roughly 200 usable ports. The main design issue with this approach is that the core is used purely to interconnect the edge. This means the servers and storage are all connected to the edge, and, typically, we start to consider localization. In a design like this, if we had a large disk array with, say, 16 ports, eight of those ports would be connected to this fabric while the other eight would be connected to a separate fabric. This means one port is connected to each of eight switches. So when allocating storage, we look at which disk-array ports have spare bandwidth and I/O capacity, as well as which port is connected to the switch holding the server that needs the space. Similarly, if we are using smaller arrays with only a few ports each, we hope the disk array on the same switch as the server needing storage has bandwidth, I/O capacity and spare space. I do not think we should focus too much on localization, though. In reality, this sort of core-edge design has, at worst, two ISLs and three switches between server and storage, and fairly good end-to-end bandwidth.

However, if we build a core-edge SAN using large switches at the core and small switches at the edge, we get two big advantages over a core-edge design using small switches. First, we can easily build the SAN out to 500, 1,000 or more ports. Second, if we put servers at the edge and storage at the core, the environment becomes very easy to manage and understand. No matter how we allocate storage to the servers, the traffic always goes through exactly two switches and one ISL. Assuming a sensible number of ISLs from each switch (easy to do with 2G), we have ample bandwidth. Allocating storage is therefore simple, and localization is not a consideration.
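The port arithmetic behind the mesh and core-edge numbers above is easy to sanity-check. Here is a minimal Python sketch under the same assumptions (16-port switches, edge switches dual-homed to two cores, one ISL per core); the function names and defaults are mine, not the article's.

```python
def full_mesh_user_ports(switches: int, ports_per_switch: int) -> int:
    """User ports left when every switch has one ISL to every peer."""
    isl_ports = switches - 1
    return switches * (ports_per_switch - isl_ports)

def core_edge_user_ports(edges: int, ports_per_switch: int,
                         cores: int = 2, isls_per_core: int = 1) -> int:
    """Edge user ports when each edge switch uplinks to every core."""
    return edges * (ports_per_switch - cores * isls_per_core)

print(full_mesh_user_ports(4, 16))    # 52 -- near the practical sweet spot
print(full_mesh_user_ports(17, 16))   # 0  -- every port burned on ISLs
print(core_edge_user_ports(16, 16))   # 224 -- adding extra ISLs per core
                                      # brings this toward the ~200 quoted
```

Note that 16 dual-homed edges consume exactly the 16 ports on each core, so this configuration is also the largest that fits two 16-port cores.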
Cable considerations

Looking at cable consolidation, as I discussed in Part 1, most data centers have racks of servers. A 42U rack might hold some 20 2U Windows NT servers, each with two Fibre Channel HBAs, or it might hold just one or two high-end servers, each with five or 10 HBAs. Either way, an edge switch in the server rack consolidates the cabling back to the core and the storage. But why, I hear you ask, do we not just build a SAN from multiple large switches only? For one thing -- cabling. Most data centers have server racks and storage racks. This means that a core-edge design, using small switches for the servers connecting back to large switches, simply makes cabling easier in your average data center -- unless, of course, you already have massive amounts of structured optical cabling in place.

Cost considerations

Another consideration is cost. Small switches cost less per port, so a core-edge design may well reduce the average cost per user port (the sketch after the summary below puts rough numbers on this). Depending on your environment, this may be more or less critical. In a Wintel environment, the cost of a Fibre Channel port as a proportion of the cost of the server may be quite high, whereas in a Unix environment this may be less of an issue. You could also reuse small switches purchased over the last few years. Even if you are starting to deploy SANs now, you may want to start with smaller switches to dip your toe in the water. That said, there are still some cases where I see SANs constructed using only large switches.

Summary

There is no one topology or approach that applies to everyone. Always keep in mind that this is a network; it will grow over time. So when choosing a small or large switch, and a topology, think about the long-term implications. In any environment, you already have a lot of servers and storage, you probably know what types of servers you will typically purchase in the future, and you can probably make a good guess as to which systems would make sense to incorporate in the SAN. Knowing these parts, you can fairly easily build out your SAN.
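As promised in the cost considerations above, here is a back-of-envelope sketch of the cost-per-user-port comparison. The prices and port counts are invented placeholders; substitute real quotes before drawing any conclusion.

```python
SMALL_PORTS, SMALL_PRICE = 16, 8_000      # hypothetical small edge switch
LARGE_PORTS, LARGE_PRICE = 128, 120_000   # hypothetical director-class switch

def core_edge_cost_per_port(edge_count: int, cores: int = 2) -> float:
    """Dollars per user port: cheap edges dual-homed to director cores."""
    user_ports = edge_count * (SMALL_PORTS - cores)   # 2 ports lost to ISLs
    total = edge_count * SMALL_PRICE + cores * LARGE_PRICE
    return total / user_ports

def directors_only_cost_per_port(count: int) -> float:
    """Dollars per user port using directors alone (ISLs ignored)."""
    return (count * LARGE_PRICE) / (count * LARGE_PORTS)

# Build roughly 900 user ports either way; 64 dual-homed edges use
# exactly 64 of the 128 ports on each of the two cores.
print(f"core-edge, 64 edges: ${core_edge_cost_per_port(64):.0f}/port")     # ~839
print(f"directors only, 7:   ${directors_only_cost_per_port(7):.0f}/port") # ~938
```

With these placeholder prices the core-edge design wins only once the cheap edges outnumber the core ports by a wide margin; at small scale the fixed cost of the cores dominates, which matches the article's advice to weigh this per environment.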


zoning

In a storage area network (SAN), zoning is the allocation of resources for device load balancing and for selectively allowing access to data only to certain users. Essentially, zoning allows an administrator to control who can see what in a SAN.

Zoning is done using a structure similar to that of a computer file system. A zone is the equivalent of a folder or directory. Zoning can be either hard or soft. In hard zoning, each device is assigned to a particular zone, and this assignment does not change. In soft zoning, device assignments can be changed by the network administrator to accommodate variations in the demands on different servers in the network. The use of zoning is said to minimize the risk of data corruption, help secure data against hackers, slow the spread of viruses and worms, and minimize the time necessary for servers to reboot. However, zoning can complicate the scaling process if the number of users and servers in a SAN increases significantly in a short period of time.

Zoning 101: Why zone?

Zoning can offer a number of benefits for your SAN. Read about the advantages of zoning, as well as an explanation of the different methods. (This tip is part of our Storage 101 tip series.)

Zoning is one of the most common tools for managing and securing a SAN. It provides an easy method to limit which groups of users can connect with which storage volumes, as well as matching operating systems (OS) with their storage. Depending on how it is done, zoning can offer a number of benefits:

Security. Zoning keeps users from accessing information they don't need.

Manageability. By splitting the SAN up into chunks, zoning makes it easier to keep track of devices, storage and users.

Separation by purpose. Setting up zones to reflect operational categories, such as engineering or human resources, organizes storage logically. It also makes it easy to establish specialized networks for testing or other purposes.

Separation by operating system. Putting different OSs in different zones reduces the possibility of data corruption.

Allowing temporary access. Administrators can remove the zone restrictions temporarily to allow tasks such as nightly backup.

The key phrase in all this is "depending on how it is done." There are several different methods of zoning, and not all of them can effectively do all those jobs. The tradeoff is usually between security and everything else.

The two most common methods of zoning are name server, or "soft," zoning and port, or "hard," zoning. Name server zoning partitions zones based on the World Wide Name (WWN) of devices on the SAN. It is the easiest to set up and the most flexible, but it is the least secure. Port zoning allows devices attached to particular ports on the switch to communicate only with devices attached to other ports in the same zone. The SAN switch keeps a table indicating which ports are allowed to communicate with each other.

The easy way to think of the difference is to picture soft zoning as a telephone directory and hard zoning as call blocking. Soft zoning won't tell you the port number for any device outside your zone, but it won't prevent you from sending packets to any port on the SAN. Hard zoning won't let you communicate with any port not on the "approved" list. Hard zoning is more secure, but it creates a number of problems because it limits the flow of data to connections between specific ports on the fabric.
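The telephone-directory versus call-blocking analogy can be captured in a toy model. This is a minimal Python sketch with invented WWNs and port numbers; real fabrics enforce these rules in the name server and switch hardware, not in application code.

```python
name_server = {                      # what the fabric knows: WWN -> port
    "10:00:00:00:c9:aa:00:01": 1,    # host HBA (invented WWN)
    "10:00:00:00:c9:aa:00:02": 5,    # storage port (invented WWN)
    "10:00:00:00:c9:aa:00:03": 9,    # a device outside the zone
}

soft_zone = {"10:00:00:00:c9:aa:00:01", "10:00:00:00:c9:aa:00:02"}
hard_zone = {(1, 5)}                 # port pairs allowed to exchange frames

def soft_lookup(requester: str, target: str) -> int | None:
    """Soft zoning: the name server only *lists* zone members.
    Nothing stops a frame sent to a port learned some other way."""
    if requester in soft_zone and target in soft_zone:
        return name_server[target]
    return None                      # target is hidden, but not blocked

def hard_allowed(src: int, dst: int) -> bool:
    """Hard zoning: the switch drops frames between unlisted ports."""
    return (src, dst) in hard_zone or (dst, src) in hard_zone

print(soft_lookup("10:00:00:00:c9:aa:00:01", "10:00:00:00:c9:aa:00:02"))  # 5
print(soft_lookup("10:00:00:00:c9:aa:00:01", "10:00:00:00:c9:aa:00:03"))  # None
print(hard_allowed(1, 5))   # True
print(hard_allowed(1, 9))   # False -- blocked even if port 9 is known
```

The key difference is visible in the last two lookups: soft zoning merely withholds directory information, while hard zoning refuses the connection outright.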
The type of zoning that will work best depends on the characteristics of your SAN. For example, if you expect to be switching cables frequently for load balancing or troubleshooting, soft zoning is more convenient because such switching won't disrupt the SAN. If security is paramount, you probably want hard zoning.

Fibre Channel SAN zoning: Pros and cons of WWN zoning and port zoning

By Fibre Channel SAN zoning, we mean partitioning a Fibre Channel fabric into groups to add security and improve management. Fibre Channel SANs can be zoned according to the World Wide Name (WWN) of each device, or according to switch ports.

WWN zoning groups a number of WWNs in a storage-area network zone and allows them to communicate with each other. The switch port each device is connected to is irrelevant when WWN zoning is configured. An advantage of this type of zoning is that if a port is suspected of being faulty, another port can be used without the need for fabric reconfiguration. A disadvantage of WWN zoning is that if there's a host bus adapter (HBA) failure, the fabric will need to be reconfigured for the host to reconnect to its storage. WWN zoning is sometimes referred to as soft zoning.

Port zoning groups particular switch ports together so that any devices connected to those ports can communicate with each other. The advantages and disadvantages of port zoning are the opposite of those for WWN zoning. I don't believe either type of zoning is superior to the other, so the type of zoning used is often determined by what a particular consultant or organisation has used in the past.

Hard zoning vs. soft zoning

Hard and soft zoning are used for controlling access and implementing a form of security, so that all devices in a Fibre Channel switched (FC-SW) SAN fabric do not have to see each other. Hard and soft zoning are further differentiated by name and port zoning. Soft zoning controls which devices are accessible to each other by using the Fibre Channel name service, whereas hard zoning restricts communication across the fabric itself.

Zoning can be implemented either by switch port, blocking access to unauthorized ports, or by name, with a name-based zone restricting access by World Wide Name (WWN). Use WWN zones if you need to move devices, for example tape drives, to other locations in the fabric. For stronger security, port zones can be used if you can trust what is attached to each port; they also avoid the risk of WWN spoofing. Refer to vendor (Brocade, QLogic, Cisco, McDATA) documentation for their recommendations on zoning, their specific implementations and interoperability.
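The trade-off just described comes down to which identifier survives which failure. A small sketch, using invented WWNs and (domain, port) pairs: an HBA replacement changes the WWN but not the port, while recabling a device changes the port but not the WWN.

```python
# Zone membership under the two schemes; all identifiers are invented.
wwn_zone = {"10:00:00:00:c9:aa:00:01",   # host HBA
            "10:00:00:00:c9:aa:00:02"}   # storage port
port_zone = {(2, 1), (2, 7)}             # (switch domain, port) members

def wwn_zone_ok(device_wwn: str) -> bool:
    """WWN (soft) zoning: membership follows the device's WWN."""
    return device_wwn in wwn_zone

def port_zone_ok(attach_point: tuple[int, int]) -> bool:
    """Port (hard) zoning: membership follows the attachment point."""
    return attach_point in port_zone

# Failed HBA replaced: same port, new WWN.
print(wwn_zone_ok("10:00:00:00:c9:aa:00:99"))  # False -> reconfigure zone
print(port_zone_ok((2, 1)))                     # True  -> no change needed

# Device recabled to a spare port: same WWN, new port.
print(wwn_zone_ok("10:00:00:00:c9:aa:00:01"))  # True  -> no change needed
print(port_zone_ok((2, 9)))                     # False -> reconfigure zone
```

Each scheme requires a zone change in exactly the scenario the other tolerates, which is why the article concludes that neither is simply superior.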