Sunday, July 31, 2011


The SAN components interact as follows:

1. When a host wants to access a storage device on the SAN, it sends out a
block‐based access request for the storage device.

2. The request is accepted by the HBA for that host. The SCSI commands are encapsulated into FC frames, and the request is converted from its binary (electrical) form to the optical form required for transmission on the fiber-optic cable.

3. At the same time, the request is packaged according to the rules of the FC protocol.

4. The HBA transmits the request to the SAN.

5. Depending on which port the HBA uses to connect to the fabric, one of the SAN switches receives the request and routes it to the storage processor, which sends it on to the storage device.
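
To make these steps concrete, the following minimal Python sketch models the flow: a SCSI command is wrapped in an FC frame by the HBA and routed by a switch to the storage processor. All class and method names here (ScsiCommand, FcFrame, Switch.route, and so on) are invented for illustration, not part of any real FC stack.

from dataclasses import dataclass

@dataclass
class ScsiCommand:               # the host's block-based access request (step 1)
    opcode: str                  # e.g. "READ(10)"
    lba: int                     # logical block address
    blocks: int

@dataclass
class FcFrame:                   # SCSI command encapsulated per FC rules (steps 2-3)
    source_wwpn: str             # HBA port transmitting the request (step 4)
    dest_wwpn: str               # storage port that should receive it
    payload: ScsiCommand

class StorageProcessor:
    def handle(self, cmd: ScsiCommand) -> str:
        return f"servicing {cmd.opcode} at LBA {cmd.lba} for {cmd.blocks} blocks"

class Switch:
    """Routes a frame to the right storage processor by destination port (step 5)."""
    def __init__(self, storage_ports: dict):
        self.storage_ports = storage_ports       # dest WWPN -> storage processor

    def route(self, frame: FcFrame) -> str:
        sp = self.storage_ports[frame.dest_wwpn]
        return sp.handle(frame.payload)          # the SP passes it on to the device

switch = Switch({"50:05:01:60:10:20:AD:87": StorageProcessor()})
request = ScsiCommand(opcode="READ(10)", lba=2048, blocks=16)
frame = FcFrame("21:00:00:E0:8B:19:AB:31", "50:05:01:60:10:20:AD:87", request)
print(switch.route(frame))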
The remaining sections of this white paper provide additional information about these components.



SAN Components

The components of an FC SAN can be grouped as follows and are discussed below:

1. Host Components
2. Fabric Components
3. Storage Components


Host Components

The host components of a SAN consist of the servers themselves and the components
that enable the servers to be physically connected to the SAN.
HBAs are located in the servers, along with a component that performs
digital‐to‐optical signal conversion. Each host connects to the fabric ports through
its HBAs.
HBA drivers running on the servers enable the servers’ operating systems to
communicate with the HBA.

Fabric Components

All hosts connect to the storage devices on the SAN through the SAN fabric. The
network portion of the SAN consists of the following fabric components:
• SAN Switches — SAN switches can connect to servers, storage devices, and other switches, and thus provide the connection points for the SAN fabric. The type of SAN switch, its design features, and its port capacity all contribute to its overall capacity, performance, and fault tolerance. The number of switches, types of switches, and manner in which the switches are interconnected define the fabric topology.

  - For smaller SANs, the standard SAN switches (called modular switches) can typically support 16 or 24 ports (though some 32-port modular switches are becoming available). Sometimes modular switches are interconnected to create a fault-tolerant fabric.

  - For larger SAN fabrics, director-class switches provide a larger port capacity (64 to 128 ports per switch) and built-in fault tolerance.

• Data Routers — Data routers are intelligent bridges between SCSI devices and FC devices in the SAN. Servers in the SAN can access SCSI disk or tape devices in the SAN through the data routers in the fabric layer.

• Cables — SAN cables are usually special fiber-optic cables that are used to connect all of the fabric components. The type of SAN cable and the fiber-optic signal determine the maximum distances between SAN components and contribute to the total bandwidth rating of the SAN.

• Communications Protocol — Fabric components communicate using the FC communications protocol. FC is the storage interface protocol used for most of today's SANs. FC was developed as a protocol for transferring data between two ports on a serial I/O bus cable at high speeds. FC supports point-to-point, arbitrated loop, and switched fabric topologies. Switched fabric topology is the basis for most current SANs.
Storage Components

The storage components of a SAN are the storage arrays. Storage arrays include storage processors (SPs). The SPs are the front end of the storage array. SPs communicate with the disk array (which includes all the disks in the storage array) and provide the RAID/LUN functionality.

Storage Processors

SPs provide front-side host attachments to the storage devices from the servers, either directly or through a switch. The server HBAs must conform to the protocol supported by the storage processor. In most cases, this is the FC protocol.



Storage Devices
  
Data is stored on disk arrays or tape devices (or both).
Disk arrays are groups of multiple disk devices and are the typical SAN disk storage
device. They can vary greatly in design, capacity, performance, and other features.
Storage arrays rarely provide hosts direct access to individual drives. Instead, the
storage array uses RAID (Redundant Array of Independent Drives) technology to
group a set of drives. RAID uses independent drives to provide capacity, performance,
and redundancy. Using specialized algorithms, several drives are grouped to provide
common pooled storage. These RAID algorithms, commonly known as RAID levels,
define the characteristics of the particular grouping.
In simple systems that provide RAID capability, a RAID group is equivalent to a single LUN. A LUN is a single unit of storage. Depending on the host system environment, a LUN is also known as a volume or a logical drive. From a VI Client, a LUN looks like any other storage unit available for access.
In advanced storage arrays, RAID groups can have one or more LUNs created for
access by one or more servers. The ability to create more than one LUN from a single
RAID group provides fine granularity to the storage creation process. You are not
limited to the total capacity of the entire RAID group for a single LUN.
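
The following Python sketch illustrates that granularity: several LUNs are carved out of one RAID group's usable capacity. The RaidGroup class and its simplified usable-capacity formulas are assumptions made for this example, not any vendor's actual provisioning logic.

class RaidGroup:
    """Simplified model: pool a set of drives, then create LUNs from the usable space."""
    def __init__(self, level: str, drives: int, drive_gb: int):
        self.level, self.drives, self.drive_gb = level, drives, drive_gb
        self.luns = {}                          # LUN name -> size in GB

    @property
    def usable_gb(self) -> int:
        if self.level == "RAID5":               # one drive's worth of parity
            return (self.drives - 1) * self.drive_gb
        if self.level == "RAID1":               # mirrored pairs
            return self.drives // 2 * self.drive_gb
        return self.drives * self.drive_gb      # RAID0: capacity only, no redundancy

    @property
    def free_gb(self) -> int:
        return self.usable_gb - sum(self.luns.values())

    def create_lun(self, name: str, size_gb: int):
        if size_gb > self.free_gb:
            raise ValueError("not enough free capacity in the RAID group")
        self.luns[name] = size_gb

# Five 300 GB drives in RAID 5 give 1200 GB usable, split here into two LUNs.
rg = RaidGroup("RAID5", drives=5, drive_gb=300)
rg.create_lun("LUN0", 400)
rg.create_lun("LUN1", 400)
print(rg.free_gb)    # 400 GB still available for further LUNs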
Most storage arrays provide additional data protection and replication features such as
snapshots, internal copies, and remote mirroring.
  •   A snapshot is a point‐in‐time copy of a LUN. Snapshots are used as backup sources for the overall backup procedures defined for the storage array.

  •   Internal copies allow data movement from one LUN to another for an additional copy for testing.

  •  Remote mirroring provides constant synchronization between LUNs on one storage array and a second, independent (usually remote) storage array for disaster recovery.
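
Snapshot internals vary by vendor, but copy-on-write is one common approach and gives a feel for how a point-in-time copy can share unchanged blocks with the live LUN. The toy Python sketch below is a conceptual illustration only, not any array's actual firmware behavior.

class Lun:
    def __init__(self, blocks: dict):
        self.blocks = blocks                      # block number -> data

class Snapshot:
    """Copy-on-write point-in-time view of a LUN (one common approach)."""
    def __init__(self, source: Lun):
        self.source = source
        self.preserved = {}                       # old data saved before overwrite

    def read(self, block):
        # Prefer the preserved copy; fall back to the live LUN if unchanged.
        return self.preserved.get(block, self.source.blocks[block])

def write(lun: Lun, snapshots, block, data):
    for snap in snapshots:                        # preserve the old data first
        snap.preserved.setdefault(block, lun.blocks[block])
    lun.blocks[block] = data

lun = Lun({0: "a", 1: "b"})
snap = Snapshot(lun)
write(lun, [snap], 0, "A")
print(lun.blocks[0], snap.read(0))   # "A" on the live LUN, "a" in the snapshot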


Tape Storage Devices
  
Tape storage devices are part of the SAN backup capabilities and processes.

  • Smaller SANs might use high‐capacity tape drives. These tape drives vary in their transfer rates and storage capacities.
  • A high-capacity tape drive might exist as a standalone drive, or it might be part of a tape library.

  • Typically, a large SAN, or a SAN with critical backup requirements, is configured with one or more tape libraries. A tape library consolidates one or more tape drives into a single enclosure. Tapes can be inserted into and removed from the tape drives in the library automatically with a robotic arm. Many tape libraries offer very large storage capacities.



SAN Ports and Port Naming

A port is the connection from a device into the SAN. Each node in the SAN (each host, storage device, and fabric component such as a router or switch) has one or more ports that connect it to the SAN.

 Ports can be identified in a number of ways:

1. WWPN — World Wide Port Name. A globally unique identifier for a port, which allows certain applications to access the port. The FC switches discover the WWPN of a device or host and assign a port address to the device.

2. Port_ID (or port address) — Within the SAN, each port has a unique port ID that
serves as the FC address for the port. This enables routing of data through the SAN
to that port. 
The FC switches assign the port ID when the device logs into the fabric. The port ID is valid only while the device is logged on.
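
The sketch below imitates that behavior at toy scale: a WWPN is permanent, while the port ID is handed out at fabric login and discarded at logout. The Fabric class and its address scheme are simplified placeholders invented for this example; real FC port IDs encode domain, area, and port fields.

class Fabric:
    """Toy fabric login: WWPNs are permanent, port IDs last one session."""
    def __init__(self):
        self.logged_in = {}        # WWPN -> currently assigned port ID
        self._next_id = 0x010000   # placeholder; real IDs encode domain/area/port

    def login(self, wwpn: str) -> int:
        port_id = self._next_id
        self._next_id += 1
        self.logged_in[wwpn] = port_id
        return port_id

    def logout(self, wwpn: str):
        del self.logged_in[wwpn]   # the port ID is no longer valid

fabric = Fabric()
pid = fabric.login("21:00:00:E0:8B:19:AB:31")
print(f"port ID {pid:06x} assigned for this session")
fabric.logout("21:00:00:E0:8B:19:AB:31")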



In-depth information on SAN ports can be found at http://www.snia.org, the Web site of the Storage Networking Industry Association.

Multipathing and Path Failover

An FC path describes a route from a specific HBA port in the host, through the switches in the fabric, and into a specific storage port on the storage array.

A given host might be able to access a LUN on a storage array through more than one
path. Having more than one path from a host to a LUN is called multipathing.

By default, VMware ESX Server systems use only one path from the host to a given LUN at any time. If the path actively being used by the VMware ESX Server system fails, the server selects another of the available paths. The process of detecting a failed path and switching to another is called path failover. A path fails if any of the components along it (HBA, cable, switch port, or storage processor) fails.
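
A minimal Python sketch of this single-active-path-with-failover behavior might look like the following. The Path and select_path names are invented for illustration; real multipathing code (including ESX Server's) tracks far more state than this.

from dataclasses import dataclass

@dataclass
class Path:
    hba_port: str
    switch: str
    storage_port: str
    healthy: bool = True

def select_path(paths, current=None):
    """Keep using the current path; fail over only when it goes down."""
    if current is not None and current.healthy:
        return current
    for path in paths:                  # pick the first surviving alternative
        if path.healthy:
            return path
    raise RuntimeError("all paths to the LUN have failed")

paths = [Path("hba0", "switchA", "sp0"), Path("hba1", "switchB", "sp1")]
active = select_path(paths)
active.healthy = False                  # a component along the path fails
active = select_path(paths, active)     # failover selects a surviving path
print(active.hba_port)                  # hba1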

Active/Active and Active/Passive Disk Arrays

It is useful to distinguish between active/active and active/passive disk arrays:

• An active/active disk array allows access to the LUNs simultaneously through all of its available storage processors, without significant performance degradation. All the paths are active at all times (unless a path fails).

• In an active/passive disk array, one SP actively services a given LUN. The other SP acts as backup for that LUN and may be actively servicing other LUN I/O. I/O can be sent only to an active processor. If the primary storage processor fails, one of the secondary storage processors becomes active, either automatically or through administrator intervention.
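
The distinction can be sketched as a rule about which SP will accept I/O for a LUN. The ActivePassiveArray class below is a toy model invented for this example; in an active/active array, send_io would simply succeed on either SP.

class ActivePassiveArray:
    """Toy model: only one SP services a given LUN; the other is standby."""
    def __init__(self):
        self.active_sp = "SP-A"
        self.passive_sp = "SP-B"

    def send_io(self, sp: str, lun: int) -> str:
        if sp != self.active_sp:
            raise IOError(f"{sp} is passive for LUN {lun}; use {self.active_sp}")
        return f"{sp} handled I/O for LUN {lun}"

    def fail_over(self):
        # The secondary SP becomes active (automatically or by an administrator).
        self.active_sp, self.passive_sp = self.passive_sp, self.active_sp

array = ActivePassiveArray()
print(array.send_io("SP-A", 11))   # succeeds on the active SP
array.fail_over()                  # SP-A fails; SP-B takes over the LUN
print(array.send_io("SP-B", 11))   # now succeeds on SP-B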

Zoning

Zoning provides access control in the SAN topology; it defines which HBAs can connect to which SPs. 
You can have multiple ports to the same SP in different zones to reduce the number of presented paths.
When a SAN is configured using zoning, the devices outside a zone are not visible to the devices inside the zone. 
In addition, SAN traffic within each zone is isolated from
the other zones.
Within a complex SAN environment, SAN switches provide zoning.
 Zoning defines and configures the necessary security and access rights for the entire SAN.
Typically, zones are created for each group of servers that access a shared group of storage devices and LUNs. You can use zoning in several ways. Here are some examples:

1. Zoning for security and isolation. You can manage zones defined for testing independently within the SAN so they don't interfere with the activity going on in the production zones. Similarly, you could set up different zones for different departments.

2. Zoning for shared services. Another use of zones is to allow common server access for backups. SAN designs often have a backup server with tape services that require SAN-wide access to host servers individually for backup and recovery processes. These backup servers need to be able to access the servers they back up. A SAN zone might be defined for the backup server to access a particular host to perform a backup or recovery process. The zone is then redefined for access to another host when the backup server is ready to perform backup or recovery processes on that host.

3. Multiple storage arrays. Zones are also useful when there are multiple storage arrays. Through the use of separate zones, each storage array is managed separately from the others, with no concern for access conflicts between servers.
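
Conceptually, zoning reduces to a membership check the switch applies before two ports may communicate. The ZoningSwitch class below is an invented Python illustration; real switches enforce zoning in hardware using WWN-based or port-based zone definitions.

class ZoningSwitch:
    """Conceptual zoning check: ports may talk only within a shared zone."""
    def __init__(self, zones: dict):
        self.zones = zones                        # zone name -> member port names

    def can_communicate(self, port_a: str, port_b: str) -> bool:
        return any(port_a in members and port_b in members
                   for members in self.zones.values())

switch = ZoningSwitch({
    "production": {"prod_host_hba", "array_sp0"},
    "test":       {"test_host_hba", "array_sp1"},
})
print(switch.can_communicate("prod_host_hba", "array_sp0"))  # True: same zone
print(switch.can_communicate("test_host_hba", "array_sp0"))  # False: isolated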

LUN Masking

LUN masking is commonly used for permission management. LUN masking is also referred to as selective storage presentation, access control, and partitioning, depending on the vendor.
LUN masking is performed at the SP or server level; it makes a LUN invisible when a target is scanned.
The administrator configures the disk array so that each server or group of servers can see only certain LUNs. Masking capabilities for each disk array are vendor specific, as are the tools for managing LUN masking.
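
Conceptually, LUN masking is a per-initiator filter applied when a target is scanned, as in the following Python sketch. The class, the mask table, and the LUN numbers and WWPNs are invented for illustration; real masking is configured with vendor-specific tools.

class MaskingStorageProcessor:
    """Report only the LUNs an initiator is allowed to see during a scan."""
    def __init__(self, luns):
        self.luns = set(luns)
        self.masks = {}                        # initiator WWPN -> hidden LUNs

    def mask(self, initiator: str, lun: int):
        self.masks.setdefault(initiator, set()).add(lun)

    def scan(self, initiator: str):
        # Masked LUNs are simply absent from this initiator's scan results.
        return sorted(self.luns - self.masks.get(initiator, set()))

sp = MaskingStorageProcessor([0, 11, 12])
sp.mask("21:00:00:E0:8B:19:AB:31", 11)       # this host never sees LUN 11
sp.mask("21:00:00:E0:8B:19:B2:33", 12)       # this host never sees LUN 12
print(sp.scan("21:00:00:E0:8B:19:AB:31"))    # [0, 12]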

SAN Installation Considerations

Installing a SAN requires careful attention to details and an overall plan that addresses all the hardware, software, storage, and applications issues and their interactions as all
the pieces are integrated.

Requirements

To integrate all components of the SAN, you must meet the vendor's hardware and software compatibility requirements, including the following:

• HBA (firmware version, driver version, and patch list)
• Switch (firmware)
• Storage (firmware, host personality firmware, and patch list)


[Figure: Zoning and LUN masking. Zoning is done at the switch level and is used to segment the fabric; LUN masking is done at the SP or server level and makes a LUN invisible when a target is scanned. In the example, an FC switch connects two host HBAs to a storage array's SP, with LUN 11 masked from one host and LUN 12 masked from the other. WWN (worldwide name): a unique 64-bit address assigned to a Fibre Channel node.]

SAN Setup

When you are ready to set up the SAN, complete these tasks.

To prepare the SAN

1. Assemble and cable together all hardware components and install the corresponding software.
   a. Check the versions.
   b. Set up the HBA.
   c. Set up the storage array.

2. Change any configuration settings that might be required.

3. Test the integration. During integration testing, test all the operational processes for the SAN environment. These include normal production processing, failure mode testing, backup functions, and so forth.

4. Establish a baseline of performance for each component and for the entire SAN. Each baseline provides a measurement metric for future changes and tuning. See the ESX Server SAN Configuration Guide for additional information.

5. Document the SAN installation and all operational procedures.

SAN Design Basics
When designing a SAN for multiple applications and servers, you must balance the performance, reliability, and capacity attributes of the SAN. Each application demands resources and access to storage provided by the SAN. The SAN's switches and storage must provide timely and reliable access for all competing applications. This section discusses some SAN design basics. It does not focus on SAN design for ESX Server hosts.

Saturday, July 30, 2011

What Is a Storage Server?

Ask people what a storage server is, and you can expect to hear a variety of answers. Some will say it is a regular server with added features, a few describe it as a stripped-down box dedicated to a specialized function, and still others believe the term refers only to a network attached storage (NAS) box. This article will attempt to define a storage server, differentiate it from a regular server, and give examples of storage servers on the market today.

Not Your Average Server

The typical server is configured to perform multiple functions. It operates as a file, print, application, Web, or miscellaneous server. As such, it must have fast processors, plenty of RAM, and plenty of internal disk space to cope with whatever end users decide to do with it. Not so with a storage server. It is designed for a specific purpose, and thus configured differently. It may come with a little extra storage or a great deal more. A general-purpose server typically has five or fewer disks inside. A storage server, on the other hand, has at least six, and usually more, typically 12 to 24 disks.
Storage servers are normally individual units. Sometimes they are built into a 4U rackmount. Alternatively, they can consist of two boxes: a storage unit and a server located nearby. Both boxes can then be placed side by side in a rack. The Sun StorEdge 3120 storage unit and SunFire X4100 server, for example, can be combined into a storage server and placed in a rack.
Apart from extra disks, what else is different about storage servers? In many cases, they come with a host of specialized services. These can include storage management software, extra hardware for higher resilience, a range of RAID configurations, and extra network connections to enable more user desktops to be connected to it.