LUN (Logical Unit Number) Masking is an authorization process that makes a LUN available to some hosts and unavailable to other hosts.
LUN Masking is implemented primarily at the HBA (Host Bus Adapter) level. LUN Masking implemented at this level is vulnerable to any attack that compromises the HBA. Some storage controllers also support LUN Masking.
LUN Masking is important because Windows-based servers attempt to write volume labels to all available LUNs. This can render the LUNs unusable by other operating systems and can result in data loss.
Device masking lets you control host HBA access to specific storage array devices. A device masking database, stored on the storage array, eliminates conflicts through centralized monitoring and access records. Both HBA and storage array director ports in a Fibre Channel topology are uniquely identified by a 64-bit World Wide Name (WWN). For ease of use, you can associate an ASCII World Wide Name (AWWN) with each WWN.
For an Emulex HBA on a Solaris host, persistent binding is set up as follows:
Using option 5 (manual persistent binding) in the Emulex lputil utility writes the bindings to the /kernel/drv/lpfc.conf file.
A WWPN binding entry in lpfc.conf takes a form like the following (the WWPN shown is a placeholder):
fcp-bind-WWPN="50060482cafd7e85:lpfc0t1";
The matching entries in /kernel/drv/sd.conf look like:
name="sd" parent="lpfc" target=1 lun=0;
name="sd" parent="lpfc" target=2 lun=0;
Then perform a reconfiguration reboot:
# touch /reconfigure
# shutdown -y -g0 -i6
A Logical Unit Number (LUN) is a logical reference to an entire physical disk, a subset of a larger physical disk or disk volume, or a portion of a storage subsystem.
WWN zoning uses name servers in the switches to either allow or block access to particular World Wide Names (WWNs) in the fabric. A major advantage of WWN zoning is the ability to recable the fabric without having to redo the zone information. WWN zoning is susceptible to unauthorized access, as the zone can be bypassed if an attacker is able to spoof the World Wide Name of an authorized HBA.
Port zoning utilizes physical ports to define security zones. A user’s access to data is determined by what physical port he or she is connected to. With port zoning, zone information must be updated every time a user changes switch ports. In addition, port zoning does not allow zones to overlap. Port zoning is normally implemented using hard zoning, but could also be implemented using soft zoning.
The device masking commands allow you to:
Assign and mask access privileges of hosts and adapters connected in a Fibre Channel topology to storage arrays and devices.
Specify the host bus adapters (HBAs) through which a host can access storage array devices.
Display or list device masking objects and their relationships; typical objects are hosts, HBAs, storage array devices, and Fibre Adapter (FA) ports.
Modify properties, such as names and access privileges, associated with device masking objects (for example, change the name of a host).
Fibre cable is selected on the basis of transmission distance:
If the distance is short (multimode fibre is typically rated to a few hundred meters, depending on link speed), I will use multimode fibre cable.
If the distance is longer, I will use single-mode fibre cable.
Raw Capacity = Usable Capacity + Parity Capacity
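As a rough illustration of this formula, assuming a simple RAID 5 group where one drive's worth of capacity is consumed by parity (the 7+1 group size below is an assumed example, not from the original text):

```python
def raw_capacity_raid5(usable_tb: float, drives: int) -> float:
    """For a RAID 5 group of N drives (N-1 data + 1 parity),
    raw capacity = usable * N / (N - 1)."""
    return usable_tb * drives / (drives - 1)

# A 7+1 RAID 5 group delivering 14 TB usable requires 16 TB raw.
print(raw_capacity_raid5(14, 8))  # 16.0
```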
Required bandwidth is determined by measuring the average number of write operations and the average size of write operations over a period of time.
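A minimal sketch of that measurement turned into a number (the write rate and write size below are assumed example values):

```python
def required_bandwidth_mbps(avg_write_iops: float, avg_write_kb: float) -> float:
    """Required bandwidth = average write rate x average write size, in MB/s."""
    return avg_write_iops * avg_write_kb / 1024

# e.g. 2000 writes/s averaging 8 KB each
print(required_bandwidth_mbps(2000, 8))  # 15.625 MB/s
```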
SSD drives have no movable parts and therefore have no RPM.
To calculate IOPS per drive the formula I will use is:
1000 / (Seek Time + Latency) = IOPS
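Applying the formula above, with seek time and rotational latency in milliseconds (the drive figures below are assumed, typical of a 15k RPM disk, not taken from the original text):

```python
def iops_per_drive(seek_ms: float, latency_ms: float) -> float:
    """IOPS = 1000 / (average seek time + average rotational latency),
    with both times expressed in milliseconds."""
    return 1000 / (seek_ms + latency_ms)

# Assumed 15k RPM drive: ~3.5 ms average seek, ~2.0 ms rotational latency
print(round(iops_per_drive(3.5, 2.0)))  # 182
```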
GUID Partition Table, GPT is a part of the EFI standard that defines the layout of the partition table on a hard drive. GPT provides redundancy by writing the GPT header and partition table at the beginning of the disk and also at the end of the disk.
GPT uses 64-bit LBAs for storing sector numbers, so a GPT disk can theoretically address up to 2^64 LBAs. Assuming 512-byte sector emulation, the maximum capacity of a GPT disk is 2^64 x 512 bytes ≈ 9.4 x 10^21 bytes = 9.4 zettabytes (ZB).
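The 9.4 ZB figure can be checked directly from the two numbers given above:

```python
SECTOR_BYTES = 512          # assumed 512-byte sector emulation
LBAS = 2 ** 64              # GPT uses 64-bit LBAs

max_bytes = LBAS * SECTOR_BYTES
print(max_bytes)            # 9444732965739290427392
print(max_bytes / 1e21)     # ~9.44 (zettabytes)
```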
A SAN storage array has data integrity features built into it.
A storage array uses spare disk drives to take the place of disk drives that are blocked because of errors. Hot spares are available and can spare out predictively, before a drive fails outright.
There are two types of disk sparing:
Dynamic Sparing: Data is copied directly from the failing or blocked drive to the spare drive.
Correction Copy: Data is regenerated from the remaining good drives in the parity group. For RAID 6, RAID 5, and RAID 1, after a failed disk has been replaced, the data is copied back to its original location, and the spare disk is then available.
Design should address three separate levels:
Droop = bandwidth inefficiency.
Droop begins when: BB_Credit < RTT / SF
where RTT = round-trip time of the link
and SF = serialization delay for a data frame
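The droop condition can be turned into a rough estimate of the buffer credits a long link needs. This is an approximation under stated assumptions: light travels through fibre at about 5 microseconds per km one way, a full FC frame is about 2148 bytes, and the nominal line rate is used (8b/10b encoding overhead is ignored); the 50 km distance is an example value.

```python
def min_bb_credits(distance_km: float, frame_bytes: int = 2148, gbps: float = 8) -> float:
    """Approximate BB credits needed so that BB_Credit >= RTT / SF.
    Assumes ~5 us/km one-way propagation and the nominal line rate."""
    rtt_us = 2 * distance_km * 5.0                 # round-trip time in microseconds
    sf_us = frame_bytes * 8 / (gbps * 1000)        # frame serialization delay in microseconds
    return rtt_us / sf_us

# Assumed example: 50 km link at 8 Gbps with full-size frames
print(round(min_bb_credits(50)))  # 233
```

Shorter frames serialize faster, so links carrying small frames need proportionally more credits than this full-frame estimate suggests.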
Use the fan-out ratio, for example 10:1 (host ports to storage ports).
I will determine this ratio based on the server platform and performance requirements, by consulting storage vendors.
I have used Brocade SANs, which support these load-balancing (routing) policies: port-based routing, device-based routing, and exchange-based routing.