- What is a SAN switch? …
- Can you explain what a loop-free topology is and why it’s important? …
- What are some advantages of using Fibre Channel switches over Ethernet switches in the context of storage area networks? …
- What is an FC port?
Brocade SAN Switch Interview Questions Theory
What is a Fabric?
A fabric is a group of SAN switches in a storage area network that are connected to each other.
What is the Principal switch?
The principal switch, also called the core switch, assigns domain IDs to the switches that join the fabric. The principal switch is also responsible for syncing time among all switches in a Fabric.
[Hard Disk]
Explain the inner structure or layout of a Hard disk?
JBOD: Just a Bunch of Disks
IDE: Integrated Drive Electronics
ATA: Advanced Technology Attachment
SATA: Serial ATA
SCSI: Small Computer System Interface
SAS: Serial Attached SCSI
FC: Fibre Channel
iSCSI: SCSI over IP
[MAGNETIC TAPE]
[STORAGE OPERATING SYSTEMS]
[SCSI]
- Test unit ready:
- Inquiry:
- Request sense:
- Start/Stop unit:
- Read capacity:
- Log sense:
- Mode sense:
- Mode select:
[SAN Related Interview Questions]
Redundant Array of Independent Drives (or Disks), also known as Redundant Array of Inexpensive Drives (or Disks) (RAID) is an important term for data storage schemes that divide and/or replicate data among multiple hard drives. They offer, depending on the scheme, increased data reliability and/or throughput.
Put simply, RAID is a way of storing the same data in different places (thus, redundantly) on multiple hard disks.
* Higher Read/Write performance in some RAID levels
* Higher Data Security: Through the use of redundancy, most RAID levels provide protection for the data stored on the array. This means that the data on the array can withstand even the complete failure of one hard disk (or sometimes more) without any data loss, and without requiring any data to be restored from backup. This security feature is a key benefit of RAID and probably the aspect that drives the creation of more RAID arrays than any other. All RAID levels provide some degree of data protection, depending on the exact implementation, except RAID level 0.
* Fault Tolerance: RAID implementations that include redundancy provide a much more reliable overall storage subsystem than can be achieved by a single disk. This means there is a lower chance of the storage subsystem as a whole failing due to hardware failures. (At the same time though, the added hardware used in RAID means the chances of having a hardware problem of some sort with an individual component, even if it doesn’t take down the storage subsystem, is increased; see this full discussion of RAID reliability for more.)
* Improved Availability: Availability refers to access to data. Good RAID systems improve availability both by providing fault tolerance and by providing special features that allow for recovery from hardware faults without disruption. See the discussion of RAID reliability and also this discussion of advanced RAID features.
* Increased, Integrated Capacity: By turning a number of smaller drives into a larger array, you add their capacity together (though a percentage of total capacity is lost to overhead or redundancy in most implementations). This facilitates applications that require large amounts of contiguous disk space, and also makes disk space management simpler. Let’s suppose you need 300 GB of space for a large database. Unfortunately, no hard disk manufacturer makes a drive nearly that large. You could put five 72 GB drives into the system, but then you’d have to find some way to split the database into five pieces, and you’d be stuck with trying to remember what was where. Instead, you could set up a RAID 0 array containing those five 72 GB hard disks; this will appear to the operating system as a single, 360 GB hard disk! All RAID implementations provide this “combining” benefit, though the ones that include redundancy of course “waste” some of the space on that redundant information.
* Improved Performance: Last, but certainly not least, RAID systems improve performance by allowing the controller to exploit the capabilities of multiple hard disks to get around performance-limiting mechanical issues that plague individual hard disks. Different RAID implementations improve performance in different ways and to different degrees, but all improve it in some way. See this full discussion of RAID performance issues for more.
There are many levels, like
RAID 0, RAID 1, RAID 2, RAID 3, RAID 4, RAID 5, RAID 10, RAID 01, RAID 50, RAID 6.
But the popular ones are RAID 0, RAID 1, RAID 5, RAID 10, RAID 01, RAID 50 and RAID 6.
RAID 0:
The lowest designated level of RAID, level 0, is actually not a valid type of RAID. It was given the designation of level 0 because it fails to provide any level of redundancy for the data stored in the array. Thus, if one of the drives fails, all the data is lost.
RAID 0 uses a method called striping. Striping takes a single chunk of data, like a graphics file, and spreads that data across multiple drives. The advantage striping offers is improved performance: twice the amount of data can be written in a given time frame to two drives compared to that same data being written to a single drive.
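To make the idea concrete, here is a minimal Python sketch of striping: a buffer is split into fixed-size chunks that are dealt out round-robin across the member drives. The chunk size and drive count are illustrative assumptions, not values any particular controller mandates.

```python
# Minimal sketch of RAID 0 striping: split a buffer into fixed-size chunks
# and distribute them round-robin across the member "drives" (here just lists).
def stripe(data: bytes, num_drives: int = 2, chunk_size: int = 4) -> list:
    """Return one list of chunks per drive, written round-robin."""
    drives = [[] for _ in range(num_drives)]
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        drives[(i // chunk_size) % num_drives].append(chunk)
    return drives

if __name__ == "__main__":
    payload = b"a single chunk of data, e.g. a graphic"
    for n, drive in enumerate(stripe(payload)):
        print(f"drive {n}: {drive}")
```

Because both drives can be written to in parallel, the same payload is committed in roughly half the time a single drive would need, which is exactly the performance benefit described above.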
RAID 1:
RAID version 1 was the first real implementation of RAID. It provides a simple form of redundancy for data through a process called mirroring. This form typically requires two individual drives of similar capacity. One drive is the active drive and the secondary drive is the mirror. When data is written to the active drive, the same data is written to the mirror drive.
RAID 5:
This is the most powerful form of RAID that can be found in a desktop computer system. Typically it requires a hardware controller card to manage the array, but some desktop operating systems can create these arrays via software. This method uses a form of striping with parity to maintain data redundancy. A minimum of three drives is required to build a RAID 5 array, and they should be identical drives for the best performance.
RAID 0+1:
This is a hybrid form of RAID that some manufacturers have implemented to try to give the advantages of each of the two versions combined. Typically this can only be done on a system with a minimum of 4 hard drives. It then combines the methods of mirroring and striping to provide performance and redundancy. The first set of drives will be active and have the data striped across them, while the second set of drives will be a mirror of the data on the first two.
RAID 10:
RAID 10 is effectively a nested version similar to RAID 0+1. Rather than striping data between the disk sets and then mirroring them, the first two drives in the set are mirrored together. The second two drives form another mirrored pair, and data is striped across the two pairs. Drives 1 and 2 are a RAID 1 mirror, drives 3 and 4 are also a mirror, and these two sets are then set up as a striped array.
RAID 1: Minimum 2 drives are required. Gives only 50% of the raw disk space.
RAID 5: Minimum 3 drives are required. Gives (n-1) x single-disk capacity of usable space, where n is the number of disks (see the sketch below).
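As a rough illustration of these capacity rules, the following Python sketch computes usable space for the popular levels, assuming n identical disks of a given size; real arrays lose a little more to metadata, so treat the results as upper bounds.

```python
# Rough usable-capacity calculator for the RAID levels discussed above.
def usable_capacity_gb(level: str, n: int, disk_gb: float) -> float:
    if level == "RAID0":
        return n * disk_gb              # striping only, no redundancy
    if level in ("RAID1", "RAID10", "RAID01"):
        return n * disk_gb / 2          # mirroring: 50% of the raw space
    if level == "RAID5":
        return (n - 1) * disk_gb        # one disk's worth of space holds parity
    if level == "RAID6":
        return (n - 2) * disk_gb        # two disks' worth of space holds parity
    raise ValueError(f"unknown level {level}")

for level, n in [("RAID0", 4), ("RAID1", 2), ("RAID5", 3), ("RAID6", 4), ("RAID10", 4)]:
    print(level, usable_capacity_gb(level, n, 72), "GB usable from", n, "x 72 GB disks")
```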
RAID 3 and RAID 4: Striped set (3 disk minimum) with dedicated parity. The parity information is calculated across the data disks and written to a dedicated parity disk, allowing the data on any single failed disk to be reconstructed. This provides improved performance and fault tolerance similar to RAID 5, but with a dedicated parity disk rather than rotated parity stripes. The single parity disk is a bottleneck for writing, since every write requires updating the parity data. One minor benefit is that the dedicated parity disk allows the parity drive itself to fail, and operation will continue without parity or performance penalty.
RAID 5 does not have a dedicated parity drive but the parity is rotated across all the drives hence the parity is distributed.
RAID 5: Striped Set (3 disk minimum) with Distributed Parity: Distributed parity requires all but one drive to be present to operate; drive failure requires replacement, but the array is not destroyed by a single drive failure. Upon drive failure, any subsequent reads can be calculated from the distributed parity such that the drive failure is masked from the end user. The array will have data loss in the event of a second drive failure and is vulnerable until the data that was on the failed drive is rebuilt onto a replacement drive.
RAID 0+1: Striped Set + Mirrored Set (4 disk minimum; even number of disks) provides fault tolerance and improved performance but increases complexity. The array continues to operate with one failed drive. The key difference from RAID 1+0 is that RAID 0+1 creates a second striped set to mirror a primary striped set, and as a result can only sustain a maximum of a single disk loss, whereas 1+0 can sustain multiple drive losses as long as no two failed drives comprise a single mirrored pair (illustrated in the sketch below).
RAID 1+0: Mirrored Set + Striped Set (4 disk minimum; Even number of disks) provides fault tolerance and improved performance but increases complexity. Array continues to operate with one or more failed drives. The key difference from RAID 0+1 is that RAID 1+0 creates a striped set from a series of mirrored drives.
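The difference in failure tolerance between the two nested layouts can be shown with a small Python model of a 4-drive array; the drive labels d0 to d3 are purely illustrative.

```python
# RAID 1+0: two mirrored pairs, striped together. Data survives as long as
# no mirrored pair loses both of its members.
def raid10_survives(failed: set) -> bool:
    pairs = [{"d0", "d1"}, {"d2", "d3"}]
    return all(not pair <= failed for pair in pairs)

# RAID 0+1: two striped sets mirrored against each other. Data survives only
# while at least one striped set is completely intact.
def raid01_survives(failed: set) -> bool:
    stripe_sets = [{"d0", "d1"}, {"d2", "d3"}]
    return any(not (s & failed) for s in stripe_sets)

print(raid10_survives({"d0", "d2"}))   # True  - one drive lost from each mirror
print(raid01_survives({"d0", "d2"}))   # False - both striped sets are now incomplete
```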
R0: Minimum 2
R1: Minimum 2
R5: Minimum 3
R10: Minimum 4
R01: Minimum 4
The parity calculation is typically performed using a logical operation called “exclusive OR” or “XOR”. As you may know, the “OR” logical operator is “true” (1) if either of its operands is true, and false (0) if neither is true. The exclusive OR operator is “true” if and only if exactly one of its operands is true; it differs from “OR” in that if both operands are true, “XOR” is false.
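Here is a minimal Python sketch of that XOR parity idea as used by RAID 5: the parity block is the byte-wise XOR of the data blocks, and any single lost block can be rebuilt by XOR-ing the parity with the surviving blocks.

```python
# XOR parity: parity = block0 ^ block1 ^ block2 (byte-wise).
def xor_blocks(blocks: list) -> bytes:
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

data = [b"AAAA", b"BBBB", b"CCCC"]      # three equal-sized data blocks
parity = xor_blocks(data)               # what the parity stripe would hold

# Simulate losing the second block and rebuilding it from parity + survivors.
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]
print("rebuilt block:", rebuilt)
```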
Initialization is the process of preparing a drive for storage use. It erases all data on the drive & makes way for new file system creation.
Consistency check or CC verifies correctness of data in logical drives. This is a feature of some of the RAID hardware controller cards.
This is a consistency check process that is forced when a new logical drive is created. It is an automatic operation that starts 5 minutes after the new logical drive is created.
A RAID array is a group of disks which are configured with RAID. That means they are in a redundant setup to tolerate disk failures.
Just A Bunch Of Disks (JBOD) – hard disks that aren’t configured in a RAID configuration. They are just disks piled or connected in one single enclosure.
RAID has the advantage of tolerating a disk failure & still giving data availability.
When there is no need for redundancy, & when it is OK if a hard disk failure causes some data unavailability, JBOD is preferred over RAID because JBOD is an inexpensive storage solution. It is also easier to set up & start using compared to RAID.
Hot spare is an extra, unused disk drive that is part of the disk subsystem. It is usually in standby mode, ready for service if a drive fails. Whenever there is a drive failure, this hot spare kicks in & takes over the failed drive’s role.
The partitioning or division of a large hard drive into smaller units. A single, large Physical Drive can be partitioned into two or more smaller Logical Drives.
Whenever there is a disk failure in the RAID array, the array goes to a DEGRADED state. When we pull out the failed drive & insert a new functioning drive, the RAID-configured array starts regenerating the data onto the new drive. This process is called rebuilding.
We swap out the failed drive, plug in a new functioning drive & wait for the rebuilding process to complete. We make sure the rebuild process happens without any error. Once that completes, the array is back to the optimal online state.
Online – all drives are working fine.
Degraded – there is a drive failure but the array is still functioning.
Offline – the array or the whole data storage is down.
Rebuilding – storage access is still available, but since a new drive has been inserted in place of a failed drive, data is being written to the new drive, which might slow down the performance of the whole RAID array.
A global hotspare is available for any array in the whole enclosure or storage subsystem.
If there is an enclosure having 10 drives & we have 3 drives in RAID 5 (1st array), 3 more drives in a second RAID 5 (2nd array) & 2 more drives in a RAID 1 config, we can specify in the RAID config utility that a dedicated hotspare is assigned to the 1st RAID 5 array. If there is a drive failure in the 2nd or 3rd array, this dedicated hotspare will not be involved there. But if the array for which it is dedicated has any drive failure, this dedicated hotspare takes over.
If we have a hardware RAID controller card, it gives an option while the machine is booting to enter the RAID BIOS utility. Here we have options to create RAID using a semi-GUI (DOS-based GUI) interface.
Once we install the device drivers & also the RAID config or management utility, we can configure RAID at the OS level using that utility.
In order for RAID to function, there needs to be software either through the operating system or via dedicated hardware to properly handle the flow of data from the computer system to the drive array. This is particularly important when it comes to RAID 5 due to the large amount of computing required to generate the parity calculations.
In the case of software implementations, CPU cycles are taken away from the general computing environment to perform the necessary tasks for the RAID interface. Software implementations are very low cost monetarily because all that is necessary to implement one is the hard drives. The problem with software RAID implementations is the performance drop of the system. In general, this performance hit can be 5% or even greater depending upon the processor, memory, drives used and the level of RAID implemented. Most people do not use software RAID anymore due to the decreasing costs of hardware RAID controllers over the years.
Hardware RAID has the advantage of dedicated circuitry to handle all the RAID drive array calculations outside of the processor. This provides excellent performance for the storage array. The drawbacks to hardware RAID have been the costs. In the case of RAID 0/1 controllers, those costs have become so low that many chipset and motherboard manufacturers are including these capabilities on the motherboards. The real costs rest with RAID 5 hardware that require more circuitry for added computing ability.
RAID 5 or RAID 6 is better for redundancy (availability).
The CCIE Data Centre blueprint makes mention of NPV and NPIV, and Cisco UCS also makes heavy use of both, topics that many may be unfamiliar with. This post (part of my CCIE Data Centre prep series) will explain what they do, and how they’re different.
(Another great post on NPV/NPIV is from Scott Lowe, and can be found here. This is a slightly different approach to the same information.)
NPIV and NPV are two of the most ill-named acronyms I’ve come across in IT, especially since they sound very similar, yet do two fairly different things. NPIV is short for N_Port ID Virtualization, and NPV is short for N_Port Virtualization. Huh? Yeah, not only do they sound similar, but the names give very little indication as to what they do.
First, let’s talk about NPIV. To understand NPIV, we need to look at what happens in a traditional Fibre Channel environment.
When a host plugs into a Fibre Channel switch, the host end is called an N_Port (Node Port), and the FC switch’s interface is called an F_Port (Fabric Port). The host has what’s known as a WWPN (World Wide Port Name, or pWWN), which is a 64-bit globally unique label very similar to a MAC address.
However, when a Fibre Channel host sends a Fibre Channel frame, that WWPN is nowhere in the header. Instead, the host does a Fabric Login, and obtains an FCID (somewhat analogous to an IP address). The FCID is a 24-bit number, and when FC frames are sent in Fibre Channel, the FCID is what goes into the source and destination fields of the header.
Note that the first byte of the FCID is the same as the domain ID of the FC switch that serviced the host’s FLOGI.
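Since the FCID is a 24-bit value laid out as Domain_ID, Area_ID and Port_ID (one byte each), a small Python sketch can split it apart; the example FCID value used here is purely illustrative.

```python
# Split a 24-bit FCID into its Domain_ID / Area_ID / Port_ID bytes.
def parse_fcid(fcid: int) -> dict:
    return {
        "domain": (fcid >> 16) & 0xFF,   # matches the domain ID of the switch
        "area":   (fcid >> 8) & 0xFF,
        "port":   fcid & 0xFF,
    }

print(parse_fcid(0x070100))   # {'domain': 7, 'area': 1, 'port': 0}
```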
In regular Fibre Channel operations, only one FCID is given per physical port. That’s it. It’s a 1 to 1 relationship.
But what if you have an ESXi host, for example, with virtual fibre channel interfaces? For those virtual fibre channel interfaces to complete a fabric login (FLOGI), they’ll need their own FCIDs. Or, what if you don’t want to have a Fibre Channel switch (such as an edge or blade FC switch) go full Fibre Channel switch?
NPIV lets a FC switch give out multiple FCIDs on a single port. Simple as that.
The magic of NPIV: one F_Port gives out multiple FCIDs (0x070100 to the ESXi host, and 0x070200 and 0x070300 to the virtual machines).
NPV: Engage Cloak!
NPV is typically used on edge devices, such as a ToR Fibre Channel switch or a FC switch installed in a blade chassis/infrastructure. What does it do? I’m gonna lay some Star Trek on you.
NPV is a cloaking device for a Fibre Channel switch.
Wait, did you just compare Fibre Channel to a Sci-Fi technology?
How is NPV like a cloaking device? Essentially, an NPV enabled FC switch is hidden from the Fibre Channel fabric.
When a Fibre Channel switch joins a fabric, it’s assigned a Domain_ID, and participates in a number of fabric services. With this comes a bit of baggage, however. See, Fibre Channel isn’t just like Ethernet. A more accurate analogue to Fibre Channel would be Ethernet plus TCP/IP, plus DHCP, distributed 802.1X, etc. Lots of stuff is going on.
And partly because of that, switches from various vendors tend not to get along, at least without enabling some sort of interoperability mode. Without interoperability mode, you can’t plug a Cisco MDS FC switch into, say, a Brocade FC switch. And if you do use interoperability mode and two different vendors in the same fabric, there are usually limitations imposed on both switches. Because of that, not many people build multi-vendor fabrics.
Easy enough. But what if you have a Cisco UCS deployment, or some other blade system, and your Fibre Channel switches are from Brocade? As much as your Cisco rep would love to sell you a brand new MDS-based fabric, there’s a much easier way.
A switch in NPV mode is invisible to the Fibre Channel fabric. It doesn’t participate in fabric services, doesn’t get a domain ID, doesn’t do fabric logins or assign FCIDs. For all intents and purposes, it’s invisible to the fabric, i.e. cloaked. This simplifies deployments, saves on domain IDs, and lets you plug switches from one vendor into a switch of another vendor. Plug a Cisco UCS Fabric Interconnect into a Brocade FC switch? No problem. NPV. Got a Qlogic blade FC switch plugging into a Cisco MDS? No problem, run NPV on the Qlogic blade FC switch (and NPIV on the MDS).
The hosts perform fabric logins just like they normally would, but the NPV/cloaked switch passes the FLOGIs up to the NPIV-enabled port on the upstream switch. The FCID of each device bears the Domain ID of the upstream switch (and the device appears directly attached to the upstream switch).
The NPV enabled switch just proxies the FLOGIs up to the upstream switch. No fuss, no muss. Different vendors can interoperate, we save domain IDs, and it’s typically easier to administer.
TL;DR: NPIV allows multiple FLOGIs (and multiple FCIDs issued) from a single port. NPV hides the FC switch from the fabric.
SCSI [Small Computer System Interface]
Introduced by ANSI in 1986.
An I/O bus interconnecting computers and peripheral devices.
Generic interface for different devices from different vendors.
Easy addition of new SCSI devices.
Allows for faster data input and output.
Ability to process multiple overlapped commands.
The initial version was SCSI-1.
Defined the basics of SCSI Bus.
Talked about cable length, signaling characteristics.
Defined various Commands and Transfer modes.
Revised and upgraded to SCSI-2.
Defined the new features as enhancements to SCSI-1.
Incorporated the backward compatibility with SCSI-1.
The most recent version is SCSI-3.
Many more features were added over SCSI-2.
SCSI-1
Defined the basic 8 bit data bus.
Maximum transfer speed was limited to 5 MB/s.
Had difficulties in acceptance, as many vendors implemented only subsets of this protocol.
SCSI-2
Doubling of the speed of data transfer to 10 MB/s with regular SCSI.
Increasing the width of the SCSI bus from 8 bits to 16/32 bits.
This increases the data throughput of the devices.
More devices per bus.
On buses with Wide SCSI, 16 devices are supported, as opposed to 8 for regular SCSI.
Incorporated new higher density connections.
Active termination which provided the reliable termination of the bus.
Introduced the concept of having multiple outstanding requests on the bus at any point of time.
Included new command sets to support the use of more devices like CD-ROMs, scanners and removable media. SCSI-1 focussed more on supporting hard disks.
Ultra SCSI
Further doubling of system bus speed resulting in data transfer rates up to 20MB/s for regular and 40MB/s for Wide SCSI.
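A back-of-the-envelope Python sketch shows where these figures come from: throughput is roughly the bus clock (millions of transfers per second) multiplied by the bus width in bytes. The generation labels in the comments follow common usage and are an assumption, not text taken from this article.

```python
# Rough parallel-SCSI throughput: clock (MHz) x bus width in bytes.
def throughput_mb_s(clock_mhz: float, bus_width_bits: int) -> float:
    return clock_mhz * (bus_width_bits / 8)

print(throughput_mb_s(5, 8))     # SCSI-1:           5 MB/s
print(throughput_mb_s(10, 8))    # Fast SCSI:       10 MB/s
print(throughput_mb_s(10, 16))   # Fast Wide SCSI:  20 MB/s
print(throughput_mb_s(20, 8))    # Ultra SCSI:      20 MB/s
print(throughput_mb_s(20, 16))   # Ultra Wide SCSI: 40 MB/s
```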
Serial SCSI
Uses serial interconnects as one of its new protocol standards (one such serial standard, IEEE 1394, is also called FireWire).
Improvements over SCSI-2 for the use of Wide SCSI.
SCSI – Electrical Signal types
Single Ended
Conventional signaling as used on other buses.
A positive voltage indicates a logical TRUE and no voltage FALSE.
Each signal is carried on one wire.
Flexible and cost effective, and therefore very common.
Cable length, however, is highly limited due to the effect of noise.
Differential
Each signal is carried by two wires, each the mirror of the other.
A positive voltage on one wire and an equal negative voltage on the other indicates a TRUE.
Zero voltage (electrical ground) on both the wires indicates a FALSE.
Use of two conductors makes this scheme more resilient and immune to electrical noise.
Cost is much higher than single ended.
Back to the technical jargon!!
SCSI variants are described by a combination of SPEED (Regular, Fast, Ultra) and BUS WIDTH.
Bus width of the SCSI Bus:
Narrow – 8 bit wide data bus.
Wide – 16 bit wide data bus.
SCSI ID
Unique identification number assigned to each device on the bus.
Used to identify devices on the bus.
For Narrow SCSI, ID ranges from 0 to 7.
For Wide SCSI, ID ranges from 0 to 15.
ID determines the priority of the device during arbitration.
Larger the ID, higher the priority.
Generally the host adapter is at ID 7 (highest priority).
Termination
Used to prevent signal reflections from entering the SCSI bus.
Provided by using a terminator.
Communication on the SCSI bus is allowed between only two SCSI devices at any given time.
Of the two devices on the SCSI bus, one acts as the initiator and the other as target.
A device usually has a fixed role as an initiator or target; some devices can assume either role.
Certain SCSI bus functions are assigned to the initiator and certain to the target.
Data transfer takes place through the mechanism of handshaking signals.
11 signals for control, 36 for data.
Important signals:
BSY – Bus being used.
SEL – Used by initiator/target to select/reselect a target/initiator.
C/D – Control or Data information on the bus.
I/O – Indicates the direction of data transfer w.r.t the initiator.
MSG – Used during message phase.
REQ – request for an ACK handshake.
ACK – acknowledgement of a REQ.
ATN – signal driven by an initiator to indicate to a target that the initiator has a message ready.
The SCSI architecture has 8 distinct bus phases.
BUS FREE Phase
Indicates that there is no current I/O process on the bus.
SCSI bus is available for connection.
Devices detect this phase after the BSY and SEL signals are both false.
Generally this phase is entered, when a target releases its BSY signal.
ARBITRATION Phase
Allows one SCSI device to gain control of the SCSI bus to initiate or resume an I/O process.
Following steps occur while a SCSI device gains control of the SCSI bus:
Device waits for the BUS FREE phase to occur.
After the Bus Free phase occurs, the device waits for a specified amount of time before driving any signal.
Device then arbitrates for the SCSI bus by asserting the BSY signal and its own SCSI ID bit on the data bus.
Device waits for an arbitration delay, before examining the data bus.
If a higher priority SCSI ID bit is found on the data bus, the device has lost its arbitration and releases the BSY signal and returns to step 1.
Else, the device has won the arbitration. It then asserts the SEL signal.
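The arbitration rule above can be modelled with a short Python sketch. It simply follows the "larger ID wins" priority described here and assumes a 16-bit (Wide) data bus; it does not capture the bus timing details.

```python
# Toy model of SCSI arbitration: every contending device asserts its own ID
# bit on the data bus, and the device that sees no higher-priority ID wins.
def arbitration_winner(contending_ids: list) -> int:
    data_bus = 0
    for scsi_id in contending_ids:
        data_bus |= 1 << scsi_id                  # each device asserts its ID bit
    # Only the highest asserted ID sees no higher bit, keeps BSY and asserts SEL.
    return max(i for i in range(16) if data_bus & (1 << i))

print(arbitration_winner([2, 5, 7]))              # 7 wins the bus and asserts SEL
```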
SELECTION Phase
Allows an initiator to select a target to carry out target functions (READ, WRITE).
I/O signal is negated during this phase to differentiate it from RESELECTION phase.
Selection is done by the following mechanism:
Initiator sets the data bus to a value that is the OR of its SCSI ID and the target SCSI ID bit.
It then asserts the ATN signal indicating a MESSAGE OUT Phase is to follow.
Then the initiator releases the BSY signal.
The target shall determine that it is selected when the SEL and its SCSI ID are high and BSY and I/O lines are low.
It then scans the data bus to find the initiator and asserts the BSY signal.
If more than two SCSI ID bits are on the data bus, the target shall not respond to selection.
RESELECTION Phase
Allows a target to reconnect to an initiator to continue some I/O operation that was previously started by the initiator and suspended by the target.
Reselection follows the mechanism stated below:
After Arbitration phase, the winning device asserts the BSY and SEL lines.
The winning SCSI device becomes a target by asserting the I/O signal.
It then puts its SCSI ID on the data bus, ORed with the ID of the initiator.
The device then releases the BSY signal.
The initiator, on seeing its ID on the data bus along with SEL and I/O being true and BSY being false, determines that it is reselected.
It then finds the target by scanning the data bus.
The reselected initiator shall then assert the BSY signal.
After the target finds that BSY signal is asserted, it releases the SEL signal.
Upon this, the initiator releases the BSY signal and the target asserts the same.
The I/O then continues normally until the target drops the SEL signal.
Information Transfer Phases
Consists of the Command, Data, Status and Message Phases.
C/D, I/O, MSG signals are used to distinguish between the different phases.
The information transfer phases use one or more REQ/ACK handshake signals to control the information transfer.
During the information transfer phases, the BSY signal shall be True and the SEL signal, False.
Two modes of information transfer: asynchronous (the default) and synchronous.
Target controls the direction of data transfer through the I/O signal.
I/O signal true indicates data transfer from target to the initiator.
If I/O is true, the target puts data on the data bus and drives the REQ signal to true.
Initiator reads the data when it finds the REQ signal true, then asserts the ACK signal.
When ACK is true, target may change the data on the data bus and then release REQ.
When REQ becomes false, initiator releases the ACK signal.
After this, the target then can either continue the data transfer or stop it.
I/O signal false indicates data transfer from initiator to the target
The target requests the data transfer by asserting the REQ signal.
Initiator puts the data when it finds the REQ signal true, then asserts the ACK signal.
Target reads the data when it finds the ACK signal true, then negates the REQ signal.
When REQ is false, initiator may change the data on the data bus and then negate ACK.
After this, the target then can either continue the request for data transfer or stop it.
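As a purely illustrative summary, this Python sketch lists the handshake edges for both transfer directions in the order described above; no real signal timing is modelled.

```python
# Walk through one asynchronous REQ/ACK handshake in each direction.
def handshake(direction: str) -> list:
    if direction == "data_in":                # target -> initiator (I/O true)
        return [
            "target: put data on the bus, assert REQ",
            "initiator: see REQ true, read data, assert ACK",
            "target: see ACK true, may change data, release REQ",
            "initiator: see REQ false, release ACK",
        ]
    return [                                  # data_out: initiator -> target (I/O false)
        "target: assert REQ to request data",
        "initiator: see REQ true, put data on the bus, assert ACK",
        "target: see ACK true, read data, negate REQ",
        "initiator: see REQ false, may change data, negate ACK",
    ]

for step in handshake("data_in") + handshake("data_out"):
    print(step)
```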
Synchronous transfer
Used generally for the Data Transfer phase.
Used only if a prior agreement is reached using messaging phases.
The offset for the REQ/ACK signals is also agreed upon at that time.
Offset specifies the number of REQ signals the target sends in advance of the number of ACK signals received from the initiator.
For each REQ signal, one byte of data is transferred and an ACK pulse is sent.
Different phases during Information Transfer
Command Phase
Allows the target to request command information from the initiator.
Data Phase
DATA IN – Allows the target to request that data be sent to the initiator from the target.
DATA OUT – Allows the target to request that data be sent to the target from the initiator.
Status Phase
Allows the target to request that status information be sent to the initiator from the target.
Message Phase
MESSAGE IN – Allows the target to request that messages be sent to the initiator from the target.
MESSAGE OUT – Allows the target to request that messages be sent to the target from the initiator.
The target invokes the Message Out phase in response to the attention condition raised by the initiator via the ATN signal.
Two conditions, which can cause the SCSI device to perform certain actions and alter the phase sequence.
Attention Condition – Allows the initiator to inform the target that it has a message ready. Target reads it by performing a Message Out Phase.
Reset Condition – Immediately clears all SCSI devices from the bus. This condition takes precedence over all the other phases on the bus.
Allows communication between target and initiator for interface management.
Messages may be of one, two or multiple bytes in length.
One or more messages may be sent during a single MESSAGE Phase.
Some of the generally used messages:
Abort – Sent from initiator to target for clearing any I/O process.
Command Complete – Sent from target to initiator to indicate the completion of an I/O process and that valid status has been sent to the initiator.
Disconnect – Sent from the target to the initiator indicating that the present connection is being broken and will later be resumed by invoking the RESELECTION phase.
SCSI commands are sent to the target in the form of a Command Descriptor Block (CDB).
First byte in any SCSI Command should contain the valid opcode.
Opcodes are classified into Optional, Mandatory and Vendor specific.
Some fields in the CDB are reserved and are set to zero.
If a target receives a CDB with the reserved fields not set to zero, it shall terminate the command with a CHECK CONDITION status.
CDBs are organised into groups by length; group 0 is the 6 byte CDB (longer 10 and 12 byte formats exist in other groups).
Groups 3 and 4 are reserved; groups 6 and 7 are vendor specific.
Format of a 6-byte CDB
The transfer length field specifies the amount of data to be transferred, usually the number of blocks.
Commands that use one byte for the transfer length allow up to 256 blocks of data to be transferred by one command.
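As a hypothetical illustration of the 6-byte format, the sketch below packs a READ(6) CDB (opcode 0x08): opcode, a 21-bit logical block address, the one-byte transfer length and the control byte, following the classic SCSI-2 6-byte layout.

```python
import struct

# Build a 6-byte READ(6) CDB. A transfer length of 0 means 256 blocks.
def build_read6_cdb(lba: int, transfer_length: int) -> bytes:
    return struct.pack(
        ">BBBBBB",
        0x08,                      # opcode: READ(6)
        (lba >> 16) & 0x1F,        # top 5 bits of the LBA (upper bits reserved/LUN)
        (lba >> 8) & 0xFF,         # middle byte of the LBA
        lba & 0xFF,                # low byte of the LBA
        transfer_length & 0xFF,    # number of blocks to read
        0x00,                      # control byte
    )

print(build_read6_cdb(lba=0x1234, transfer_length=8).hex())   # 080012340800
```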
SCSI Status
A status byte shall be sent from the target to the initiator during the STATUS Phase at the completion of each command.
General status conditions:
Good – Target successfully completed the command.
Check condition – Indicates that a contingent allegiance condition occurred.
Busy – Indicates that target is busy.
Command Terminated – Returned whenever the target terminates the I/O process.
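For reference, the sketch below maps the status conditions listed above to the status byte values defined by the SCSI-2 standard; it is not an exhaustive list of status codes.

```python
# Common SCSI status byte values (per SCSI-2).
SCSI_STATUS = {
    0x00: "GOOD",
    0x02: "CHECK CONDITION",
    0x08: "BUSY",
    0x22: "COMMAND TERMINATED",
}

def describe_status(status: int) -> str:
    return SCSI_STATUS.get(status, f"other/unknown (0x{status:02x})")

print(describe_status(0x02))   # CHECK CONDITION
```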
Commonly used SCSI commands and their opcodes:
Test unit ready: Queries the device to see if it is ready for data transfers (disk spun up, media loaded, etc.). Opcode = 0x00
Inquiry: Obtains basic information from a target device. Opcode = 0x12
Request sense: Returns any error codes from the previous command that returned an error status. Opcode = 0x03
Send diagnostic: Requests the target to perform a self-test. The test is standardized and the response is GOOD if all is well, or CHECK CONDITION if the test fails. Opcode = 0x1D
Start/Stop unit: Spins disks up and down, loads/unloads media. Opcode = 0x1B
Read capacity: Returns storage capacity. Opcode = 0x25
Format unit: Sets all sectors to all zeroes, and also allocates logical blocks avoiding defective sectors.
Read (four variants): Reads data from a device.
Write (four variants): Writes data to a device.
Log sense: Returns current information from log pages.
Mode sense: Returns current device parameters from mode pages.
Mode select: Sets device parameters in a mode page.
What is RSCN and when does an RSCN take place?
RSCN stands for Registered State Change Notification. RSCN is the process of sending a notification frame to all registered devices when a change happens in the fabric. Below are a few scenarios when an RSCN can take place: a new device is added to the fabric; a device is removed from the fabric; a zone has changed; a switch name or IP address has changed; nodes leave or join the fabric, for example when a device is powered on or shut down, or when zoning changes.
FAQ
How do you do zoning on a SAN switch?
What is a SAN switch fabric?
- 1. Run the switchshow command.
- admin> switchshow
- Note: For any devices that connect using NPIV, use the nsshow command to find the WWPN addresses logged in to the port. …
- 2. Run the cfgshow command.
- admin> cfgshow
- cfgshow will display the configuration and how the zoning is currently configured.
How do you connect two SAN switches together?