Section 1 – Implement and Manage Storage
Objective 1.1 – Implement Complex Storage Solutions
Determine use cases for and configure VMware DirectPath I/O
Prerequisites
Enable high performance network I/O on at least one Cisco UCS port profile on a supported Cisco VM-FEX distributed switch. For supported switches and switch configuration, see Cisco's documentation at http://www.cisco.com/go/unifiedcomputing/b-series-doc.
Power off the virtual machine.
Procedure
1 Locate the virtual machine in the vSphere Web Client.
a To locate a virtual machine, select a datacenter, folder, cluster, resource pool, or host and click the Related Objects tab.
b Click Virtual Machines and select the virtual machine from the list.
2 Click the Manage tab of the virtual machine, and select Settings > VM Hardware.
3 Click Edit.
4 Click the Virtual Hardware tab.
5 Expand the Memory section, and set the Limit to Unlimited.
6 Expand the Network adapter section to configure a passthrough device.
7 Select a port profile with high performance enabled from the network drop-down menu and click OK.
8 Power on the virtual machine.
After the virtual machine is powered on, DirectPath I/O appears as active on the Hardware tab.
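If you prefer to handle the power off prerequisite and step 8 from the ESXi shell, vim-cmd can do it; the VM name and the VM ID below are placeholders:
vim-cmd vmsvc/getallvms | grep -i MyVM   # look up the VM's ID (MyVM is a placeholder name)
vim-cmd vmsvc/power.off 42               # prerequisite: power off before editing the hardware
vim-cmd vmsvc/power.on 42                # step 8: power the VM back on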
Determine requirements for and configure NPIV
§ NPIV works only with RDM disks; virtual machines with regular virtual disks use the WWNs of the host's physical HBAs.
1 Select a virtual machine.
■ In the virtual machines and templates inventory tree, select a group of virtual machines and select a virtual machine from the list on the right.
■ Search for a virtual machine and select it from the search results list.
2 In the VM Hardware panel, click Edit Settings.
3 Click VM Options.
4 Click the Fibre Channel NPIV triangle to expand the NPIV options.
5 (Optional) Select the Temporarily Disable NPIV for this virtual machine check box.
6 Select an option for assigning WWNs.
■ To leave WWNs unchanged, select Leave unchanged.
■ To have vCenter Server or the ESXi host generate new WWNs, select Generate New WWNs.
■ To remove the current WWN assignments, select Remove WWN assignment.
7 Click OK.
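To confirm which WWNs were assigned, you can grep the virtual machine's .vmx file from the ESXi shell; the datastore and VM names below are placeholders, and the exact key names (typically wwn.node and wwn.port entries) can vary by release:
grep -i wwn /vmfs/volumes/datastore1/MyVM/MyVM.vmx   # show the NPIV node and port WWNs assigned to the VM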
Understand use cases for Raw Device Mapping
1 Select a virtual machine.
■ In the virtual machines and templates inventory tree, select a group of virtual machines and select a virtual machine from the list on the right.
■ Search for a virtual machine and select it from the search results list.
2 In the VM Hardware panel, click Edit Settings.
3 Click Virtual Hardware.
4 From the Add a device drop-down menu, select RDM Disk and click Add device.
5 Select the target LUN for the raw device mapping and click OK.
The disk appears in the virtual device list.
6 Select the location for the mapping file.
■ To store the mapping file with the virtual machine configuration file, select Store with the virtual machine.
■ To select a location for the mapping file, select Browse and select the datastore location for the disk.
7 Select a compatibility mode (a vmkfstools sketch of both modes follows this procedure).
Physical
Allows the guest operating system to access the hardware directly. Physical compatibility is useful if you are using SAN-aware applications in the virtual machine. However, a virtual machine with a physical compatibility RDM cannot be cloned, made into a template, or migrated if the migration involves copying the disk.
Virtual
Allows the RDM to behave as if it were a virtual disk, so you can use such features as taking snapshots, cloning, and so on. When you clone the disk or make a template out of it, the contents of the LUN are copied into a .vmdk virtual disk file. When you migrate a virtual compatibility mode RDM, you can migrate the mapping file or copy the contents of the LUN into a virtual disk.
8 Accept the default or select a different virtual device node.
In most cases, you can accept the default device node. For a hard disk, a nondefault device node is useful to control the boot order or to have different SCSI controller types. For example, you might want to boot from an LSI Logic controller and share a data disk with another virtual machine using a BusLogic controller with bus sharing turned on (see the .vmx sketch after this procedure).
9 (Optional) If you selected virtual compatibility mode, select a disk mode.
Disk modes are not available for RDM disks using physical compatibility mode.
Dependent - Dependent disks are included in snapshots.
Independent – Persistent - Disks in persistent mode behave like conventional disks on your physical computer. All data written to a disk in persistent mode is written permanently to the disk.
Independent – Nonpersistent - Changes to disks in nonpersistent mode are discarded when you power off or reset the virtual machine. With nonpersistent mode, you can restart the virtual machine with a virtual disk in the same state every time. Changes to the disk are written to and read from a redo log file that is deleted when you power off or reset.
10 Click OK.
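As a sketch of how the two compatibility modes from step 7 map to the command line, vmkfstools can create the mapping file directly; the device ID, datastore, and file names below are placeholders:
vmkfstools -r /vmfs/devices/disks/naa.<device id> /vmfs/volumes/datastore1/MyVM/MyVM_rdm.vmdk    # virtual compatibility RDM
vmkfstools -z /vmfs/devices/disks/naa.<device id> /vmfs/volumes/datastore1/MyVM/MyVM_rdmp.vmdk   # physical (pass-through) compatibility RDM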
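For the controller and disk-mode choices in steps 8 and 9, the resulting .vmx entries look roughly like the following; the controller numbers and values here are illustrative assumptions, not taken from the guide:
scsi0.virtualDev = "lsilogic"               # boot disk on an LSI Logic controller
scsi1.virtualDev = "buslogic"               # second controller for the shared data disk
scsi1.sharedBus = "virtual"                 # bus sharing turned on
scsi1:0.mode = "independent-persistent"     # disk mode selected in step 9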
Configure vCenter Server storage filters
1. In the vSphere Client, select Administration > vCenter Server Settings.
2. In the settings list, select Advanced Settings.
3. In the Key text box, type a key.
* config.vpxd.filter.vmfsFilter -> VMFS Filter
* config.vpxd.filter.rdmFilter -> RDM Filter
* config.vpxd.filter.SameHostAndTransportsFilter -> Same Host and Transports Filter
* config.vpxd.filter.hostRescanFilter -> Host Rescan Filter
4. In the Value text box, type False for the specified key.
5. Click Add.
6. Click OK.
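The same filters can also be disabled in vCenter Server's vpxd.cfg (restart the vCenter Server service afterwards); a minimal sketch, assuming the usual config > vpxd > filter element layout, that turns off the Host Rescan Filter:
<config>
  <vpxd>
    <filter>
      <hostRescanFilter>false</hostRescanFilter>
    </filter>
  </vpxd>
</config>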
Understand and apply VMFS re-signaturing
1 Click the Create a New Datastore icon.
2 Type the datastore name and, if required, select the placement location for the datastore.
3 Select VMFS as the datastore type.
4 From the list of storage devices, select the device that has a value displayed in the Snapshot Volume column.
A value in the Snapshot Volume column indicates that the device contains a copy of an existing VMFS datastore.
5 Under Mount Options, select Assign a New Signature and click Next.
6 Review the datastore configuration information and click Finish.
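The same operation is available from the ESXi shell; a short sketch that lists detected VMFS copies and resignatures one of them (the volume label is a placeholder):
esxcli storage vmfs snapshot list                          # show detected copies of VMFS volumes
esxcli storage vmfs snapshot resignature -l "datastore1"   # assign a new signature to the copy
esxcli storage vmfs snapshot mount -l "datastore1"         # alternative: keep the existing signature and mount the copy (only if the original is not also mounted)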
Understand and apply LUN masking using PSA-related commands
esxcfg-scsidevs -m (the -m flag lists each VMFS datastore and the device that backs it)
esxcfg-mpath -L | grep naa.5000144fd4b74168 (list the paths to the device to be masked)
esxcli storage core claimrule add -r 500 -t location -A vmhba35 -C 0 -T 1 -L 0 -P MASK_PATH (add a MASK_PATH claim rule for the path)
esxcli storage core claimrule load (load the new claim rule into the VMkernel)
esxcli storage core claiming reclaim -d naa.5000144fd4b74168 (unclaim the device so the mask takes effect)
Procedure
1. Find the device name of the datastore you want to hide: esxcfg-mpath -L OR esxcfg-scsidevs -m
2. Check the existing claim rules: esxcli storage core claimrule list
3. Assign the MASK_PATH plug-in to a path by creating a new claim rule. You normally need one rule per path, so with two redundant HBAs (for example vmhba33 and vmhba34) you would need four commands in total, two per HBA; this example shows only one HBA and one path (a full sketch follows this procedure): esxcli storage core claimrule add -r 500 -t location -A vmhba33 -C 0 -T 1 -L 1 -P MASK_PATH
4. Load the claim rules: esxcli storage core claimrule load
5. Verify the claim rule was added: esxcli storage core claimrule list
6. Unclaim the device from its current plug-in: esxcli storage core claiming reclaim -d naa.UUID
7. Run the path claim rules: esxcli storage core claimrule run
8. Verify the mask was applied: Host > Configuration tab > Storage > refresh the view, then rescan.
a. Verify via the shell: esxcfg-scsidevs -m ; to list all SCSI devices and confirm the masked LUN is gone: esxcfg-scsidevs -c
b. You can also check whether the paths are still active: esxcfg-mpath -L | grep naa.UUID
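A consolidated sketch of the fully redundant case from step 3 (two HBAs, two target paths each); the adapter, target, and LUN numbers and naa.UUID are placeholders for your environment, and each rule needs its own unused rule ID:
esxcli storage core claimrule add -r 500 -t location -A vmhba33 -C 0 -T 1 -L 1 -P MASK_PATH
esxcli storage core claimrule add -r 501 -t location -A vmhba33 -C 0 -T 2 -L 1 -P MASK_PATH
esxcli storage core claimrule add -r 502 -t location -A vmhba34 -C 0 -T 1 -L 1 -P MASK_PATH
esxcli storage core claimrule add -r 503 -t location -A vmhba34 -C 0 -T 2 -L 1 -P MASK_PATH
esxcli storage core claimrule load                  # load the new rules
esxcli storage core claiming reclaim -d naa.UUID    # unclaim the masked device
esxcli storage core claimrule run                   # apply the loaded rules to the paths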