
EMC Host Connectivity Guide for VMware ESX Server


Contents

1. ........ 48
Host reported as attached but manually registered example ........ 49
SCSI commands encapsulated by Ethernet headers ........ 50
VMkernel and service console on a single vSwitch ........ 51
Two NICs on a single vSwitch iSCSI configuration ........ 57
Two NICs in dual vSwitch iSCSI configuration ........ 58
vSwitch configuration ........ 59
Cisco Unified Computing System example ........ 91
SMC interface example ........ 99
Device Masking and Mapping Masking dialog box ........ 100
Masking Set LUN Addresses dialog box ........ 101
Set Port Attributes dialog box ........ 102
Register Initiator Record window ........ 109
Storage Group Properties window ........ 110
Software Installation Wizard ........ 112
Software Installation Wizard window ........ 113
Non-Disruptive Upgrade Delay window ........ 114
Software Package Selection window ........ 116
Software Operation Progress History window ........ 117
LUNs example ........ 119
View LUN properties ........ 120
Menus ........ 121
Confirm trespass LUNs ........ 122
Report on LUNs ........
2. Create thin metadevices on page 218
Unbind thin device (TDEV) and draining data devices (TDAT) on page 218
Further information on page 219

The examples that follow were established with the following environment:
ESX 4.1 RTM with Solutions Enabler
Symmetrix VMAX-1SE
2 x 10 GB thin devices (TDEV)
4 x 4 GB data devices (TDAT)
1 x thin pool

The provisioning tasks are:
Create thin devices (TDEV)
Create data devices (TDAT)
Create a thin pool
Add data devices to the thin pool
Bind thin devices to the thin pool

Create two 10 GB TDEVs using the symconfigure command. FBA emulation is chosen for the VMware ESX host. <sid> is the last two digits of the Symmetrix serial number, <numberOfTDEV> is the number of thin devices to create, and <n> is the numeric size of the LUN in MB, GB, or CYLS.

    symconfigure -sid <sid> -cmd "create dev count=<numberOfTDEV>, size=<n> MB, config=TDEV;" prepare -nop
    symconfigure -sid <sid> -cmd "create dev count=<numberOfTDEV>, size=<n> MB, config=TDEV;" commit -nop

The commit phase produces output similar to the following:

    Processing symmetrix 000194900098
    Performing Access checks..................................Allowed
    Checking Device Reservations..............................Allowed
    Initiating COMMIT of configuration changes................Queued
    COMMIT requesting required resources......................Obtained
    Local: COMMIT.............................................Done
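The commands above use placeholders. A minimal end-to-end sketch of the same sequence, assuming Symmetrix SID 098, a thin pool named TP, and illustrative device ID ranges (the SID, pool name, and device IDs are hypothetical), would look like this:

    # Create two 10 GB thin devices (FBA emulation for the ESX host)
    symconfigure -sid 098 -cmd "create dev count=2, size=10 GB, emulation=FBA, config=TDEV;" commit -nop
    # Create four 4 GB data devices to back the pool
    symconfigure -sid 098 -cmd "create dev count=4, size=4 GB, emulation=FBA, config=2-Way Mir, attribute=datadev;" commit -nop
    # Create the thin pool, then enable the data devices in it
    symconfigure -sid 098 -cmd "create pool TP type=thin;" commit -nop
    symconfigure -sid 098 -cmd "add dev 18f0:18f3 to pool TP type=thin, member_state=ENABLE;" commit -nop
    # Bind the thin devices to the pool
    symconfigure -sid 098 -cmd "bind TDEV 18fe:18ff to pool TP;" commit -nop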
3. This chapter provides HBA and iSCSI configuration information for the VMware ESX Server with VMAX and Symmetrix, VNX series, and CLARiiON systems:

Fibre Channel ........ 34
iSCSI ........ 50
FCoE initiator configurations ........ 71

Fibre Channel
Fibre Channel (FC) is a gigabit-speed network technology. A Fibre Channel SAN is a collection of Fibre Channel nodes that communicate with each other, typically through fibre-optic media.

Node
A node is defined as a member of the Fibre Channel network. A node is provided a physical and logical connection to the network by a physical port on a Fibre Channel switch. Every node requires the use of specific drivers to access the network.

Fabric switches
Fibre Channel nodes communicate with each other through one or more Fibre Channel switches, also called fabric switches. The primary function of a fabric switch is to provide a physical connection and logical routing of data frames between the attached devices.

Fabric zoning
With ESXi hosts, use single-initiator zoning or single-initiator/single-target zoning. Single-initiator/single-target is the preferred zoning practice.

IMPORTANT: EMC does not support multi-initiator zones in a VMware ESX Server fabric environment.

Zoning should be performed on the fabric by creating zone sets that contain the initiator and the target(s).
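To make the single-initiator/single-target practice concrete, here is a hedged sketch of creating such a zone on a Brocade Fabric OS switch; the zone name, configuration name, and WWPNs are hypothetical, and Cisco MDS fabrics use a different (zoneset-based) command set:

    # One zone: one host HBA WWPN paired with one array front-end port WWPN
    zonecreate "esx01_hba0__vmax_fa1c0", "10:00:00:00:c9:12:34:56; 50:06:04:8a:d5:f0:b8:7c"
    # Add the zone to a configuration and activate it
    cfgcreate "FABRIC_A_CFG", "esx01_hba0__vmax_fa1c0"
    cfgenable "FABRIC_A_CFG"

Each additional HBA/target pairing gets its own zone added to the same configuration, keeping every zone to exactly one initiator and one target.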
4. Virtual Provisioning with VNX series and CLARiiON ........ 195
Virtual Provisioning with VMAX or Symmetrix ........ 208
Implementation considerations ........ 220

Terminology
The terms defined in this section include thin LUN, thin pool, user capacity, and usable pool capacity.

Virtual Provisioning
It is challenging for storage administrators to forecast how much storage space will be required by the various applications in their data centers. Administrators typically allocate space based on anticipated future storage growth. They do this to reduce the need to add storage later on, while applications are running, but it can result in over-provisioning storage capacity. Over-provisioning also leads to increased power, cooling, and floor-space requirements. Even with careful planning, it may still be necessary to provision additional storage in the future, which may require application downtime depending on the operating systems involved.

The term Virtual Provisioning applies to EMC's family of provisioning technologies. EMC Virtual Provisioning presents more storage to an application than is physically available. Additional physical storage is allocated automatically when new data blocks are written. Virtual Provisioning can improve storage capacity utilization and simplify storage management by presenting the application with sufficient capacity for an extended period of time. This
5. [Screen capture: Navisphere management applet]
Figure 7 Connectivity Status dialog

5. In order to manually register your host's HBAs, select the WWN of the HBA and click Register. You are prompted to add the server information, such as the server name and IP address. Repeat this step for each HBA instance for this VMware ESX Server host. This registers the HBA or HBAs for your VMware host.

6. Another dialog, Register Initiator Record, appears. When attaching a VMware ESX Server to a VNX series or CLARiiON system in a failover-enabled environment, the required settings are as follows:

Initiator Type: CLARiiON Open
ArrayCommPath: Enabled
FailOverMode: 1
Unit Serial Number: Array

Note that the failover functionality referred to here is the native failover functionality incorporated into the VMkernel, not PowerPath. PowerPath is not available for the VMkernel.

Figure 8 on page 47 shows an example of registering for a failover-enabled environment.

Note: The box for the ArrayCommPath parameter is checked and the Failover Mode is set to 1.

[Screen capture: Navisphere Enterprise Storage window showing the initiator information fields of the Register Initiator Record dialog]
6. EMC Host Connectivity Guide for VMware ESX Server
P/N 300-002-304
REV 45

EMC Corporation
Corporate Headquarters: Hopkinton, MA 01748-9103
1-508-435-1000
www.EMC.com

Copyright 2015 EMC Corporation. All rights reserved. Published November 2015.

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

THE INFORMATION IN THIS PUBLICATION IS PROVIDED "AS IS." EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

EMC2, EMC, and the EMC logo are registered trademarks or trademarks of EMC Corporation in the United States and other countries. All other trademarks used herein are the property of their respective owners.

For the most up-to-date regulatory document for your product line, go to EMC Online Support (http://support.EMC.com).

Contents

Preface ........ 11

Chapter 1 Introduction to VMware Infrastructure/vSphere
VMware vSphere ........ 18
vSphere ........ 18
VMware ESX/ESXi Server
7. ........ 19
VMkernel ........ 19
Service console ........ 20
Useful VMware ESX/ESXi Server utilities and functions ........ 21
Control interface ........ 23
VMware Web UI ........ 23
VMware vSphere Client ........ 23
VMware vCenter Server ........ 23
Connecting EMC storage with ESX/ESXi Server ........ 24

Chapter 2 Installation of ESX/ESXi Server
Installation ........ 26
Installation media ........ 26
Installation methods ........ 26
Initiator setting and installation ........ 28
Host Bus Adapter (HBA) ........ 28
iSCSI card ........ 29
Converged Network Adapter (CNA) ........ 30
Adapter installation ........ 31
Install adapter card ........ 31
Recommendations ........ 32

Chapter 3 Connectivity
Fibre Channel ........ 34
Fabric zoning ........ 34
VMAX and Symmetrix connectivity ........ 34
VNX series and CLARiiON connectivity ........ 40
iSCSI ........ 50
VMware ESX SW iSCSI ........ 51
VMware ESX HW iSCSI ........ 63
VMAX or Symmetrix connectivity ........ 64
VNX series and CLARiiON connectivity
8. ........ 172
Multipathing in VMware ESXi/ESX 5.x and ESXi/ESX 4.x with VPLEX ........ 173
ESX Server ........ 175
ESX/ESXi 4.x and ESXi 5.x ........ 175
PowerPath/VE for VMware ESX/ESXi 4.x and ESXi 5.x ........ 177
Major components ........ 178
Supported storage types ........ 178
PowerPath commands ........ 179
Claim rules ........ 180

Appendix A Migration Considerations
ESX/ESXi 3.5 ........ 182
ESX 4.0 ........ 183
ESX/ESXi 4.x and ESXi 5.x ........ 184

Appendix B Virtual Provisioning
Virtual Provisioning ........ 186
Terminology ........ 186
Traditional versus Virtual (thin) Provisioning ........ 188
Monitoring, adding, and deleting pool capacity ........ 189
Virtual LUN architecture and features ........ 191
VMware Virtual Machine File System with thin LUN ........ 193
ESX 3.x ........ 193
ESX 4.x ........ 194
Virtual Provisioning with VNX series and CLARiiON ........ 195
Virtual Provisioning on VMware ESX v3.x ........ 195
Virtual Provisioning on VMware ESX v4.x
9. ........ 67
FCoE initiator configurations ........ 71
Configuring Emulex FCoE CNAs with VMware ESX Server ........ 71
Configuring QLogic FCoE CNAs with VMware ESX Server ........ 80
Configuring Brocade FCoE CNAs with VMware ESX Server ........ 85
Cisco Unified Computing System with FCoE ........ 90
Configuring FCoE for Intel Ethernet Server Adapter with VMware ESX server ........ 92

Chapter 4 Array Information and Configuration
VMAX and Symmetrix array configurations ........ 94
Required storage system configuration ........ 94
Addressing VMAX or Symmetrix devices ........ 94
Required director bit settings ........ 98
EMC Symmetrix Management Console (SMC) ........ 99
Recommendations for optimal performance ........ 103
ESX host in the VNX series and CLARiiON environment ........ 106
VNX series and CLARiiON failover modes ........ 106
Adding the VMware ESX server host to a storage group ........ 109
Performing a VNX series and CLARiiON NDU with VMware ESX server hosts ........ 110
Manual trespass on VNX series and CLARiiON systems to recover the original path ........ 117
EMC VPLEX ........ 124
VPLEX documentation ........ 124
Prerequisites ........
10. ........ 198
Virtual Provisioning with VMAX or Symmetrix ........ 208
Main components ........ 208
Management tools ........ 209
Virtual Provisioning on EMC VMAX or Symmetrix ........ 210
Implementation considerations ........ 220

Appendix C Virtual Machine File System
Virtual Machine File System (VMFS) datastore ........ 222
Volume alignment ........ 223
Version history ........ 224
Size limits ........ 225

Appendix D Raw Disk Usage and Mapping
Raw disk usage and caveats ........ 228
Raw Device Mappings (RDM) ........ 229

Appendix E Boot from SAN
Booting from VMAX or Symmetrix storage arrays ........ 232
Booting from VNX series and CLARiiON storage systems ........ 234

Figures
Figure  Title  Page
1 One host, two switches, and one VMAX or Symmetrix array ........ 36
2 One host, two switches, and four VMAX or Symmetrix arrays ........ 37
3 SMC Properties verification screen ........ 38
4 Solutions Enabler CLI verification screen ........ 39
5 One host, two switches, and one VNX series or CLARiiON system ........ 42
6 One host, two switches, and four VNX series or CLARiiON systems ........ 43
7 Connectivity Status dialog ........ 46
8 Registering for a failover-enabled environment example ........ 47
9 Warning message example
11. [Screen capture: Navisphere tree view with the storage system right-click menu open, showing Software Operations > Software Installation Wizard, along with Connectivity Status and Software Operation Progress History options]
Figure 23 Software Installation Wizard

The Software Installation Wizard guides you through the upgrade.

6. Install the Unisphere Block enabler for VNX series after the NDU process completes successfully.

For an example on CLARiiON systems through Navisphere Manager, refer to Figure 24 on page 113 and the instructions following.

[Screen capture: Storage System Software Installation dialog for CX700_3 (12.70): "Welcome to the Software Installation Wizard. This wizard will guide you through the process of installing software onto storage system CX700_3 (12.70). The wizard steps are: 1. Determine that you have no failed storage processors; 2. Select software to install; 3. Report trespassed LUNs; 4. Review installation information; 5. Start installation and monitor status."]
Figure 24 Software Installation Wizard window

Click Next to continue. The
12. Considerations

EMC Enginuity code versions 5671.58.64 and later (for DMX and DMX-2) and 5771.87.95 and later (for DMX-3) provide support for compliance with newer SCSI protocol specifications, specifically SCSI Primary Commands 2 (SPC-2), as defined in the SCSI documentation at http://www.t10.org.

The SPC-2 implementation in Enginuity includes functionality which, based on OS and application support, may enhance disk-attach behavior to use newer SCSI commands optimized for a SAN environment as implemented in Fibre Channel. This is in contrast to legacy, non-SPC-2 functionality, which was targeted at older SCSI implementations utilizing physical SCSI-bus-based connectivity that cannot leverage the enhanced functionality of newer SCSI specifications.

In environments sharing director ports between hosts with multiple vendor operating systems, ensure that all hosts' operating systems are capable of supporting the SPC-2 functionality before enabling it on the port. If any OS sharing the affected director port does not support SPC-2 functionality, the SPC-2 bit cannot be set on a per-port basis and must be set on a per-initiator basis using the Solutions Enabler 6.4 CLI. Refer to the EMC Solutions Enabler Symmetrix Array Controls CLI v6.4 Product Guide, available on http://support.EMC.com, for details regarding how to set the SPC-2 bit on a per-initiator basis.
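As a hedged illustration (the exact procedure is in the product guide cited above), setting the SPC-2 flag for a single initiator with the Solutions Enabler symmask command might look like the following; the SID, WWN, and director/port values are placeholders:

    # Enable the SPC-2 flag for one HBA login on director 1C, port 0
    symmask -sid 098 -wwn 10000000c9123456 -dir 1C -p 0 set hba_flags on SPC2 -enable
    # Confirm the per-initiator override in the masking database
    symmaskdb -sid 098 list database -dir 1C -p 0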
13. lpfc0t00 DID 060300 WWPN 50:06:01:61:10:60:12:5c WWNN 50:06:01:60:90:60:12:5c
lpfc0t01 DID 060400 WWPN 50:06:01:69:10:60:12:5c WWNN 50:06:01:60:90:60:12:5c

When using QLogic HBAs, the same information is logged in /proc/scsi/qla2x00/N, where N indicates the file for each adapter in the system when the driver is loaded. Grep the file to obtain the initiator and target information. For example, for a host with QLA23xx HBAs:

    grep scsi-qla /proc/scsi/qla2300/0

produces output similar to the following for the first QLogic HBA:

    scsi-qla0-adapter-node=200000e08b0910a7;
    scsi-qla0-adapter-port=210000e08b0910a7;
    scsi-qla0-target-0=5006016810601270;
    scsi-qla0-target-1=5006016010601270;
    scsi-qla0-target-2=50060160082012bb;
    scsi-qla0-target-3=50060169082012bb;

Now that the WWNs have been identified, the VMware ESX Server host can be registered to the VNX series or CLARiiON system. The following section describes the manual registration process.

Manually register the host
In order to manually register the host on the VNX series or CLARiiON system, perform the following steps:
1. Start the management interface in a web browser on a host to be used for management purposes.
2. Select the Storage tab so that the arrays being managed by the management interface are displayed.
3. Right-click the appropriate array and select the Connectivity Status option.
4. The Connectivity Status dialog
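A small helper along the same lines can dump the login information for every adapter at once. This is a sketch, not from the guide; the proc paths follow the driver names used above and may differ by driver version:

    # Collect initiator/target info from all QLogic and Emulex proc entries
    for f in /proc/scsi/qla2300/* /proc/scsi/lpfcdd/*; do
        [ -f "$f" ] || continue
        echo "== $f =="
        grep -E 'adapter|target|DID' "$f"
    done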
14. For more details on the vihostupdate command, see the vSphere Command-Line Interface Installation and Reference Guide located at http://www.VMware.com. For ESX 3.5 U2 and later, see the Remote Command-Line Interface Installation and Reference Guide located at http://www.VMware.com.

Verify that the driver is installed successfully:
1. Run the esxupdate query command. A message containing the information about the driver appears.
2. View the PCI ID XML file in the /etc/vmware/pciid directory. The driver information is available in the file.
3. Check for the latest version of the driver module in the following directory: /usr/lib/vmware/vmkmod
4. Enter the vmkload_mod -l command to verify that the driver is loaded and functioning. The driver appears in the displayed list.

Configuring QLogic FCoE CNAs with VMware ESX Server
This section provides information on configuring QLogic FCoE CNAs with the VMware ESX Server, including:
Installing the CNA on page 80
Updating the QLogic CNA boot BIOS on page 80
Updating the QLogic CEE/Menlo firmware on page 82
Installing the QLogic CNA driver on page 82

Installing the CNA
To install the CNA:
1. Install VMware ESX Server 3.5 update 2 or later on your server. For instructions on installing the operating system, refer to the relevant documentation at http://www.VMware.com.
2. Power off the host system.
3. With the
15. IMPORTANT: ESX Server only supports installation on x86-architecture servers.

The following are installation methods:
Graphical mode: the user installs ESX through graphical instructions.
Text mode: ESX is installed by reading text options and giving commands.
Scripted install: introduced in ESX 4.x and ESXi 5, used for a quick install on the first hard drive or for deployment on multiple servers in a shorter time.

Refer to the following sections for specific recommendations for the following versions:
ESX 3.x on page 27
ESX 4.x on page 27
ESXi 5 on page 27

ESX 3.x
Note: For the installation of ESX 3.x and ESX 4.x there are default partition sizes. It is recommended that you use the default sizes.

ESX 3.x is based on a 32-bit modified Red Hat version 2.4.21 or newer kernel:
/boot: 101 MB, primary
swap: 544 MB, primary
/: 5 GB, primary
/var/log: 2 GB

ESX 4.x
ESX 4.x is based on the Red Hat Linux kernel, which is a 64-bit system. The default service console partition information is as follows:
/boot: 1.10 GB, primary
swap: 600 MB, primary
/var/log: 2 GB

ESXi 5
All freshly installed hosts in vSphere 5 use the GUID Partition Table
16. on page 79.

Install the Emulex CNA driver on the VMware ESX Server
To install the Emulex CNA driver on the VMware ESX Server:
1. Download the driver RPM package from http://www.emulex.com. If the driver is an rpm package, complete the following instructions.

Note: Currently, the Emulex driver included in VMware ESX Server 3.5 update 2 does not support Emulex FCoE CNAs. Before installing the CNA driver downloaded in Step 1, the existing Emulex driver must be uninstalled, as explained in Step 2.

2. Uninstall the existing Emulex driver by completing the following steps:
a. Execute the command rpm -qa | grep lpfc and record the name of the existing driver package.
b. Execute the command rpm -e <existing driver>, where <existing driver> is the existing driver package from Step a.
c. Reboot the server.
3. Install the Emulex FCoE CNA driver that was downloaded in Step 1:
a. Execute the command rpm -Uvh --force <new driver RPM>, where <new driver RPM> is the driver package downloaded in Step 1.
b. Reboot the server.

Following the installation of the proper driver for the FCoE adapter, the Fibre Channel interface functions identically to that of a standard Emulex Fibre Channel HBA, as the LP21000/LP21002 simply encapsulates Fibre Channel traffic within Ethernet frames.

Install drivers for devices as part of a new ESX installation (for ESX only)

Note: This procedure
17. [Screen capture: SMC console tree showing the managed Symmetrix arrays (for example, 000194900098 Local, VMAX-1SE, Enginuity 5874.122) with their devices, meta devices, private devices, pools, replication views, and dynamic cache partitions, alongside an array properties summary listing model, microcode, cache, and device counts]
Figure 17 SMC interface example

Device masking using the SMC
You can mask VMAX or Symmetrix devices to an ESX host by using the SMC. The Device Masking and Mapping - Masking dialog box allows you to mask LUNs to the selected initiator (HBA or CNA) and refresh the VCMDB in one step, as shown in Figure 18.

[Screen capture: Device Masking and Mapping - Masking dialog, with director port, initiator, and host alias selections and the Available Devices list]
18. (ELN) at http://elabnavigator.EMC.com.

For VMware-specific documentation, such as the VMware ESX Server Release Notes, ESX Server Administration Guide, and the ESX Server Installation Guide, go to http://www.VMware.com/support.

For a list of supported guest operating systems, refer to the VMware Guest Operating System Installation Guide, located at http://www.VMware.com/pdf/GuestOS_guide.pdf.

Conventions used in this guide
EMC uses the following conventions for notes and cautions:

Note: A note presents information that is important, but not hazard-related.

IMPORTANT: An important notice contains information essential to operation of the software.

Typographical conventions
EMC uses the following type style conventions in this guide:

Bold: Used for names of interface elements, such as names of windows, dialog boxes, buttons, fields, tab names, key names, and menu paths (what the user specifically selects or clicks).
Italic: Used for full titles of publications referenced in text.
Courier: Used for system output (such as an error message or script), system code, pathnames, filenames, prompts, and syntax, as well as commands and options.
Courier bold: Used for user input.
Courier italic: Used for variables.
< >: Angle brackets enclose parameter or variable values supplied by the user.
[ ]: Square brackets enclose optional values.
|: Vertical bar indicates alternate selections.
19. Effectively, this enables new TDATs on all existing pool drives and adds the same number of data devices as the original pool size. TDATs must have the same protection and consist of the same emulation as those in the pool. For pool expansion, it is recommended to add TDATs from drives of the same RPM, using equal-size data drives and an equal amount of space from the underlying drives, spread across DAs and physical drives. It is preferable to allocate more TDATs to a large pool.

    symconfigure -sid <sid> -cmd "bind TDEV <devID>:<devID> to pool <poolNAME>;" commit -nop

    symconfigure -sid 098 -cmd "bind TDEV 18fe:18ff to pool TP;" commit -nop

    A Configuration Change operation is in progress. Please wait...

    Establishing a configuration change session...............Established
    Processing symmetrix 000194900098
    Performing Access checks..................................Allowed
    Checking Device Reservations..............................Allowed
    Locking devices...........................................Locked
    Committing configuration changes..........................Started
    Binding devices...........................................Done
    Committing configuration changes..........................Committed
    Terminating the configuration change session..............Done

    The configuration change session has successfully completed.

Verify the thin pool configuration
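A hedged verification step, assuming the SID 098 and pool TP used in the example above, would be:

    # Show pool state, enabled data devices, and allocated tracks
    symcfg -sid 098 show -pool TP -thin -detail
    # List thin devices and the pools they are bound to
    symcfg -sid 098 list -tdev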
20. [Screen capture: EMC V-Plex Management Console (v4.0), Provision Storage view for cluster-1 initiators, showing the Register Initiator dialog ("Enter name, host type and port WWN to register the undiscovered initiator") together with a table of registered initiators and their host types, node WWNs, and port WWNs]
21. initiator.

8. Add target addresses for the hardware initiator; both send targets and static targets discovery are supported.
a. Click the Dynamic Discovery tab to add send targets, and then click Add.
b. Enter the send targets server IP and click OK to add target information from a selected storage system.
Or:
a. Click the Static Discovery tab to add static targets, and then click Add.
b. Enter the send targets server IP, port, and iqn address, and click OK to add.
9. Click Close to close the iSCSI Initiator Properties page.

Once the iSCSI initiator ports on the ESX Server are configured, iSCSI storage must be presented to the ESX Server. Please refer to the latest EMC Support Matrix for EMC arrays that are currently supported through iSCSI attach to VMware ESX Server 3.x and 4.x.

VMAX or Symmetrix connectivity
The following bit settings are required on the director ports for ESX/ESXi operations:
Common serial number (C)
SCSI 3 (SC3) set enabled
Unique world wide name (UWN)
SPC-2 (Decal) (SPC2): the SPC-2 flag is required

The bit settings can be configured using the EMC Symmetrix Management Console or Solutions Enabler. Refer to VMAX and Symmetrix array configurations on page 94 for more details.

This section details the steps needed for:
Using Solutions Enabler on page 65
Using Symmetrix Management Console (SMC) on page 66

Using Solutions Enabler
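The guide's "Using Solutions Enabler" procedure is not reproduced in this excerpt. As a hedged sketch, enabling the director bits listed above on one FA port with symconfigure could look like the following; the SID and director/port are placeholders, and the flag names should be checked against your Solutions Enabler version:

    # Enable the required port flags on director 1C, port 0
    symconfigure -sid 098 -cmd "set port 1C:0 Common_Serial_Number=enable, SCSI_3=enable, Unique_WWN=enable, SPC2_Protocol_Version=enable;" commit -nop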
22. the correct SP and does not trespass at any time; trespass of the boot LUN would result in a system crash.

When attaching the VMware host to the VNX series and CLARiiON storage array, use the adapter in the lowest-numbered PCI slot in the server. To keep the configuration and installation simple, it is recommended that only that HBA be zoned to the array.

Ensure that the boot BIOS or firmware has been applied to the HBA and that the boot BIOS has been enabled on the HBA to be used for SAN boot.

Prior to the installation, the VMware ESX Server must have been registered on the array and assigned to a Storage Group. If using version v2.1.x or prior, the host's HBAs will need to be manually registered on the VNX series and CLARiiON storage systems.

The boot LUN chosen for the VMware ESX Server should have the lowest possible LUN ID, below 256.

Cautions and restrictions
At least one LUN must be bound to the host's Storage Group and owned by the SP connected to the HBA being used for the fabric boot. The lowest-numbered path to the boot LUN must be the active path.

For ESX 3.0 and earlier, since the lowest-numbered path to the boot LUN must be active, attaching to the VNX series and CLARiiON system via direct connect or FC-AL is not supported. Only FC-SW environments are supported when booting from the SAN.

It is required that the boot LUN be assigned Host LUN ID 0
23. [Screen capture: Target Devices list with Add/Remove controls in the Device Masking and Mapping - Masking dialog]
Figure 18 Device Masking and Mapping Masking dialog box

If Dynamic Addressing is enabled, you can manually assign the host LUN addresses for your newly added LUNs on demand by clicking Set Dynamic LUN Address. Otherwise, SMC selects the next available host LUN address for the devices.

Figure 19 shows the Masking Set LUN Addresses dialog box.

[Screen capture: Masking Set LUN Addresses dialog. Generated LUNs are shown in blue (double-click to edit); LUNs already mapped are shown in black. Columns list device mappings, capacity (MB), LUN number, and host LUN address, with a Start Host LUN Address field and a Next Available button]
Figure 19 Masking Set LUN Addresses dialog box
24. Click OK to apply the changes.

To remove the XtremIO VAAI module from the ESX host (only in vSphere 4.1), complete the following steps:
1. Check the module name to locate the VAAI module for XtremIO, using the following command:
    esxupdate --vib-view query | grep xio
2. Remove the VAAI module for XtremIO, using the following command:
    esxupdate remove -b cross_xio-vaaip-410-1.0.000020 --maintenancemode

Configuring Fibre Channel HBA
Prerequisites
When using Fibre Channel with XtremIO, the following FC Host Bus Adapter (HBA) issues should be addressed for optimal performance.

To install one or more EMC-approved HBAs on an ESX host, follow the procedures in one of these documents, according to the FC HBA type:

For QLogic and Emulex HBAs: Typically, the driver for these HBAs is preloaded with ESX; therefore, no further action is required. Refer to the vSphere and HBA documentation for further details.

For Cisco UCS FNIC HBAs (vSphere 5.x only): Refer to the Virtual Interface Card Drivers section in the Cisco UCS Manager Install and Upgrade Guides for complete driver installation instructions, available at http://www.cisco.com/en/US/partner/products/ps10281/prod_installation_guides_list.html.

Queue depth and I/O throttle

Note: Changing the HBA queue depth is designed for advanced users. Increasing queue depth
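As a hedged illustration of a queue-depth change, the following uses the QLogic module on ESXi 5.x; the module name and value are examples, not a recommendation from this guide, and Emulex adapters use the lpfc module with its own parameter name:

    # Raise the QLogic HBA LUN queue depth, then verify; a reboot is required
    esxcli system module parameters set -m qla2xxx -p ql2xmaxqdepth=256
    esxcli system module parameters list -m qla2xxx | grep ql2xmaxqdepth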
25. [Screen capture: initiator properties pane showing HBA and port details, FCID, and the last active login timestamp]

VNX series and CLARiiON connectivity
For a failover-enabled environment, the required settings are as follows:

Initiator Type: CLARiiON Open
ArrayCommPath: Enabled
FailOverMode: 1
Unit Serial Number: Array

Note: The failover functionality referred to here is the native failover functionality incorporated into the VMkernel.

In order to manually register the host, you must first identify the IQNs of the initiators (iSCSI name and IQN refer to the same thing). The IQN is used when registering the VMware ESX Server host onto the VNX series and CLARiiON systems. The IQN can be found from the VI/vSphere Client: Inventory panel > Configuration tab > Storage Adapters.

To manually add the VMware ESX host to a VNX series or CLARiiON system, complete the following steps:
1. Log in to the management interface from a browser.
Note: Java is required.
2. Select the Storage tab, right-click the appropriate array, and select the Connectivity Status option, as shown in the next figure.

[Screen capture: Enterprise Storage window, Storage tab, showing the array tree under the Local Domain node]
26. EMC.com for information about how to mask a LUN.

[Screen capture: Symmetrix Masking dialog for director port SE-2C:0, listing iSCSI initiator records (for example, iqn.1998-01.com.vmware:... and iqn.2000-04.com.qlogic:...) together with their masking attributes: initiator alias, FCID lockdown (N/A), LUN offset and offset value, heterogeneous host setting, host type, port flag overrides, dynamic addressing, and the host alias and IP address fields]
27. FCoE initiator configurations

This section provides information on configuring the Emulex, QLogic, Brocade, and Cisco FCoE converged network adapters (CNAs) with the VMware ESX Server.

FCoE adapters represent a method to converge both Fibre Channel and Ethernet traffic over a single physical link to a switch infrastructure that manages both storage (SAN) and network (IP) connectivity within a single unit.

This section contains the following information:
Configuring Emulex FCoE CNAs with VMware ESX Server on page 71
Configuring QLogic FCoE CNAs with VMware ESX Server on page 80
Configuring Brocade FCoE CNAs with VMware ESX Server on page 85
Cisco Unified Computing System with FCoE on page 90
Configuring FCoE for Intel Ethernet Server Adapter with VMware ESX server on page 92

Configuring Emulex FCoE CNAs with VMware ESX Server
This section provides information on configuring Emulex FCoE CNAs with the VMware ESX Server, including:
Installing the CNA on page 71
Updating the Emulex CNA firmware and boot BIOS on page 72
Updating the Emulex CEE/Menlo firmware on page 75
Installing the Emulex CNA driver on page 76

Installing the CNA
To install the CNA:
1. Install VMware ESX Server 3.5 update 2 or later on your server. For instructions on installing the operating system, refer to the relevant documentation at http://www.VMware.com/support/pubs.
2. Power off the host system.
28. ESX host in the VNX series and CLARiiON environment

This section provides the following information about the ESX host in the VNX series and CLARiiON environment:
VNX series and CLARiiON failover modes on page 106
Adding the VMware ESX server host to a storage group on page 109
Performing a VNX series and CLARiiON NDU with VMware ESX server hosts on page 110
Manual trespass on VNX series and CLARiiON systems to recover the original path on page 117

VNX series and CLARiiON failover modes
ALUA failover mode behavior
VNX series and CLARiiON systems with Asymmetric Logical Unit Access (ALUA) mode are supported beginning with ESX 4.0. For VNX series and CLARiiON systems, the default failover mode for an ESX host is failover mode 1, with the storage type Active/Passive. When the ESX host is registered in failover mode 4, ALUA mode is enabled. In that case, the VNX series and CLARiiON systems behave similarly to an Active/Active array. By default, the ESX server applies the Fixed policy to VNX series and CLARiiON devices in a VMware native multipathing environment.

Note the following:
ESX 3.5 and earlier do not support ALUA mode.
ESX 4.0 and later with VMware native multipathing (Fixed policy) supports EMC FLARE 04.28.00.5.704 or later in ALUA mode.
ESX 4.0 and later with PowerPath/VE supports VNX block and FLARE 03.26 or later in ALUA mode.
VNX series supports ALUA.

Starting with FLARE release 03.26, EMC introduced the Asymmetric Active/Active feature for VNX series and CLARiiON systems. This change
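As a hedged check, not part of this guide's procedure, the ESX 4.x command line can confirm which Storage Array Type plug-in (SATP) claimed the array's devices, and therefore whether ALUA is in effect:

    # List the available SATPs and their default path selection policies
    esxcli nmp satp list
    # ALUA-registered VNX/CLARiiON devices typically report an ALUA SATP
    # (for example, VMW_SATP_ALUA_CX)
    esxcli nmp device list | grep -i "storage array type:"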
29. [Screen capture: LUN Properties dialog, General tab, showing the LUN name, LUN ID, unique ID, current state, RAID type, drive type, capacity, rebuild and verify priorities, and the Current Owner / Default Owner fields]
Figure 29 View LUN properties

As can be seen in the General tab above, LUN 196 is originally owned by SP A. In this instance, LUN 196 has been trespassed to SP B.

6. For the host in this example, both LUNs 192 and 196 are owned by SP A, while LUNs 193 and 197 are owned by SP B. A trespass has occurred and currently all four LUNs are located on SP B. In order to trespass LUNs 192 and 196 back to SP A, right-click those LUNs. In the menu that is presented (see Figure 30 for an example), select the option to Trespass.

[Screen capture: LUN right-click menu with the options Create Clone Group, Add to Storage Group, Update Host Information, Expand, Delete, Migrate, Trespass, Select Folders, MirrorView, SAN Copy, Snapshot, Analyzer, and Properties]
Figure 30 Menu

7. With the appropriate LUNs highlighted, select the option to
30. [Screen capture: Storage Group Properties, LUNs tab, listing LUN 57 (20.000 GB, SATA II, host ID 2), LUN 256 (3.583 GB, SATA II, host ID 10), LUN 258 (3.583 GB, SATA II, host ID 11), and LUN 260 (3.583 GB, SATA II, host ID 12), with the warning: "HLU numbers higher than 255 may result in application outages if not supported by the host failover software."]

6. Rescan the HBA to make sure the LUN has been presented to the ESX Server.
7. In the Virtual Center, populate the thin LUN to one Linux VM as a pass-through RDM LUN.
8. In the console mode of the Linux VM, attach the thin LUN, partition it, and make a file system on it. Issue the following commands:

    echo "- - -" > /sys/class/scsi_host/host0/scan
    fdisk /dev/sdb        (assume the LUN is sdb)
    mkfs.ext2 /dev/sdb1

9. Mount the LUN in the VM. Issue the following commands:

    mkdir /mnt/test
    mount /dev/sdb1 /mnt/test

10. In the Linux VM, use the df -h command to show the volume usage:

    Filesystem                       Size  Used Avail Use%  Mounted on
    /dev/mapper/VolGroup00-LogVol00  5.8G  3.0G  2.6G  54%  /
    /dev/sda1                         99M   15M   80M  16%  /boot
    tmpfs                            188M     0  188M   0%  /dev/shm
    /dev/sdb1                       1008G  200M  957G   1%  /mnt/test

Using the virtually provisioned device
Once the thin pool is full, ESX stops writing to the device, and the array prompts a critical error message when the pool threshold is reached. During the process, you need to monitor /var/log/messages
31. [Screen capture: SMC properties pane listing the host initiator logins on directors FA-2A and FA-14A, with their WWPN identifiers]
Figure 3 SMC Properties verification screen

Another tool that can be used to monitor the VMAX or Symmetrix is Solutions Enabler, which, unlike the SMC, uses a command line interface (CLI). Once the tool is installed on the ESX Server, the following command can be used to verify that the zoning was successful:

    symmask -sid <last 3 digits of the Symm frame> list logins -dir <dir> -p <port>

In Figure 4, the initiators that have been zoned are listed. Under the On Fabric column heading, you must see Yes listed beside your initiators to verify zoning.

[Screen capture: symmask list logins output, showing originator port WWNs with their director/port identifiers and the Logged In and On Fabric status columns]
Figure 4 Solutions Enabler CLI verification screen (verification of zoning from the initiator side)
32. Round Robin as default pathing policy for all XtremIO volumes for vSphere 4.1, using the ESX command line

To configure vSphere NMP Round Robin as the default pathing policy for all XtremIO volumes for vSphere 4.1 using the ESX command line, complete the following steps.

Note: It is recommended to apply this method before presenting volumes to the ESX host.

1. Open an SSH session to the host as root.
2. Use one of the following commands to configure the default pathing policy for newly defined XtremIO volumes to round robin with path switching after each I/O packet:

vSphere 5.x:
    esxcli storage nmp satp rule add -c tpgs_off -e "XtremIO Active/Active" -M XtremApp -P VMW_PSP_RR -O iops=1 -s VMW_SATP_DEFAULT_AA -t vendor -V XtremIO

vSphere 4.1:
    esxcli nmp satp setdefaultpsp --psp VMW_PSP_RR --satp <your SATP name>

For vSphere 5.x, this command also sets the vSphere NMP round robin path switching frequency for newly defined XtremIO volumes to 1. For vSphere 4.1, this command configures vSphere NMP round robin as the default pathing policy for all new volumes presented to the ESX host; therefore, it is recommended to use it only when XtremIO storage alone is presented to the host.

For more details, refer to VMware KB article 1017760 on the VMware website at http://kb.vmware.com/kb/1017760.
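A hedged way to confirm that the new rule and the per-device policy took effect on vSphere 5.x (output formats vary by build):

    # The XtremIO vendor rule should now appear in the SATP rule list
    esxcli storage nmp satp rule list | grep -i xtremio
    # Each XtremIO device should report VMW_PSP_RR as its path selection policy
    esxcli storage nmp device list | grep -i -A3 xtremio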
33. VMAX and Symmetrix connectivity

Note: Refer to the EMC Support Matrix or contact your EMC representative for the latest information on qualified hosts, host bus adapters, and connectivity equipment.

The VMAX and Symmetrix system is configured by an EMC Customer Engineer via the VMAX and Symmetrix service processor. The EMC Customer Engineer (CE) should contact the EMC Configuration Specialist for updated online information. This information is necessary to configure the VMAX or Symmetrix system to support the customer's host environment.

After the EMC CE has assigned target IDs and LUNs and brought the VMAX or Symmetrix channel and disk directors online, reboot the network operating systems and go into the configuration program.

Note: All qualified HBAs are listed in the EMC Support Matrix.

Note that the VMware ESX Server installer will recognize LUNs of 25 MB or less as management LUNs. This includes any gatekeepers assigned to the VMware host via Solutions Enabler.

Two possible configuration scenarios are described in the following two examples:
Example 1 on page 35
Example 2 on page 36

Example 1
In this example, as shown in Figure 1 on page 36, one host with two HBAs is attached to one VMAX or Symmetrix array using two separate switches. The zones should be composed of a single initiator and a single target, so they would be created with one HBA and one FA port
34. Unified Computing System (UCS) 90

V
vCenter Server 23
Virtual LUN architecture, features 191
Virtual Machine File System (VMFS) 193, 221
Virtual Provisioning 186
Virtual Provisioning with Symmetrix 208
Virtual Provisioning with VNX, CLARiiON 195
VMFS datastore 222
VMFS (Virtual Machine File System) 193
VMkernel 19
VMware ESX HW iSCSI, configuring 63
VMware ESX Server 19
VMware ESX Server utilities and functions 21
VMware ESX SW iSCSI initiator ports, configuring 51
VMware Web UI 23
VNX, CLARiiON connectivity 40, 67
VNX, CLARiiON NDU 110
VNX, CLARiiON storage arrays, booting from 234
Volume Logix 232
vSphere 18

W
World Wide Port Name (WWPN) 94
35. Virtual Machine File System with thin LUN

This section provides information on the VMware Virtual Machine File System (VMFS) with thin LUNs, for ESX v3.x and v4.x.

ESX 3.x
The VMFS has some useful characteristics when viewed in a Virtual Provisioning context. First, a minimal number of thin extents is allocated from the thin pool when a VMFS is created on virtually provisioned devices. The amount of storage required to store the file system metadata is a function of the size of the thin device. VMFS does not write all of its metadata to disk on creation; it formats and uses the reserved area for metadata as requirements arise.

Using the zeroedthick allocation method in VMware, the storage required for the virtual disks is reserved in the datastore, but the VMware kernel does not initialize all the blocks. The blocks are initialized by the guest operating system as write activities to previously uninitialized blocks are performed.

The VMware kernel provides a number of allocation mechanisms for creating virtual disks, and not all of them are Virtual Provisioning friendly. The eagerzeroedthick format is not ideal for use with virtually provisioned devices. The thin allocation policy is somewhat Virtual Provisioning friendly; however, the risk of exceeding the thin pool capacity is much higher when virtual disks are allocated using this policy, since the oversubscription to physical storage occurs at two independent layers that d
36. are created with one HBA and one SP port. For instance:
HBA0 is zoned to one SPA port on each of the four systems.
HBA1 is zoned to one SPB port on each of the four systems.

[Diagram]
Figure 6 One host, two switches, and four VNX series or CLARiiON systems

Note: All qualified HBAs are listed in the EMC Support Matrix.

When assigning VNX series or CLARiiON LUNs to a VMware ESX Server host, the LUNs may be assigned to only one SP, or their assignments may be split between the two SPs. Either configuration is valid.

Identifying the WWNs of the HBAs
Consider the following methods.

The recommended method to discover WWNs is to run the wwpn.pl command for either QLogic or Emulex HBAs. For each vmhba instance, wwpn.pl will provide the corresponding QLogic or Emulex WWPNs. For example:

    [root@l82bi199 /usr/sbin]# wwpn.pl
    vmhba2: 210000e08b0910a7 (QLogic 6.1.0)
    vmhba3: 210100e08b2910a7 (QLogic 6.1.1)

An alternate method to obtain Emulex HBA initiator and target information is to refer to /proc/scsi/lpfcdd/N, where N indicates the file for each adapter in the system when the driver is loaded. By grep'ing the file(s), the necessary information to register the host will be reported. For example:

    grep DID /proc/scsi/lpfcdd/1

produces output similar to the following for the first Emulex HBA
37. at http://www.VMware.com/support/pubs.
2. Power off the host system.
3. With the host system powered off, install the CNA and make the necessary cable connections, as instructed in the accompanying CNA documentation or on the Emulex website at www.emulex.com. The card installs into a single PCI bus slot.
4. Reapply power and allow the system to boot normally.

Updating the Emulex CNA firmware and boot BIOS
Using HBAnyware
To update the Emulex CNA firmware and boot BIOS for LightPulse adapters, use HBAnyware with the following steps:
1. If you do not already have the Emulex HBAnyware Applications kit installed on the ESX Server, download the latest available Emulex HBAnyware Applications kit that is compatible with the ESX Server version that you are running from www.emulex.com.
2. Install the HBAnyware CLI on the ESX Server:
a. Log in as root at the ESX Server console, or in a secure shell session to the ESX Server.
b. Copy the elxVMwarecorekit-<kit version>.rpm file to a directory on the install machine.
c. Using the cd command, change to the directory where you copied the rpm file.
d. Install the rpm. Type:
    rpm -i elxVMwarecorekit-<kit version>.rpm
The rpm contents are installed in /usr/sbin/hbanyware. The hbacmd utility is also located in this directory.
3. Copy the appropriate firmware and BIOS files from the CD-ROM or the Emulex website to the appropriate
38. be used for boot.

Cautions
Prior to the installation on a VMAX or Symmetrix LUN, the Linux host HBA must have successfully logged into the array. Using Solutions Enabler from another host, at least one LUN must be assigned to the host. During the installation procedure, it is recommended, but not required, that only one LUN be allocated to the host for ease of use. After the installation has completed, additional LUNs can be assigned to the host.

For ESX 3.x and later, the installation is based on a modified Red Hat version 2.4.21 or newer kernel; hence, the commands bootfromsan or bootfromsan-text are not necessary.

IMPORTANT: If VMware loses all paths to the array for a long enough period, the disks disappear from the system. A hard reboot is required to bring the system back to a usable state.

Any of these events could crash a system booting from a VMAX or Symmetrix storage array:
Lost connection to the VMAX or Symmetrix system (pulled or damaged cable connection)
VMAX or Symmetrix service and upgrade procedures, such as online VMAX or Symmetrix microcode upgrades and/or configuration changes
VMAX or Symmetrix director failures, including failed lasers
VMAX or Symmetrix system power failure
Storage area network service or upgrade procedures, such as firmware upgrades or hardware replacements

Restrictions
For ESX 3.x and later
39. blade to an attached network or SAN. These CNAs are based on currently supported PCI Express CNAs that EMC supports in standard servers, and they use the same drivers, firmware, and BIOS to provide connectivity to both EMC Fibre Channel and iSCSI storage array ports through the UCS Fabric Extenders and Fabric Interconnect switches, which provide both 10 Gb Ethernet and/or Fibre Channel.

[Diagram: Cisco UCS Manager, Cisco UCS 2100 Series Fabric Extenders (up to 2 in each chassis), Cisco UCS B-Series blades (2 types), and Cisco UCS network adapters optimized for efficiency and performance, compatibility, and virtualization]
Figure 16 Cisco Unified Computing System example

In-depth information about UCS and how it utilizes FCoE technology for its blade servers can be found in the Cisco UCS documentation at http://www.cisco.com.

The UCS Fabric Interconnect switches are supported with the same supported configurations as the Cisco Nexus 5020. Refer to the Fibre Channel over Ethernet (FCoE) TechBook, located on the E-Lab Interoperability Navigator (ELN) at http://elabnavigator.EMC.com under the Topology Resource Center tab, for information on supported features and topologies.

Configuring FCoE for Intel Ethernet Server Adapter with VMware ESX server
This section provides information for configuring FCoE for the Intel Ethernet Server Adapter with VMware ESX server
40. Post-configuration steps
When the host configuration is completed, you can use the XtremIO storage from the host. Refer to Managing Volumes for further information on creating, presenting, and managing volumes that can be accessed from the host via either GUI or CLI.

EMC Virtual Storage Integrator (VSI) Unified Storage Management version 5.6.1 can be used to provision Virtual Machine File System (VMFS) datastores and Raw Device Mapping volumes on XtremIO from within vSphere Client. Furthermore, EMC VSI Storage Viewer version 5.6.1 extends the vSphere Client to facilitate the discovery and identification of XtremIO storage devices allocated to VMware ESX/ESXi hosts and virtual machines.

For further information on using these two vSphere Client plug-ins, refer to the VSI Unified Storage Management product guide and the VSI Storage Viewer product guide.

File system and application requirements
This section contains information on the following requirements:
Disk formatting on page 162
Virtual Machine formatting on page 163
Space reclamation on page 164
Out-of-space VM suspend and notification with Thin Provisioning (TPSTUN) on page 165

Disk formatting
When creating volumes in XtremIO for a vSphere host, consider the following:
Disk logical block size: The only logical block (LB) size supported by vSphere for presenting ESX volumes is 512 bytes.

[Screen capture: Add New Volumes dialog, with instructions to fill in the new volumes table by creating each]
41. --device <device_NAA>

To set the I/O operations limit:
    esxcli storage nmp psp roundrobin deviceconfig set --device <device_NAA> --iops 1 --type iops

For ESXi/ESX 4.x:
To check the I/O operations limit:
    esxcli nmp roundrobin getconfig --device <device_NAA>
To set the I/O operations limit:
    esxcli nmp roundrobin setconfig --device <device_NAA> --iops 1 --type iops

For more detailed information, refer to the VMware Knowledge Base article "EMC VMAX and DMX Symmetrix Storage Array recommendations for optimal performance on VMware ESXi/ESX (2072070)", located at http://kb.vmware.com/kb/2072070.

For more information on EMC recommended settings, refer to the following documentation, located at http://support.emc.com:
Tuning the VMware Native Multipathing Performance of VMware vSphere Hosts Connected to EMC Symmetrix Storage (EMC White Paper)
EMC Knowledgebase article emc7440: How to properly tune an ESX server that is connected to a Symmetrix

For related information, refer to the following VMware Knowledgebase article: "Changing the default pathing policy for new/existing LUNs (1017760)", located at http://kb.vmware.com/kb/1017760.

ESX host in the VNX series and CLARiiON environment
This section provides the following information about the
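Where many devices need the same setting, a hedged loop for ESXi 5.x saves repetition. This is a sketch, not from the guide; review the device list first, since setting round robin parameters on devices not claimed by VMW_PSP_RR will fail:

    # Apply iops=1 to every NAA device on the host (filter the list as needed)
    for dev in $(esxcli storage nmp device list | awk '/^naa\./ {print $1}'); do
        esxcli storage nmp psp roundrobin deviceconfig set --device $dev --iops 1 --type iops
    done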
42. format, instead of the MSDOS-style partition label. This change supports ESXi installation on disks larger than 2 TB.

Newly installed vSphere 5 hosts use VMFS5, an updated version of the VMware File System for vSphere 5. Unlike earlier versions, ESXi 5 does not create VMFS partitions on second and successive disks. Upgraded systems do not use GUID Partition Tables (GPT), but retain the older MSDOS-based partition label.

Note: Partitioning for hosts that are upgraded to ESXi 5 differs significantly from partitioning for new installations of ESXi 5. Refer to the vSphere Upgrade documentation at http://www.VMware.com/support/pubs.

Initiator setting and installation
Host Bus Adapter (HBA)
VMware supports QLogic, Emulex, and Brocade based Fibre Channel host bus adapters with EMC storage. Refer to the EMC Support Matrix for the most up-to-date supported HBA models.

Both the QLogic iSCSI hardware initiator and the generic network interface card (NIC) iSCSI software initiator are supported with EMC iSCSI storage arrays. Refer to the Linux iSCSI Connectivity section of the EMC Support Matrix for supported configurations and required driver revision levels. An EMC-published QLogic iSCSI guide is available on the QLogic website. Native iSCSI release notes are available on http://support.EMC.com.

This section provides information on the following:
Host Bus Adapter (HBA)
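A hedged way to see which label a given disk carries on an ESXi 5 host; the device name is a placeholder:

    # The first line of output reports the partition table type:
    # "gpt" on new ESXi 5 installs, "msdos" on upgraded hosts
    partedUtil getptbl /vmfs/devices/disks/<naa.id>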
43. has changed since the 3.5 version of the driver CD. In ESX 3.5, the driver CD can be used as the boot CD. For ESX 4.0 and later, you need the ESX installation DVD to begin.

For ESX v3.5
Use this CD as a boot CD only when the new driver must enable the target device on which ESX will be installed. This procedure can only be used with ESX 3.x.
1. Place the driver CD in the CD-ROM drive of the host machine.
2. Start the host machine.
3. When prompted for an upgrade or installation, press Enter for graphical mode.
4. Choose the language you prefer.
5. Select a keyboard type.
6. When you are prompted to swap the driver CD with the ESX 3.5 installation CD, insert the ESX 3.5 installation CD and continue with the ESX installation.

For ESX v4.x
1. Place the ESX installation CD in the CD-ROM drive of the host machine.
2. Start the host machine.
3. Accept the terms of the license agreement.
4. Select a keyboard type.
5. When prompted for Custom Drivers, select Yes to install custom drivers.
6. The installer prompts you to insert the media containing the custom drivers. After you add the custom drivers to the list, the installer prompts you to reinsert the ESX installation DVD and continue with the installation.

Update or add drivers on existing ESX installations using esxupdate (for ESX only)
To update or add drivers on existing ESX installations using esxupdate:
1. Power on the host.
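The remainder of the esxupdate procedure is not reproduced in this excerpt. As a hedged example of the kind of invocation involved (the bundle path is a placeholder):

    # Apply a downloaded driver offline bundle on a running ESX 4.x host
    esxupdate --bundle=/tmp/<driver-offline-bundle>.zip update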
maximums for VMFS can be found in the vSphere documentation, available at http://www.vmware.com.

Raw Disk Usage and Mapping

This appendix provides information about raw disk usage and mapping:

- Raw disk usage and caveats, page 228
- Raw Device Mappings (RDM), page 229

Raw disk usage and caveats

Raw disk (non-RDM) is used only in ESX Server versions prior to 2.5. For ESX Server 2.5 and greater, please use RDMs for direct access to SAN LUNs, and skip to Raw Device Mappings (RDM) on page 229.

There are occasions where the virtual machine configuration file will need to be manually edited when using directly accessed raw devices (not RDMs). If an event occurs that forces a change in the paths to the devices, access to a raw disk device may be lost due to a dead path. In such a case, the virtual machine configuration file will need to be modified to reflect a usable path for the raw device. For instance, if all the LUNs referenced by the virtual machine are still accessible by the VMware ESX Server but the original paths have changed due to a problem such as a lost cable, the virtual machine configuration file will need to be edited.

Example: A
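To make the caveat concrete, the following is a hypothetical fragment of a virtual machine configuration (.vmx) file from the ESX 2.x era. The key names and the vmhba path shown are illustrative assumptions (the exact keys varied across ESX 2.x releases); the point is that the raw device is referenced by a physical path of the form vmhba<adapter>:<target>:<LUN>:<partition>, which must be hand-edited if the path to the LUN changes.

scsi0:1.present = "TRUE"
scsi0:1.name = "vmhba1:0:3:0"      # physical path; becomes stale if the path dies
scsi0:1.mode = "persistent"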
module. The PowerPath/VE driver (the emcp module) resides in the local ESX host's kernel, so that multipathing and load-balancing functions are transparent to VMware and to the guest operating systems running on VMware.

The rpowermt CIM client enables communication between the VMware ESX host and the rpowermt host for the management of PowerPath/VE using the remote CLI, called rpowermt. The remote PowerPath CLI (rpowermt) server is a machine on which the PowerPath remote multipathing CLI and licenses are installed, allowing you to configure and manage PowerPath/VE for the ESX host and to carry out licensing activities remotely using powermt commands. Electronic License Management (ELM) is used for PowerPath/VE in VMware ESX 4.x and ESXi 5.

PowerPath/VE supports both VMware ESX and ESXi hosts. It works with FC HBAs, software iSCSI initiators, and hardware iSCSI initiators.

Supported storage types

Supported EMC arrays include VMAX, Symmetrix, VNX series, CLARiiON, Celerra, and VPLEX. PowerPath/VE supports three storage types:

- Active/Active: VMAX or Symmetrix and supported third-party array systems
- Active/Passive: VNX series and CLARiiON
- ALUA:
  - VNX series supports ALUA.
  - CLARiiON supports ALUA mode only with FLARE 26 and later.
  - PowerPath/VE supports ALUA on CLARiiON with all FLARE versions th
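As a minimal sketch of remote management from the rpowermt server, the following command asks a managed ESX host for the state of all PowerPath-managed devices and their paths; the hostname is a hypothetical example.

rpowermt display dev=all host=esx01.example.com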
native round robin path selection policy on XtremIO volumes presented to the ESX host. Set the vSphere NMP round robin path switching frequency for XtremIO volumes from the default value (1000 I/O packets) to 1. These settings ensure optimal distribution and availability of load between I/O paths to the XtremIO storage.

Note: Use the ESX command line to adjust the path switching frequency of vSphere NMP Round Robin.

To set the vSphere NMP Round Robin configuration, select one of the following options:

- Per volume, using vSphere Client (for each host where the volume is presented)
- Per volume, using the ESX command line (for each host where the volume is presented)
- Per host, for all XtremIO volumes presented to the host, using the ESX command line

The following procedures detail each of these methods.

Configuring vSphere NMP Round Robin on an XtremIO volume in an ESX host using vSphere Client

To configure vSphere NMP Round Robin on an XtremIO volume in an ESX host using vSphere Client:

1. Launch the vSphere Client and navigate to Inventory > Hosts and Clusters.
2. In the Inventory, locate the ESX host.
3. Click the ESX host icon in the left pane and click Configuration.
4. From the Hardware section, click Storage Adapters.
5. From the Storage Adapters list, click the storage adapter through which the XtremIO volume is presented.
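For the per-host option, the usual approach is to add an NMP SATP claim rule so that any XtremIO volume presented later is automatically claimed with Round Robin and iops=1. The following is a sketch only; the vendor string XtremIO and model string XtremApp are the values commonly documented for these arrays, but verify the exact rule against the current XtremIO Host Configuration Guide before applying it.

esxcli storage nmp satp rule add -c tpgs_off -e "XtremIO Active/Active" \
   -M XtremApp -P VMW_PSP_RR -O iops=1 -s VMW_SATP_DEFAULT_AA -t vendor -V XtremIO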
next window shown by the Wizard (Figure 25 on page 114) allows a change to the default 360-second delay. This value may be changed as indicated in the window.

Note: Reducing the value to one that is too low may incur data loss.

[Figure 25 shows the Non-Disruptive Upgrade Delay window of the Storage System Software Installation wizard. The dialog explains that the non-disruptive upgrade delay gives attached hosts enough time, after SP B has finished rebooting, to rescan and mark paths to SP B available before SP A reboots; for hosts running Veritas Volume Manager (DMP), enter the maximum path scan interval of the attached hosts. The minimum allowed delay is 360 seconds and the maximum allowed is 1140 seconds. The dialog cautions that if this information is incorrect, you may lose access to your data for the duration of the software installation.]

Figure 25 Non-Disruptive Upgrade Delay window

During an NDU, SP B is upgraded first and rebooted. After SP B has finished rebooting, the Wizard provides a built-in delay of 360 seconds. This delay should provide enough time for the hosts attached to the array to rescan the SCSI bus and mark the paths to SP B available before rebo
patch an ESX/ESXi host from the command line, you must have access to a machine on which you can run the VMware vSphere Command-Line Interface (vSphere CLI). You can install the vSphere CLI on your Microsoft Windows or Linux system, or import the VMware vSphere Management Assistant (vMA) virtual appliance onto your ESX/ESXi host.

To update or add drivers on existing ESX and ESXi installations using the vihostupdate command:

1. Power on the ESX or ESXi host.
2. Place the driver CD in the CD-ROM drive of the host where either the vSphere CLI package is installed or vMA is hosted.
3. Mount the driver CD.
4. Navigate to <cd mount point>/offline-bundle/ and locate the <offline-bundle>.zip file.
5. Run the vihostupdate command to install drivers using the offline bundle:

vihostupdate --server <test machine address> --install --bundle <offline-bundle>.zip

For more details on the vihostupdate command, see the vSphere Command-Line Interface Installation and Reference Guide, located at http://www.vmware.com. For ESX 3.5 U2 and later, see the Remote Command-Line Interface Installation and Reference Guide, located at http://www.vmware.com.

The installation of the 2.0 driver will also install or update the firmware on the CNA. Following the installation of the proper driver for the FCoE adapter, it functions identically to a standard Brocade Fibre Channel H
platform. For more information on vSphere, refer to VMware vSphere on page 18.

Upon the release of vSphere 4.0, VMware VirtualCenter was renamed vCenter Server. VMware vCenter Server is a scalable and extensible platform for managing a virtualized environment. vCenter Server is a Windows service running automatically in the background, monitoring and managing activities even when VI/vSphere Clients are disconnected or not in use. VirtualCenter can manage more than one host, and it must be accessible by each host and by the machines running the Virtual Infrastructure/vSphere Client. Generally, a new version of VirtualCenter/vCenter Server is compatible with earlier host versions, but the reverse is not true. For the details of ESX, vCenter Server, and vSphere Client version compatibility, refer to the vSphere Compatibility Matrixes, available at http://www.vmware.com.

Connecting EMC storage with ESX/ESXi Server

To add or remove EMC storage devices to or from an ESX/ESXi Server, complete the following steps (a hedged rescan example follows this list):

1. Modify the EMC storage array configuration appropriately using storage management tools. Applicable storage configurations include LUN masking, and the creation and assignment of LUNs and metaLUNs to the Fibre Channel ports that are used by the VMware ESX/ESXi Server. Storage management tools include the GUI-based management interface for EMC VNX
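After the array-side configuration changes, the host must rescan its adapters to pick up added or removed devices. A minimal sketch from the service console (the adapter name is an example; repeat for each HBA, or use the vSphere Client's Configuration > Storage Adapters > Rescan All):

esxcfg-rescan vmhba1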
reduces the time and effort required to provision additional storage, and avoids provisioning storage that may not be needed.

This section provides common terminology and definitions for Symmetrix and thin provisioning, covering Thin LUN, Thin Pool, User Capacity, Pool Capacity, Total User Capacity, Allocated Capacity, Available Capacity, Consumed Capacity, Full Threshold, Threshold Alert, Oversubscribed Thin Pool, Thin Pool Subscription Ratio, and Thin Pool Allocation Ratio.

Thin LUN: A logical unit of storage where the physical space allocated on the storage system may be less than the user capacity seen by the host server.

Thin Pool: A group of disk drives used specifically by Thin LUNs. There may be 0 or more Thin Pools on a system. A disk may be a member of no more than one Thin Pool, and disks in a Thin Pool cannot also be in a RAID Group.

User Capacity: Also referred to as Reported Capacity. This is the size of the Thin LUN as it appears to the host. The term also applies to traditional LUNs, where Allocated Capacity equals User Capacity.

Pool Capacity: Pool capacity measured as raw capacity less overhead (RAID overhead and mapping overhead).

Total User Capacity: The total capacity seen by all hosts using a Thin Pool.

Allocated Capacity: The amount of actual thin pool space that is allocated for Thin LUNs.

Available Capacity: The amount of thin pool space remaining that can be allocated to Thin LUNs.

Consumed Capacity: The amount of the Thin Pool that has been reserved and/or allocated for all the Thin LUNs in the Pool.

Full Threshold: A percentage of pool capacity
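A short worked example helps fix the ratio terms, assuming the usual definitions (subscription ratio = total user capacity / pool capacity; allocation ratio = allocated capacity / pool capacity): a thin pool with 264 GB of pool capacity that backs thin LUNs totaling 300 GB of user capacity has a subscription ratio of 300 / 264, approximately 114 percent, so the pool is oversubscribed; if 112 GB of that pool is actually allocated to the thin LUNs, the allocation ratio is 112 / 264, approximately 42 percent.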
refer to the EMC VPLEX CLI Guide, available on http://support.emc.com.

For more information about logging volumes for VPLEX with GeoSynchrony v5.x, refer to the EMC VPLEX with GeoSynchrony 5.0 Product Guide, located on http://support.emc.com.

VMAX, Symmetrix, VNX series, and CLARiiON product documentation and installation procedures for connecting a VMAX, Symmetrix, VNX series, or CLARiiON storage system to a VPLEX instance are available on http://support.emc.com.

For VMAX or Symmetrix to VPLEX connections, configure the VMAX or Symmetrix Fibre Channel directors (FAs) as shown in Table 7 on page 132.

Note: EMC recommends that you download the latest information before installing any server.

Table 7 Required Symmetrix FA bit settings for connection to VPLEX

Set:
- SPC-2 Compliance (SPC2)
- SCSI-3 Compliance (SC3)
- Enable Point-to-Point (PP)
- Unique Worldwide Name (UWN)
- Common Serial Number (C)

Do not set:
- Disable Queue Reset on Unit Attention (D)
- AS/400 Ports Only (AS4)
- Avoid Reset Broadcast (ARB)
- Environment Reports to Host (E)
- Soft Reset (S)
- Open VMS (OVMS)
- Return Busy (B)
- Enable Sunapee (SCL)
- Sequent Bit (SEQ)
- Non-Participant (N)
- OS-2007 (OS compliance)

Optional:
- Linkspeed
- Enable Auto-Negotiation (EAN)
- VCM/ACLX (a)

a. Must be set if VPLEX is sharing VMAX or Symmetrix directors with hosts that require conflicting bit sett
remote host. Use the rpowermt display command to display the status of paths being used by PowerPath on the remote host.

Claim rules

Both NMP and PowerPath/VE can be loaded on the same ESX host, and both will manage storage visible to the host, but they cannot manage the same storage device at the same time. Claim rules are used to assign storage devices to either NMP or PowerPath/VE. By default, PowerPath/VE claims all supported devices; this can be changed by changing the claim rules.

Migration Considerations

Data migrations from one array to another can be done using SAN-based functionality (Open Replicator, EMC SAN Copy, EMC MirrorView, EMC SRDF, and so on) or using host-based functionality. This appendix provides pointers for a seamless VMware ESX Server host-based array data migration:

- ESX 3.0.x, page 182
- ESX 3.5, page 183
- ESX/ESXi 4.x and ESXi 5.x, page 184

ESX 3.0.x

For host-based migration on ESX 3.0.x, use the vmkfstools command. To perform the migration, use the following steps:

1. Present the source and target disk(s) to the ESX Server.
2. Create VMFS on the
server host to a storage group. After the host is registered, it can be added to a Storage Group so that it can be allocated devices and can access those devices.

1. Right-click on the array and select the option Create Storage Group. Provide it with a name, such as the name or IP address of the host, or a nickname.
2. After the Storage Group is created, LUNs and the host may be added to the Storage Group. Select the newly created Storage Group from the list and right-click on it.
3. Select the Properties option, select LUNs to add to the group, and select the host to own the group.

[The figure shows the Storage Group Properties window with the Hosts tab selected, listing the available hosts on the left and the hosts to be connected on the right.]
space is allocated and zeroed at virtual machine provisioning time, rather than scattered with each I/O sent by the virtual machine to the disk as with the Thick Provision Lazy Zeroed format. With the Thin Provision format, thin provisioning is managed in the XtremIO Storage Array rather than in the ESX host.

To format a virtual machine using Thick Provision Eager Zeroed, complete the following steps:

1. From vSphere Client, launch the Create New Virtual Machine wizard.
2. Proceed using the wizard up to the Create a Disk screen.
3. In the Create a Disk screen, select Thick Provision Eager Zeroed to format the virtual machine's virtual disk.

[The figure shows the Create a Disk screen of the Create New Virtual Machine wizard, with the virtual disk size field and the Thick Provision Lazy Zeroed, Thick Provision Eager Zeroed, and Thin Provision options; Thick Provision Eager Zeroed is selected.]

4. Proceed using the wizard to complete creating the virtual machine.

Space reclamation

Starting with vSphere 5.1, VAAI enables ESX to cause a storage array to reclaim deleted storage that was used by a VM. This is most beneficial when VMs are frequently deleted, such as in VDI. The XtremIO Storage Array fully supports the space reclamation feature. To reclaim stora
target disk(s) and assign an appropriate label. Use VC to create the VMFS, since it automatically aligns VMFS volumes.
3. Create directories on the target VMFS to match the source VMFS.
4. Copy the configuration files from the source VMFS to the target VMFS.
5. Power off the virtual machines.
6. Copy the virtual disks using the vmkfstools command (a sketch follows this section).
7. Remove access to the source disk(s).
8. Rescan the SAN fabric.
9. Unregister the virtual machines from the VC.
10. Delete the source VMFS information from the VC database.
11. Re-label the target VMFS to the original source VMFS label name.
12. Re-register and power on the virtual machines.

ESX 3.5

For host-based migration on ESX 3.5, use Storage VMotion. Using Storage VMotion, you can migrate a virtual machine and its disk files from one datastore to another while the virtual machine is running. You can choose to place the virtual machine and all its disks in a single location, or select separate locations for the virtual machine configuration file and each virtual disk. The virtual machine does not change execution host during a migration with Storage VMotion. For details, see the ESX 3.5 Basic System Administration manual, available at http://www.vmware.com.

ESX/ESXi 4.x and ESXi 5.x

Storage VMotion is also used for host-based migration in ESX/ESXi 4.x an
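For the vmkfstools copy step above, the usual form is the clone/import option. This is a minimal sketch; the datastore and VM names are hypothetical examples.

# Clone a virtual disk from the source VMFS to the target VMFS (VM powered off)
vmkfstools -i /vmfs/volumes/source_vmfs/vm1/vm1.vmdk /vmfs/volumes/target_vmfs/vm1/vm1.vmdk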
the thin devices are bound to a thin pool.

Note: The configuration of a thin metadevice is concatenated. Striping of thin metadevices does not create a performance improvement, since striping is already done on the data devices in a thin pool.

Unbind thin device (TDEV) and draining data devices (TDAT)

Prior to unbinding a TDEV, its status has to be set to Not Ready using the symdev command:

symdev -sid <sid> -range <TDEVId>:<TDEVId> not_ready -nop

The TDEV may then be unbound from the pool using the symconfigure command:

symconfigure -sid <sid> -cmd "unbind tdev <TDEVId>:<TDEVId> from pool <poolName>;" commit -nop

Further information

After unbinding the TDEV, the data devices may also be disabled. This results in data device draining, where the allocated tracks are distributed to the remaining TDATs.

For more in-depth discussions of Symmetrix Virtual Provisioning, refer to the following white papers, available on http://support.emc.com:

- Implementing EMC Symmetrix Virtual Provisioning with VMware vSphere
- Implementing Virtual Provisioning on EMC Symmetrix with VMware Infrastructure

Implementation considerations

When implementing Virtual Provisioning, it is important that realistic utilization objec
to continue. The success message displays once the expansion succeeds.

4. Refresh the display pane. The pool space is increased, and the critical error message disappears from the system dashboard.

[The figure shows the Pools pane after expansion, listing the pool's state, RAID type, drive type, user capacity, free capacity, allocated capacity, and subscription percentage.]

Virtual Provisioning with VMAX or Symmetrix

This section provides the following information:

- Main components, page 208
- Management tools, page 209
- Virtual Provisioning on EMC VMAX or Symmetrix, page 210

Main components

Symmetrix Virtual Provisioning consists of three main components, each explained further in this section:

- Thin device, page 208
- Data device, page 209
- Thin pool, page 209

Figure 45 shows an example of Virtual Provisioning on a VMAX or Symmetrix system.

Figure 45 Virtual Provisioning on a VMAX or Symmetrix system example

Thin device

Symmetrix Virtual Provisioning creates a new category of Symmetrix devices, called a thin device (TDEV). A TDEV consumes no physical disk space; however, it uses 143 KB of cache, plus an additional 8 KB of cache for every GB of reported device size.
to the VPLEX Procedure Generator, available on http://support.emc.com, to verify the initiator settings for storage arrays when configuring the arrays for use with VPLEX.

Host connectivity

For the most up-to-date information on qualified switches, hosts, host bus adapters, and software, always consult the EMC Support Matrix (ESM), available through the E-Lab Interoperability Navigator (ELN) at http://elabnavigator.emc.com, under the PDFs and Guides tab, or contact your EMC Customer Representative.

The latest EMC-approved HBA drivers and software are available for download at the following websites:

- http://www.emulex.com
- http://www.qlogic.com
- http://www.brocade.com

The EMC HBA installation and configuration guides are available at the EMC-specific download pages of these websites.

Note: Direct connect from a host bus adapter to a VPLEX engine is not supported.

Exporting virtual volumes to hosts

A virtual volume can be added to more than one storage view. All hosts included in the storage view will be able to access the virtual volume. The virtual volumes created on a device or distributed device are not visible to hosts or initiators until you add them to a storage view. For failover purposes, two or more front-end VPLEX ports can be grouped together to export the same volumes. A volume is exported to an initiator as a LUN on one or more front-end port WWNs. Typically, initiators are grouped into in
two paths each to the array totals four paths, and if using single initiator/single target zoning, there are four zones.

Note: It is recommended to follow the "rule of 17" for VMAX or Symmetrix connectivity: choose two FAs that add up to 17.

Once zoning is completed, use the Symmetrix Management Console (SMC) to verify that the initiators are visible on the console. SMC is a tool used to manage the VMAX or Symmetrix; it can be used to map and mask the VMAX or Symmetrix devices to the initiators. Figure 3 shows the screen used to verify that the initiators are visible. When an initiator is chosen in the pane on the left-hand side of this window, the details of the initiator appear on the right. You must ensure that the parameter On Fabric shows Yes; this confirms that the zoning was successful.

[Figure 3 shows the SMC interface, with the directors, FA ports, and attached initiators listed in the left pane and the selected initiator's properties, including the On Fabric parameter, shown on the right.]
update the host information.

7. Repeat Step 3 through Step 6 for every other initiator listed for the VMware ESX host.
8. To verify that the host has been correctly registered, select the Hosts tab in the Enterprise Storage window. The host will be reported as attached but manually registered, as shown in the next figure.

[The figure shows the Enterprise Storage window with the Hosts tab selected; each VMware ESX host is listed with its IP address and connection type (Fibre and/or iSCSI) and is reported as manually registered.]
/var/log/messages do not show any error messages.

c. Logs on VMs show I/O errors.

The following is an example of the errors in the /var/log/messages file in a Red Hat (RHEL 5) VM:

Log snippet:

Jun 22 04:07:50 VP-rhel5-lsi kernel: sd 0:0:1:0: SCSI error: return code = 0x08000002
Jun 22 04:07:50 VP-rhel5-lsi kernel: sdb: Current: sense key: Data Protect
Jun 22 04:07:50 VP-rhel5-lsi kernel:     ASC=0x27 ASCQ=0x7
Jun 22 04:07:50 VP-rhel5-lsi kernel: end_request: I/O error, dev sdb, sector 1878153895
Jun 22 04:07:50 VP-rhel5-lsi kernel: Buffer I/O error on device sdb1, logical block 234769229
Jun 22 04:07:50 VP-rhel5-lsi kernel: lost page write due to I/O error on sdb1
Jun 22 04:07:54 VP-rhel5-lsi kernel: sd 0:0:1:0: SCSI error: return code = 0x08000002
Jun 22 04:07:54 VP-rhel5-lsi kernel: sdb: Current: sense key: Data Protect
Jun 22 04:07:54 VP-rhel5-lsi kernel:     ASC=0x27 ASCQ=0x7
Jun 22 04:07:54 VP-rhel5-lsi kernel: end_request: I/O error, dev sdb, sector 1878481959
Jun 22 04:07:54 VP-rhel5-lsi kernel: sd 0:0:1:0: SCSI error: return code = 0x08000002
Jun 22 04:07:54 VP-rhel5-lsi kernel: sdb: Current: sense key: Data Protect
Jun 22 04:07:54 VP-rhel5-lsi kernel:     ASC=0x27 ASCQ=0x7
Jun 22 04:07:54 VP-rhel5-lsi kernel: end_request: I/O error, dev sdb, sector 1878490159

Expanding the Storage Pool

interface is st
Cisco UCS (fNIC):

esxcli system module parameters set -p fnic_max_qdepth=128 -m fnic

Note: The command examples refer to the QLogic (qla2xxx) and Emulex (lpfc820) modules. Use the appropriate module name, based on the output of the previous step.

4. Reboot the ESX host.
5. Open an SSH session to the host as root.
6. To confirm that the queue depth adjustment is applied, use the following command:

esxcli system module parameters list -m <driver>

Note: When using the command, replace <driver> with the module name received in the output of step 2 (for example, lpfc820 or qla2xxx).

The following is an example of a host with a QLogic HBA and the queue depth set to 256:

esxcli system module parameters list -m qla2xxx | grep ql2xmaxqdepth
ql2xmaxqdepth    int    256    Maximum queue depth to report for target devices

vSphere 4.1

To set the HBA queue depth on a host running vSphere 4.1, complete the following steps:

1. Open an SSH session to the host as root.
2. To verify which HBA module is currently loaded, use one of the following commands:

QLogic:  vmkload_mod -l | grep qla
Emulex:  vmkload_mod -l | grep lpfc

The following is an example of a host with a QLogic HBA:

vmkload_mod -l | grep qla
qla2xxx    2    1144

3. To adjust the HBA queue depth, use one of the followi
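The excerpt truncates before the vSphere 4.1 adjustment commands. As a hedged sketch, 4.1 hosts typically set module options with esxcfg-module and then reboot; the queue-depth values shown are examples, and the exact parameter names should be checked against the HBA vendor documentation for the installed driver.

esxcfg-module -s ql2xmaxqdepth=128 qla2xxx            # QLogic (assumed parameter name)
esxcfg-module -s lpfc0_lun_queue_depth=128 lpfc820    # Emulex (assumed parameter name)
# Reboot the host afterward for the setting to take effect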
[The figure shows the Add Ports step of the create storage view wizard, listing the available VPLEX front-end ports. The help pane instructs: select one or more ports; if you selected an initiator on the previous step, you can click Add Ports for Selected Initiators to automatically add the ports associated with the selected initiators. You must select at least one port before you can proceed to the next step, or click Finish to create the storage view without adding all components. Click Next to proceed to the next step, or Back to go back to a previous step.]

Figure 36 Add ports to storage view
0x2a to device naa.60000970000194900098533031314345 failed H:0x0 D:0x2 P:0x0. Valid sense data: 0x4 0x44 0x0.

Adding new data devices (TDAT) on a filled pool / Write balancing

When a thin pool becomes full, writes to TDEVs fail. To remediate the problem, more data devices should be added to the filled thin pool. After adding the TDATs, new writes will be striped only across the newly added TDATs. This may lead to a performance impact on the thin pool, especially if the number of TDATs added is small. Therefore, it is advisable to perform a write-balance operation on the thin pool when adding new TDATs to it. A write-balance operation redistributes written extents from the filled TDATs to the emptier TDATs. The symcli command to perform write balancing is as follows:

symconfigure -sid <sid> -cmd "start balancing on pool <poolName> type=thin;" commit -nop -v

During this operation, the pool state will change from Enabled to Balancing. The write-balance operation runs on all data devices and iterates until the differential threshold among all TDATs is within 10%.

Create thin metadevices

Thin metadevices are created using the same syntax as metadevices for regular volumes. The symcli command to create a thin metadevice is as follows:

symconfigure -sid <sid> -cmd "form meta from dev <meta_head>, config=concatenated; add dev <LUN> to meta <meta_head>;" commit -nop -v

Thin metadevices have to be created before
After the trespass has been completed, only the LUNs on SP B will be reported, as demonstrated in Figure 32.

[Figure 32 shows the Report on LUNs view after the trespass: the LUNs are listed with their hosts and capacities, and all surviving paths are reported on SP B.]

Figure 32 Report on LUNs

EMC VPLEX

For detailed information about EMC VPLEX, refer to the documentation available at https://support.emc.com.

This section includes the following information:

- VPLEX documentation, page 124
- Prerequisites, page 125
- Provisioning and exporting storage, page 125
- Storage volumes, page 128
- System volumes, page 130
- Required storage system set
[The figure shows the Unisphere LUN list with the newly created thin LUNs (LUNs 55 through 57), each reporting its user capacity and consumed capacity against the thin pool.]

2. Add the LUN created into the storage group.
3. Issue the esxcfg-rescan vmhba1 (and vmhba2) command to rescan the HBAs.
4. Issue the esxcfg-mpath -l command to see the LUNs attached to the ESX server and to ensure that the LUN has been added.
5. In Virtual Center, populate the thin LUN to one Linux VM as a pass-through RDM LUN.
6. In the console of the Linux VM, attach the thin LUN, partition it, and make a file system on it. Issue the following commands:

echo "- - -" > /sys/class/scsi_host/host0/scan
fdisk /dev/sdj        (assume the LUN is sdj)
mkfs.ext2 /dev/sdj1

7. Mount the LUN in the VM. Issue the following commands:

mkdir /mnt/test
mount /dev/sdj1 /mnt/test

Execution

During the process, monitor /var/log/messages and /var/log/vmkernel on the ESX Server, and /var/log/messages on the Linux VM.

To do this, complete the following steps:

1. Add a LUN that is over-provisioned to the host. Add a 300 GB thin LUN, which is larger than the size of the thin pool (264 GB), to the ESX 3.5 U2 host as an RDM LUN. The thin pool size is 264 GB and its free space is 22 GB. The system and log alert "disk out of spac
33  VPLEX provisioning and exporting storage process, page 127
34  Create storage view, page 135
35  Register initiators, page 136
36  Add ports to storage view, page 137
37  Add virtual volumes to storage view, page 138
38  VMkernel pluggable storage architecture, page 176
39  Traditional storage provisioning, page 188
40  Virtual (thin) provisioning, page 188
41  Pool Full Threshold, page 189
42  Reserved capacity, page 190
43  LUN threshold and allocation limit, page 191
44  Reserved capacity, page 192
45  Virtual Provisioning on a VMAX or Symmetrix system example, page 208

Tables

1  EMC models and minimum operating system requirements, page 12
2  Useful utilities and functions for VMware ESX Server 3.x, page 21
3  Useful utilities and functions for VMware ESX Server 4.x, page 21
4  Useful utilities and functions for VMware ESXi 5, page 22
5  VMAX or Symmetrix SCSI-3 addressing modes, page 96
6  VMAX or Symmetrix director bit settings for ESX Server environments, page 98
7  Required Symmetrix FA bit settings for connection to V
Oct 11 01:24:50 localhost syslogd: **** FILE /var/log/boot.log ****
Oct 11 01:24:50 localhost syslogd: logmsg: syslog.info<46>, flags 4, from localhost, msg syslogd 1.4.1: restart.
Oct 11 01:24:50 localhost syslogd: Called fprintlog, logging to FILE /var/log/messages
Oct 11 01:24:50 localhost syslogd: syslogd restarted
Oct 11 01:24:50 localhost syslogd: Debugging disabled, SIGUSR1 to turn on debugging.
Oct 11 01:24:50 localhost syslog: syslogd shutdown succeeded
Oct 11 01:24:50 localhost syslogd: Allocated parts table for 1024 file descriptors
Oct 11 01:24:50 localhost syslogd: Starting
Oct 11 01:24:50 localhost syslogd: Called init
Oct 11 01:24:50 localhost syslogd: Called allocate_log, nlogs = 1
Oct 11 01:25:09 localhost sshd(pam_unix)[22777]: authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=sgtgchoiellc.corp.emc.com user=root
Oct 11 01:25:15 localhost sshd(pam_unix)[22779]: session opened for user root by root(uid=0)
Oct 11 01:25:28 localhost exiting on signal 2
Oct 11 01:25:36 localhost syslogd 1.4.1: restart.
Oct 11 01:25:36 localhost syslogd: Allocated parts table for 1024 file descriptors
Oct 11 01:25:36 localhost syslogd: Starting
Oct 11 01:25:36 localhost syslogd: Called init
Oct 11 01:25:36 localhost syslogd: Called allocate_log, nlogs = 1
Oct 11 01:25:36 localhost syslogd: cfline(*.info;mail.none;authpriv.none
- Update or add drivers on existing ESX installations using esxupdate (for ESX only), page 84
- Update or add drivers on existing ESX and ESXi installations using vihostupdate (for both ESX and ESXi), page 84

Install drivers for devices as part of a new ESX installation (for ESX only)

Note: This procedure has changed since the 3.5 version of the driver CD. In ESX 3.5, the driver CD can be used as the boot CD; for ESX 4.0 and later, you will need the ESX installation DVD to begin.

For ESX v3.5

Use this CD as a boot CD only when the new driver must enable the target device on which ESX will be installed. This procedure can only be used with ESX 3.x.

1. Place the driver CD in the CD-ROM drive of the host machine.
2. Start the host machine.
3. When prompted for an upgrade or installation, press Enter for graphical mode.
4. Choose the language you prefer.
5. Select a keyboard type.
6. When you are prompted to swap the driver CD with the ESX 3.5 installation CD, insert the ESX 3.5 installation CD and continue with the ESX installation.

For ESX v4.x

1. Place the ESX installation CD in the CD-ROM drive of the host machine.
2. Start the host machine.
3. Accept the terms of the license agreement.
4. Select a keyboard type.
5. When prompted for Custom Drivers, select Yes to install custom drivers.
6. The installer prompts you to insert the media containing the custom dri
3. Select Virtual Volumes
4. Review & Finish

[The figure shows the Create Storage View wizard. The help pane instructs: enter a name for the storage view and then select the initiators; you can drag and drop items between the Available Registered Initiators list and the Selected Registered Initiators list, or click Next to proceed to the next step without selecting an initiator.]

Figure 34 Create storage view

2. Register initiators, as shown in Figure 35.
[The figure shows the storage system's right-click menu in Unisphere, with options including Create Storage Group, Connectivity Status, Port Management, and Properties; Connectivity Status is selected.]

3. Select the initiator iqn from the list in the Connectivity Status dialog box.
4. Click Register, or click New if the initiator record cannot be found. The Create Initiator Record dialog box displays when New is clicked, as shown in the next figure. A similar dialog box appears when Register is clicked, except that the name of the dialog box is Register Initiator Record.

[The figure shows the Create Initiator Record dialog box, with fields for the initiator name, SP port, initiator type, failover mode, ArrayCommPath, HBA information, and host information.]

5. Update the host information:
   a. Set Initiator Type to CLARiiON Open.
   b. Set the Failover Mode to 1.
   c. Check the ArrayCommPath box.
   d. Select the target SP port to connect to.
6. Click OK to
A) on page 28, iSCSI card on page 29, and Converged Network Adapter (CNA) on page 30.

The Fibre Channel HBA driver functions as a device-driver layer below the standard VMware SCSI adapter driver. The Fibre Channel interface is therefore transparent to the VMware disk administration system.

Fibre Channel rate: 2 GB/s, 4 GB/s, 8 GB/s
I/O bus: PCI, PCI-X 1.0/2.0, PCI-E 1.0a/1.1/2.0
FC ports: single, dual, quad

There are legacy cards with a speed of 1 GB/s; however, these cards were used with earlier versions of ESX. It is recommended to use 4 GB or 8 GB HBA cards starting with ESX 3.x and later to achieve better performance.

To check QLogic HBA parameters, issue the following command:

cat /proc/scsi/qla2xxx/N

For Emulex HBAs:

cat /proc/scsi/lpfc/N

For Brocade HBAs:

cat /proc/scsi/bfa/N

where N is the sequential value of each HBA installed in the system, beginning with the number after the last host adapter number entry in the file.

The parameters contain useful information about the initiator, including:

- HBA model number
- Driver version
- Firmware version
- BIOS version
- Current speed
- Link-down timeout value
- Port-down retry count
- WWPN/WWNN of the initiator
- WWPN/WWNN of the target being connected

Normally the timeout value for link-down events is 30 seconds. To cha
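As a quick sketch of pulling just the items of interest from the proc node (the instance number 1 is a hypothetical example; use the number that matches your adapter):

cat /proc/scsi/qla2xxx/1 | egrep -i 'firmware|bios|speed'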
A port. In this particular example, two switches are used. Using only one switch is supported, but such a configuration would lack redundancy; preferably, a minimum of two switches should be used to add another level of redundancy. Alternatively, for additional redundancy, two separate fabrics could be utilized.

[Figure 1 shows one host with two HBAs connected through two switches to FA 15D and FA 16C ports (d0, d1, c0, c1) on a single VMAX or Symmetrix array.]

Figure 1 One host, two switches, and one VMAX or Symmetrix array

Example 2

In this example, as shown in Figure 2 on page 37, one host with two HBAs is attached through a two-switch fabric to FA ports on four VMAX or Symmetrix arrays. In this configuration, the zones are created with one HBA and one FA port; that is:

- HBA0 is zoned to one 15D0 port on each of the four arrays.
- HBA1 is zoned to one 16C1 port on each of the four arrays.

[Figure 2 shows one host with two HBAs connected through two switches to FA ports on four VMAX or Symmetrix arrays.]

Figure 2 One host, two switches, and four VMAX or Symmetrix arrays

Note: All qualified HBAs are listed in the EMC Support Matrix.

When assigning VMAX or Symmetrix LUNs to a VMware ESX Server host, the LUNs should be assigned to the host across both FAs, since the VMAX or Symmetrix is an active/active array. The VMAX or Symmetrix director ports are zoned to the HBAs on the switch. Devices can be added to the host using either the Symmetrix Management Console (SMC) or Solutions Enabler.

Example for 4.x: Two HBAs with
BA, as the BR-1020 simply encapsulates Fibre Channel traffic within Ethernet frames.

Updating the Brocade CNA firmware and boot BIOS

To update the Brocade driver and firmware:

1. Download the driver package from http://www.brocade.com/cna.
2. Follow the instructions in Installing the Brocade 2.0 CNA driver on page 86. The esxupdate command can be used to install or update the firmware.

Note: The ESX server must be rebooted after installation.

Cisco Unified Computing System with FCoE

The Cisco Unified Computing System (UCS) is a next-generation data center platform that unites compute, network, storage access, and virtualization into a single system configuration. As shown in Figure 16 on page 91, configurations consist of a familiar chassis-and-blade-server combination that works with Cisco's Fabric Interconnect switches to attach to NPIV-enabled fabrics. This allows for a centralized solution combining high-speed server blades designed for virtualization, FCoE connectivity, and centralized management.

Fibre Channel ports on Fabric Interconnect switches must be configured as NP ports, which requires the connected fabric switch to be NPIV-capable. Refer to the latest EMC Support Matrix for currently supported switch configurations.

In each server blade, an Emulex- or QLogic-based converged network adapter (CNA) mezzanine board is used to provide Ethernet and Fibre Channel connectivity for that
DU has completed, return to the main screen and right-click on the array. Select the Storage System Properties option, and then select Software. Select the FLARE Operating Environment so it is highlighted, and select the Commit button. The Commit button activates the new FLARE on the array.

Manual trespass on VNX series and CLARiiON systems to recover the original path

In some cases, when using the MRU or Round Robin native multipath failover policies, it may be desirable to trespass the LUNs back to their original path. This may be performed manually using Unisphere/Navisphere Manager; the instructions for a manual LUN trespass follow.

Note: When the ESX Server is booting from SAN, in some situations the VNX series or CLARiiON LUN that the ESX Server is installed on trespasses. For example, if your ESX Server is booting from the VNX series or CLARiiON system on which an NDU is being performed, the NDU may cause the ESX Server boot LUN to trespass and not auto-restore if it is using native multipath software with the MRU or Round Robin failover policy. In this case, if you happen to reboot the ESX Server after the NDU, it cannot find the disk to boot from. This is expected behavior because of the way the boot LUN setup is done in the HBA BIOS. The fix is to use the Unisphere/Navisphere GUI to trespass the boot LUN back to the default own
During the installation procedure, it is recommended, but not required, that only one LUN be assigned to the Storage Group, for ease of use. After the installation has completed, additional LUNs can be added to the Storage Group.

For ESX 3.x and later, the installation is based on a modified Red Hat version 2.4.21 (or newer) kernel; hence the commands bootfromsan or bootfromsan-text are not necessary.

IMPORTANT: If VMware loses all paths to the array for a long enough period, the disks disappear from the system. A hard reboot is required to bring the system back to a usable state.

For ESX 3.x and later, only SW iSCSI environments are not supported when booting from the SAN.

Booting from VNX series and CLARiiON storage systems

Index

Symbols
vSphere Client, 23

A
adapter card installation, 31
addressing modes, 96
ALUA (Asymmetric Logical Unit Access), 106
ALUA failover mode behavior, 106
Asymmetric Logical Unit Access (ALUA), 106

B
boot disk, 232
booting, 232
booting from Symmetrix storage arrays, 232
booting from VNX/CLARiiON storage arrays, 234
Brocade FCoE CNAs, configuring, 85

C
CEE (converged enhanced ethernet) chip, 75, 82
Cisco Unified Computing System (UCS), 90
CLARiiON, connectivity, 40
Command Descriptor Blocks (CDB), 95
connectivity
  Symmetrix, 34
  VNX/CLARiiON, 40
converged enhanced ethernet (CEE) chip, 75, 82
Converged Network Ad
ESX Server.

Virtual Machine File System

This appendix provides information about the Virtual Machine File System (VMFS):

- VMFS datastore, page 222
- Volume alignment, page 223
- Version history, page 224
- Size limits, page 225

VMFS datastore

VMware VMFS is a high-performance cluster file system for ESX Server virtual machines that allows multiple ESX Servers to access the same virtual machine storage concurrently. Each virtual machine is encapsulated in a small set of files, which are stored on VMFS (by default) on physical SCSI disks and partitions. This set of files mainly includes the following:

- .vmx: the virtual machine configuration file
- .nvram: the virtual machine BIOS
- .vmdk: the virtual machine hard disk
- .vmsd: the dictionary file for snapshots and the associated vmdk

VMFS efficiently stores the entire virtual machine state in a central location to simplify virtual machine provisioning and administration.

Volume alignment

VMware file system volumes created using the VI/vSphere Client are automatically aligned on 64 KB boundaries. A detailed description of track and sector alignment in x86 environments can be referred to in t
ESXi host.

vmware-cmd: Command that provides an interface to perform operations on a virtual machine, including retrieving information about the power state, registering and unregistering the virtual machine, setting configuration variables, and managing snapshots.

resxtop: Command to retrieve performance statistics. The command is included in the vSphere command-line interface (CLI) and is part of the vSphere Management Assistant (vMA); it is the equivalent of esxtop, which runs only inside an ESX service console.

Control interface

This section briefly describes the following:

- VMware Web UI, page 23
- VMware vSphere Client, page 23
- VMware vCenter Server, page 23

VMware Web UI

VMware Web UI is a free option allowing administrators to monitor or manage the server remotely through a web-based graphical interface, by simply typing the host IP in an internet browser window and logging on using an administrator password. One of the main disadvantages of VMware Web UI compared with the VI Client is that only one ESX server can be accessed at a time.

VMware vSphere Client

vSphere Client is a graphical user interface that allows remote connection, by administrators and users of different roles, to the VirtualCenter Management Server or to individual ESX Server installations from any Windows
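A minimal sketch of vmware-cmd usage from an ESX service console follows; the datastore and VM paths are hypothetical examples.

# List the configuration files of all registered virtual machines
vmware-cmd -l

# Query the power state of one virtual machine
vmware-cmd /vmfs/volumes/storage1/vm1/vm1.vmx getstate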
Failback: Yes

2. Select the NIC Teaming tab and check the Override vSwitch failover order box, as shown in the next figure.

[The figure shows the NIC Teaming tab of the VMkernel port properties, with policy exceptions for load balancing, network failover detection, notify switches, and failback, and the failover-order override checked.]

3. Set one active adapter for each VMkernel port and move the rest of the adapters to Unused Adapters. Repeat this step for every VMkernel port on the vSwitch.

Note: There should be the same number of VMkernel ports and network adapters.

4. Activate host-based multipathing by connecting each VMkernel port (for example, vmk0 and vmk1) to the iSCSI initiator from the service console by issuing the following commands:

For vSphere 4:

esxcli swiscsi nic add -n vmk0 -d vmhba32
esxcli swiscsi nic add -n vmk1 -d vmhba32
esxcli swiscsi nic list -d vmhba32

For vSphere 5:

esxcli iscsi networkportal add -n vmk0 -A vmhba32
esxcli iscsi networkportal add -n vmk1 -A vmhba32
esxcli iscsi networkportal list -A vmhba32

Both vmk0 and vmk1 should be listed, and there
[The figure shows the Storage Group Properties window with the Hosts tab selected; the available hosts are listed on the left and the hosts to be connected on the right.]

Figure 22 Storage Group Properties window

4. The VMware host will be listed under Hosts To be Connected, and the OS will report itself as Unknown if the host was manually registered on the array. This is expected behavior.

Performing a VNX series and CLARiiON NDU with VMware ESX Server hosts

The VNX series block and CLARiiON FLARE (storage system) software is installed via a non-disruptive upgrade, or NDU. This process allows users to upgrade the system code level without having to take their hosts offline.

Note: Before performing a non-disruptive upgrade (NDU) on a VNX series or CLARiiON system attached to an ESX Server using native multipathing software, keep track of which LUNs are owned by which storage processor. During an NDU, if native multipathing failover policies other than Fixed are used with VNX series and CLARiiON systems, all LUNs may end up on a single SP after the upgrade. Once the upgrade is complete, manually trespass each LUN, using Unisphere/Navisphere Manager or CLI, to the storage processor that owned the LUN before the NDU process. As a best practice, place LUNs on their default storage processor. This ensures a unif
LEX ports to the storage view.

You can provision storage using the GUI or the CLI. Refer to the EMC VPLEX Management Console Help or the EMC VPLEX CLI Guide, located on http://support.emc.com, for more information.

Figure 33 on page 127 illustrates the provisioning and exporting process.

[Figure 33 shows registered initiators and VPLEX front-end ports grouped in a storage view.]

Figure 33 VPLEX provisioning and exporting storage process

VPLEX with GeoSynchrony v5.x

VPLEX allows easy storage provisioning among heterogeneous storage arrays. After a storage array LUN volume is encapsulated within VPLEX, all of its block-level storage is available in a global directory and coherent cache. Any front-end device that is zoned properly can access the storage blocks.

Two methods are available for provisioning: EZ provisioning and Advanced provisioning. For more information, refer to the EMC VPLEX with GeoSynchrony 5.0 Product Guide, located on http://support.emc.com.
MP: Fixed, Round Robin. Customers must use discretion while choosing the path policy for their configuration, based on the following considerations.

Fixed: The Fixed policy does not provide load balancing by default. Users can configure load balancing manually by selecting a different active path for every LUN. If load balancing is not configured manually, the Fixed path policy will lead to performance issues if I/O from all LUNs uses the same path. The best practice with the Fixed failover policy is to manually configure load balancing.

Round Robin: The Round Robin policy automatically provides load balancing by definition, but does not restore paths to the default owner SP after disruptions such as SP reboots or resets. This can cause issues such as performance degradation due to all paths being on one SP, poor latency, and potential All Paths Down and failover messages in the VMkernel logs.

To avoid these issues, customers must manually restore paths after a disruption, or switch to the Fixed failover policy and manually set up load balancing by distributing paths.

Boot from SAN (BFS)

There is no failover software available while the system is booting. With ALUA, if an ESX host loses all optimal paths to the VNX series or CLARiiON system, ALUA avoids any issues that can be caused when the boot BIOS attempts to boot fr
Navigate to <cd mount point>/offline-bundle/ and locate the <offline-bundle>.zip file.

5. Run the vihostupdate command to install drivers using the offline bundle:

vihostupdate --server <test machine address> --install --bundle <offline-bundle>.zip

For more details on the vihostupdate command, see the vSphere Command-Line Interface Installation and Reference Guide, located at http://www.vmware.com. For ESX 3.5 U2 and later, see the Remote Command-Line Interface Installation and Reference Guide, located at http://www.vmware.com.

Configuring Brocade FCoE CNAs with VMware ESX Server

This section provides information on configuring Brocade FCoE CNAs with the VMware ESX Server, including:

- Installing the CNA, page 85
- Installing the Brocade 2.0 CNA driver, page 86
- Updating the Brocade CNA firmware and boot BIOS, page 89

Installing the CNA

To install the CNA:

1. Install VMware ESX Server 3.5 update 2 or later on your server. For instructions on installing the operating system, refer to the relevant documentation at http://www.vmware.com.
2. Power off the host system.
3. With the host system powered off, install the CNA and make the necessary cable connections, as instructed in the accompanying CNA documentation. The card installs into a single PCI-e slot.
4. Reapply power and allow the system to boot normally.

Installing the Brocade 2.0 CNA d
[The figure shows the Add Virtual Volumes step of the create storage view wizard, with the available unexported virtual volumes listed for selection.]

Figure 37 Add virtual volumes to storage view

Front-end paths

This section contains the following information:

- Viewing the World Wide Name for an HBA port, o
PLEX, page 132
8  Supported hosts and initiator types, page 140
9  VMFS/ESX versions, page 224

Preface

As part of an effort to improve and enhance the performance and capabilities of its product line, EMC from time to time releases revisions of its hardware and software. Therefore, some functions described in this document may not be supported by all revisions of the software or hardware currently in use. For the most up-to-date information on product features, refer to your product release notes.

If a product does not function properly or does not function as described in this document, please contact your EMC representative.

This guide describes the features and setup procedures for VMware ESX Server host interfaces to EMC Symmetrix, VMAX, VNX, and CLARiiON storage systems. This document is meant to assist in the installation and configuration of VMware ESX Server attached to Symmetrix, VMAX, VNX, and CLARiiON systems.

Audience

This guide is intended for use by storage administrators, system programmers, or operators who are involved in acquiring, managing, or operating Symmetrix, VMAX, VNX, and CLARiiON and host devices. Readers of this guide are expected to be familiar with the following topics:

- VMAX, Symmetrix, VNX, and CLARiiON sys
SX Servers. FCoE adapters represent a method to converge both Fibre Channel and Ethernet traffic over a single physical link to a switch infrastructure that manages both storage (SAN) and network (IP) connectivity within a single unit.

Currently supported FCoE Converged Network Adapter (CNA) offerings are:
- Emulex LP21000 and LP21002
- QLogic QLE8042

Adapter installation

Install adapter card
Currently, the VMware ESX Server versions that support Fibre Channel over Ethernet are ESX Server 3.5 Update 2 and later, and ESX 4.0 and later. Always refer to the EMC Support Matrix to verify which servers are supported in FCoE configurations with ESX Server.

Fibre Channel rate: 4 Gb/s
I/O rate: 10 Gb/s
Bus: PCI-E
FC ports: Single/Dual

Installation of ESX/ESXi Server
You may customize the installation according to your server and the amount of memory and hard disk space you have. EMC supports all the in-box drivers that come with the different versions of ESX Server. From ESX 3.x and later versions, there is no need to manually load the adapter driver into the system; ESX will detect the hardware and load the driver automatically. For driver version information, refer to the EMC Support Matrix or the VMware Hardware Compatibility Guide.

When choosing an adapter for your server, it is important to know which adapter is compatible with your server's PCI/PCI-
Support Matrix for VMAX 40K, VMAX 20K, VMAX, VMAX 10K Systems (with SN xxx987xxxx), VMAX 10K Systems (with SN xxx959xxxx), or VMAXe, available through E-Lab Interoperability Navigator (ELN) at http://elabnavigator.EMC.com.

This section demonstrates how to create a thin device, data device, and thin pool on VMAX 400K, VMAX 200K, and VMAX 100K; VMAX 40K; VMAX 20K; VMAX; VMAX 10K Systems (with SN xxx987xxxx); VMAX 10K Systems (with SN xxx959xxxx); and VMAXe using EMC Solutions Enabler, including:
- Setup configuration on page 211
- Steps on page 211
- Monitoring the thin pool on page 217
- Thin pool error code when full on page 217
- Adding new data devices (TDAT) on a filled pool / Write balancing on page 218

Setup configuration
In this example, the thin devices, data devices, and thin pool listed at the start of this section will be created.

Steps
The following steps summarize the process. Each step will be further explained in this section.

Step 1: Create thin devices (TDEV)
symconfigure -sid <sid> -cmd "create dev count=<numberOfTDEV>, size=<n> [MB|GB|CYLS], emulation=FBA, config=TDEV;" prepare -nop
symconfigure -sid <sid> -cmd "create dev count=<numberOfTDEV>, size=<n> [MB|GB|CYLS], emulation=FBA, config=TDEV;" commit -nop

symconfigure -sid 098 -cmd "create dev count=2, size=11000, emulation=FBA, config=TDEV;" commit

Execute a symconfigure operation for symmetrix 000194900098 (y/[n]) ? y

A Configuration Change operation is in progress. Please wait...

Establishing a configuration change session
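Once the commit completes, the new thin devices can be listed for verification. A minimal sketch, assuming the SID 098 used in the output above (output columns vary slightly across Solutions Enabler releases):

# List thin devices on the array and their pool bindings
symcfg -sid 098 list -tdev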
Trespass
You will be prompted to confirm that the LUNs should be trespassed, as can be seen in Figure 31 on page 122. Select Yes to continue and to trespass the LUNs.

[Figure 31 shows the Unisphere LUNs table, filtered for all user LUNs, with a Confirm Trespass LUN dialog listing the faulted LUNs to be trespassed (for example, VNX180171 and WIN172168163 LUNs, 170187_FC1 through 170187_FCOE2, and BFS_VMW LUNs) and asking "Do you wish to continue?".]

Figure 31 Confirm trespass LUNs
[Figure 10 shows the Navisphere Enterprise Storage window, Hosts tab: hosts are listed by WWN and name (for example, l82bi117.lss.emc.com through l82bi234.lss.emc.com), with some reported as "Manually registered" and others as "Host Agent not reachable".]

Figure 10 Host reported as attached but manually registered example

iSCSI
The Internet Small Computer Systems Interface (iSCSI) protocol enables the transport of SCSI blocks through a TCP/IP network. iSCSI works by encapsulating SCSI commands into TCP packets and sending them over an IP network. An example is shown in Figure 11.

Figure 11 SCSI commands encapsulated by Ethernet headers

iSCSI is IP-based traffic and therefore can be routed or switched using standard 10 Mb/s, 100 Mb/s, 1 Gb/s, or 10 Gb/s Ethernet equipment. Traditional Ethernet adapters (NICs) are designed to transfer packetized file-level data among PCs, servers, and storage devices, such as NAS appliances. For a NIC to process block-level data, the data needs to be placed into a TCP/IP packet before being sent over the IP network. This block-level data packet creation and TCP/IP processing is done through the use of iSCSI drivers. The process, known as software (SW) iSCSI, is ex
X/PCI Express slots. Certain adapter models have specific voltage requirements or physical limitations that allow them to work only in specific slots.

There are three general steps to install a card on the server:
1. Open the server cover.
2. Insert the adapter, in the correct direction, into the PCI slot.
3. Connect the Fibre/Ethernet/Twinax cables (for FC HBA, iSCSI HBA, or CNA), with one end connecting to the adapter port and the other end to the connector on the storage system or to a hub/switch port.

Initiator setting and installation

Installation of ESX/ESXi Server
Recommendations
The following are a few recommendations for the installation:
- Use static IP addresses.
- Set the hardware clock when prompted.
- Create at least one user account other than root.
  Note: emacs, samba, and NFS are not enabled by default in the Console OS.
- Reboot the system after completing the installation.

For VMware ESX Server v3.x installations, when the system reboots you are prompted with three options: boot prompt, esx, and service console (debugging mode). The default boot image for VMware ESX Server v2.5.x is esx. For ESX 4.x, there are two options: VMware ESX 4.x (the default boot) and Troubleshooting mode.

HBAs installed in the ESX server do not require parameter changes in the BIOS. Keep the default BIOS and NVRAM settings for HBAs.
X v4.x on page 198.

Virtual Provisioning on VMware ESX v3.x
This section contains the following information:
- Setup on page 195
- Preparation on page 195
- Execution on page 196

Setup
OS and HBAs:
- ESX Server 3.5 U2 with Emulex LPe11002-E 4 Gb HBA
- ESX Server 3.0.1 with QLogic 2462 4 Gb HBA
Switch:
- Brocade 48000
- Cisco
Array:
- CX4-480, FLARE 04.28.000.5.501 or later
- Thin Pool 2: RAID6, FC, 264.016 GB (242.013 GB consumed)

Preparation
Before you set up Virtual Provisioning on the VNX series and CLARiiON systems, complete the following steps:
1. Create a thin LUN in the Unisphere/Navisphere Manager, a sample of which is shown in the following figure, with a size larger than the thin pool's available capacity.

[The figure shows the Create LUN dialog and Storage Pool properties: Thin Pool 2 (RAID6) with 52.063 GB available capacity, alongside existing thin LUNs that oversubscribe the pool's consumed capacity.]
[The LUNs table continues, showing a mix of Faulted and Ready LUNs with their IDs, capacities, and host information; for example, 170187_FC1 (ID 35, 4.000 GB, Faulted), BFS_116166 (ID 288, 30.000 GB, Ready), and BFS_VMW117167 (ID 300, 30.000 GB, Faulted). The view reports 392 of 400 LUNs after filtering.]

Figure 28 LUNs example

5. To view the properties of a LUN, right-click the LUN. A window will provide the specific properties of that LUN.

For example, if you were to select LUN 196, which is allocated to the host, you would see information as shown in Figure 29.

[Figure 29 shows the LUN Properties window with its General, Cache, Prefetch, Statistics, Hosts, Disks, and Folders tabs.]
able is stopped or paused. It is recommended that you put the host in maintenance mode before performing a firmware upgrade.
3. Run the following command:
hbacmd Download <WWPN|MAC> <FileName>
where:
WWPN is the World Wide Port Name of the adapter to which you want to load firmware.
MAC is the MAC address of the NIC or iSCSI port to which you want to load firmware.
FileName is the file name of the firmware image to load; this is the filename specified in Step 1.

Updating the Emulex CEE (Menlo) firmware
FCoE adapters include an additional chip component that requires the latest supported firmware. This chip is commonly referred to as a CEE (converged enhanced ethernet) or Menlo chip, the purpose of which is to handle the convergence of storage (FC) and network (IP) traffic over a single ethernet interface.

To update the CEE (Menlo) firmware on the CNAs:
1. Copy the appropriate CEE (Menlo) firmware file from the CD-ROM or the EMC-approved section of the Emulex website to the appropriate directory, /usr/sbin/hbanyware. The CEE firmware file for the Emulex LP2100x CNA is cee_0_5a5.bin.
2. Execute the command /usr/sbin/hbanyware/hbacmd ListHBA and record the Port WWN information for all adapters within the host.
3. Execute the command /usr/sbin/hbanyware/hbacmd CEEdownload <WWPN> <Filename>, where WWPN is the first Port WWN recorded in Step 2 and Filename is the firmware file name listed in Step 1.
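For illustration, a hypothetical CEE firmware download for one adapter is sketched below; the WWPN shown is a placeholder, not one recorded from a real host:

# List adapters and note each Port WWN
/usr/sbin/hbanyware/hbacmd ListHBA
# Download the CEE firmware to the first recorded port
/usr/sbin/hbanyware/hbacmd CEEdownload 10:00:00:00:c9:3c:f7:88 cee_0_5a5.bin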
able to get a response from the target.

For ESX 3.x, proceed to "Set up the service console (optional for ESX 4.x, not applicable for ESXi 5.0)" on page 54. ESX 3.x requires the vmkiscsid daemon, which resides in the service console, to initiate sessions and handle login and authentication. The actual I/O goes through the VMkernel.

Set up the service console (optional for ESX 4.x, not applicable for ESXi 5.0)
To set up the service console, complete the following steps:
1. In the Configuration tab, Networking section, under the vSwitch that was created, click Properties and then click Add.
2. In Connection Type, select Service Console and click Next. The Add Network Wizard dialog box appears, as shown in the next figure.

[The figure shows the Add Network Wizard, Service Console Connection Settings page: "Use network labels to identify Service Console connections when managing your host."]

3. Under Port Group Properties, select or enter a network label.
4. Under IP Settings, enter the adapter IP address and subnet mask for the service console.
5. Set the Service Console Default Gateway.
6. Click Next.
7. Review the Summary and, if all of the settings are correct, click Finish.
8. To verify that the SW iSCSI initiator network has been correctly set up, open a console and issue the following command:
vmkping <target_ip_address>
You should be able to get a response.
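The same service console port can also be created from the COS command line rather than the wizard. A minimal sketch, assuming classic ESX 3.x/4.x esxcfg tools; the vSwitch name, interface name, and address below are placeholders:

# Add a "Service Console" port group to the existing vSwitch
esxcfg-vswitch -A "Service Console" vSwitch1
# Create the service console interface on that port group
esxcfg-vswif -a vswif1 -p "Service Console" -i 10.14.17.81 -n 255.255.255.0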
abler
To use Solutions Enabler, complete the following steps:
1. Install Solutions Enabler for the ESX host.
   Note: A license is required.
2. Go to the default location: cd /opt/emc/SYMCLI/V6.5.1/bin
3. Run symcfg discover.
4. Subsequently, run symcfg list.
5. Ensure the attached VMAX or Symmetrix has a local attachment.
6. Verify the ESX host is visible to the array by issuing the following command:
symmask -sid <last 3 digits of the symm frame> list logins -dir <dir number> -p <port number> | more
An initiator will be shown as Logged In after a LUN has been masked to the initiator. Refer to the EMC Symmetrix Management Console Online Help for information on how to mask a LUN.

Using Symmetrix Management Console (SMC)
To use the SMC, complete the following steps:
1. Log in to the SMC from a browser.
   Note: Disable pop-up blockers.
2. Expand the Symmetrix Arrays folder on the left panel and then select the connected array.
3. Expand the Symmetrix Masking folder.
4. Expand the Host Bus Adapters folder.
5. Expand the iSCSI folder. The initiator should be listed, with a record for every target connected. Ensure that the HBA and Port Flags in effect are correctly set. Similarly, an initiator will be shown as Logged In only after a LUN has been masked to the initiator, as shown in the next figure. Refer to the Symmetrix Management Console online help, available on http://support.EMC.com
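As an illustration, a hypothetical check against director 7E, port 0, on a frame whose serial number ends in 098 (both values are placeholders):

symmask -sid 098 list logins -dir 7e -p 0 | more

Each zoned initiator WWN should appear in the output, and its Logged In column should read Yes once a device has been masked to it.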
ad-only. A reboot of the VM is required to resume write access.

[root@VP-rhel5-lsi test]# rm large.tmp
rm: remove regular file `large.tmp'? y
rm: cannot remove `large.tmp': Read-only file system

5. Check the following logs on your ESX Server and VM:
a. The vmkernel logs on the ESX server (/var/log/vmkernel) show the expected 0x7 0x27 0x7 error code.

Log snippet:
Jun 22 04:07:50 SGELVMW169 vmkernel: 3:12:15:11.875 cpu0:4104 NMP: nmp_CompleteCommandForPath: Command 0x2a (0x41027f494140) to NMP device "naa.6006016035c01c0050fd82bcef79df11" failed on physical path "vmhba2:C0:T0:L15" H:0x0 D:0x2 P:0x0 Valid sense data: 0x7 0x27 0x7
Jun 22 04:07:50 SGELVMW169 vmkernel: 3:12:15:11.875 cpu0:4104 ScsiDeviceIO: 1672: Command 0x2a to device "naa.6006016035c01c0050fd82bcef79df11" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x7 0x27 0x7
Jun 22 04:07:50 SGELVMW169 vmkernel: 3:12:15:11.875 cpu0:4096 NMP: nmp_CompleteCommandForPath: Command 0x2a (0x41027f452840) to NMP device "naa.6006016035c01c0050fd82bcef79df11" failed on physical path "vmhba2:C0:T0:L15" H:0x0 D:0x2 P:0x0 Valid sense data: 0x7 0x27 0x7
Jun 22 04:07:50 SGELVMW169 vmkernel: 3:12:15:11.875 cpu0:4096 ScsiDeviceIO: 1672: Command 0x2a to device "naa.6006016035c01c0050fd82bcef79df11" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x7 0x27 0x7

b. The host messages logs on the ESX Server
addressing on page 95.

Each port on a device attached to a fabric is assigned a unique 64-bit identifier called a World Wide Port Name (WWPN). These names are factory-set on the HBAs in the hosts and are generated on the Fibre Channel directors in the VMAX or Symmetrix system.

SCSI-3 FCP addressing

Note: For comparison to Ethernet terminology, an HBA is analogous to a NIC card, and a WWPN to a MAC address.

Note: The ANSI standard also defines a World Wide Node Name (WWNN), but this name has not been consistently defined by the industry.

When an N_Port (host server or storage device) connects to the fabric, a login process occurs between the N_Port and the F_Port on the fabric switch. During this process, the devices agree on such operating parameters as class of service, flow control rules, and fabric addressing. The N_Port's fabric address is assigned by the switch and sent to the N_Port. This value becomes the source ID (SID) on the N_Port's outbound frames and the destination ID (DID) on the N_Port's inbound frames.

The physical address is a pair of numbers that identify the switch and port, in the format s,p, where s is a domain ID and p is a value associated with a physical port in the domain. The physical address of the N_Port can change when a link is moved from one switch port to another switch port. The WWPN of the N_Port, however, does not change. A Name Server in t
[The remainder of the figure shows the Storage Arrays pane with 0 of 18 items selected.]

Figure 36 Add ports to storage view

4. Add virtual volumes to the storage view, as shown in Figure 37.

[Figure 37 shows the VPLEX Management Console (https://172.23.179.195/smsflex/VPlexConsole.html) at the Select Virtual Volumes wizard step: "Select the unexported virtual volumes for this storage view and click Next to continue. You can also click Finish to finish the operation and create the storage view without adding all components." A list of available unexported virtual volumes, such as large_array_raid_479_vol through large_array_raid_485_vol, is shown alongside the cluster navigation tree.]
al connection schemes for 8 paths.

- Four X-Brick Cluster: A host may have up to 16 paths per device.

The following figures display the logical connection schemes for 4 paths, 8 paths, and 16 paths, respectively. [Each figure shows host initiators connected through Switch A and Switch B to the X-Brick storage controllers.]

Configuring a VMware vSphere host
This section describes the required procedures for configuring a VMware vSphere host running ESX with the XtremIO Storage Array.

Note: VMware and vSphere are trademarks or registered trademarks of VMware.

Note: The XtremIO Storage Array supports both VMware ESX and VMware ESXi. For simplification, all references to "ESX server/host" apply to both ESX and ESXi, unless stated otherwise.

Host parameters
This section details the ESX host parameter settings necessary for an optimal configuration when using XtremIO storage.

Note: The following setting adjustments may cause hosts to over-stress other arrays connected to the ESX host, resulting in performance degradation while communicating with them. To avoid this in mixed environments with multiple array types connected to the ESX host, compare these XtremIO rec
apter 30

D
data migrations 181

E
EMC storage devices, connecting with ESX Server 24
Emulex FCoE CNAs, configuring 71
ESX Server, connecting with EMC storage 24

F
failover modes, VNX/CLARiiON 106
FCoE initiator configurations 71
Fibre Channel 34
Fibre Channel over Ethernet (FCoE)
  configuring Emulex CNAs 71
  configuring QLogic FCoE CNAs 80

H
Host Bus Adapters 28
  installation 26
  installation methods 26
  installation recommendations 32

I
iSCSI cards 29
iSCSI protocol 50

L
LUN trespass, manual 117

M
manual LUN trespass 117
Master Boot Record (MBR) 232
media 26
Menlo chip 75, 82
multipathing in VMware ESX/ESXi 4.x, ESXi 5.x 172

P
path, fixed 168
path, Most Recently Used (MRU) 168
pool capacity: monitoring, adding, deleting 189
PowerPath/VE 177

Q
QLogic FCoE CNAs, configuring 80

R
Raw Device Mapping (RDM) 229
Raw disk 228

S
SCSI-3 Controller Commands (SCC) 95
SCSI-3 FCP addressing 95
service console 20
SMC (Symmetrix Management Console) 66, 99
Solutions Enabler, using 65
Symmetrix
  addressing 94
  booting the VMware ESX Server 232
  connectivity 34, 64
  Volume Logix device 232
Symmetrix storage arrays, booting from 232

T
thin provisioning 188

U
UCS (Unified Computing
as mirroring or striping. Devices can only be created from extents or other devices. A device's storage capacity is not available until you create a virtual volume on the device and export that virtual volume to a host. You can create only one virtual volume per device. There are two types of devices:
- Simple device: a simple device is configured using one component, which is an extent.
- Complex device: a complex device has more than one component, combined using a specific RAID type. The components can be extents or other devices (both simple and complex).

Distributed devices
Distributed devices are configured using storage from both clusters and therefore are only used in multi-cluster plexes. A distributed device's components must be other devices, and those devices must be created from storage in different clusters in the plex.

Rule sets
Rule sets are predefined rules that determine how a cluster behaves when it loses communication with the other cluster, for example, during an inter-cluster link failure or cluster failure. In these situations, until communication is restored, most I/O workloads require specific sets of virtual volumes to resume on one cluster and remain suspended on the other cluster.

VPLEX provides a Management Console on the management server in each cluster. You can create distributed devices using the GUI or CLI on either management server. The default rule set used by the GUI makes the cluster used t
at support ALUA, and all VNX series. NMP supports ALUA only with FLARE 28 and later.

Note: For more information on ALUA, refer to "VNX series and CLARiiON failover modes" on page 106.

Active/Active
In Active/Active arrays, such as VMAX or Symmetrix, dynamic load balancing can be achieved through PowerPath. Active/Active means all paths are equal and can be used simultaneously to achieve optimum performance. With the Fixed policy, the VMware ESX Server uses all paths to the storage device for I/O processing to implement and support dynamic load balancing.

Active/Passive
In Active/Passive arrays, such as VNX series and CLARiiON, dynamic load balancing is achieved through PowerPath. To implement dynamic load balancing, the VMware ESX Server host uses all active paths to the owning storage processor, drastically increasing performance. A trespass is forced to the other storage processor only when no paths are available on the owning SPs, or when there is no access to devices via those owning SPs. PowerPath will automatically restore the I/O back to the active paths on the default storage processor owner. The trespass back to the original path is done automatically through PowerPath.

PowerPath/VE will have the same load-balancing behavior whether the host is registered under failover mode 4 (ALUA) or failover mode 1 (Active/Passive).

PowerPath commands
PowerPath commands can be executed only from the remote ESX host, but it is recommended to use the
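As an illustration, PowerPath/VE on ESX is typically managed with the remote rpowermt utility from a separate Windows or Linux management host. A minimal sketch, with a placeholder ESX host name:

# Verify the PowerPath/VE license registration on the ESX host
rpowermt check_registration host=esx-host-01.example.com
# Display all managed devices and their path states
rpowermt display dev=all host=esx-host-01.example.com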
ata pertaining to a raw device. This mapping file becomes a symbolic link from the VMFS volume to the raw LUN. Instead of allocating the raw LUN to a Virtual Machine, the mapping file is now used. The metadata contained in the mapping file is used to manage and redirect disk access to the physical device.

Boot from SAN
This appendix provides information about booting the VMware ESX Server from VMAX, Symmetrix, VNX series, and CLARiiON systems:
- Booting from VMAX or Symmetrix storage arrays, page 232
- Booting from VNX series and CLARiiON storage systems, page 234

Booting from VMAX or Symmetrix storage arrays
For VMware ESX Server v2.5.x and later, hosts have been qualified for booting from VMAX or Symmetrix devices interfaced through Fibre Channel, as specified in the EMC Support Matrix. Booting from SAN is supported for VMware's native failover functionality and PowerPath/VE, but currently only ESX v4.0 supports PowerPath/VE.

Note: If VMware's native failover functionality is not used and it is necessary to use a VMAX or Symmetrix device as a boot disk, you should shut down the Virtual Machines and the ESX Server during any maintenance procedures that might cause the boot disk to become unavailable to the host.

The VMAX or Sy
[The Create LUN dialog continues: Number of LUNs to create: 1; LUN Name: ThinProvisioning_LUN, with an option to automatically assign LUN IDs as LUN names; Apply, Cancel, and Help buttons.]

3. In the General tab, set User Capacity to 1 TB, which is larger than the available capacity. The Thin box must be checked; otherwise, an error prompt will stop the creation of this kind of thin LUN. Name the LUN ThinProvisioning_LUN.

4. When the thin LUN is created, check the Storage Pool properties again, since the LUN oversubscribed the Storage Pool volume. The percent subscribed and oversubscribed-by figures can be found at the bottom right. This Storage Pool is 114.192% subscribed on virtual capacity.

[The Storage Pool Properties dialog shows Thin_Pool_1: RAID Type RAID1/0, State Ready, Disk Type SATA II; Percent Full 0.218, Percent Subscribed 114.192; Consumed Capacity 2.000 GB, Total Subscribed Capacity 1046.064 GB; User Capacity 916.056 GB, Oversubscribed By 130.008 GB.]

5. In the LUNs tab, add the thin LUN to the ESX server's storage group in the VNX series and CLARiiON Unisphere.

[The dialog lists the available LUNs, including ThinProvisioning_LUN (ID 32, 1024.000 GB) under Thin LUNs, with an Add Selected control.]
basis. SPC-2 must be enabled for all initiators on a per-host basis, globally: if SPC-2 conformance is enabled for a specific VMAX or Symmetrix LUN visible to a specific host, SPC-2 conformance must be enabled for all paths to that same LUN from that same host. Offline and online migrations from SPC-2 disabled to SPC-2 enabled configurations are discussed in the white paper Enabling SPC-2 Compliancy on EMC Symmetrix DMX Devices Connected to VMware VI3 Environments, available on http://support.EMC.com.

VMAX or Symmetrix director bit settings
When attaching a VMware ESX Server host to a VMAX or Symmetrix storage array, the following director settings must be enabled, as shown in Table 6.

Table 6 VMAX or Symmetrix director bit settings for ESX Server environments

ESX version | FC with switch attached (FC-SW)  | FC direct attached (DAS)    | iSCSI connections
ESX 5       | C, VCM, SC3, UWN, PP, SPC-2      | C, VCM, SC3, UWN, SPC-2     | C, SC3, UWN, SPC-2
ESX 4.x     | C, VCM, SC3, UWN, PP, SPC-2      | C, VCM, SC3, UWN, SPC-2     | C, SC3, UWN, SPC-2
ESX 3.x     | C, VCM, SC3, UWN, PP, SPC-2      | C, VCM, SC3, UWN, SPC-2     | C, SC3, UWN, SPC-2

The bit settings can be configured using EMC Symmetrix Management Console or Solutions Enabler. Refer to the EMC VMAX and Symmetrix DMX Simple Support Matrices, located at http://elabnavigator.emc.com, for the most up-to-date info
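For illustration, flags can also be overridden per initiator with Solutions Enabler. A minimal sketch, with a hypothetical initiator WWN and SID; verify the exact flag spellings against your Solutions Enabler release before use:

# Enable the C, SC3, and SPC2 flags for one host initiator
symmask -sid 098 -wwn 10000000c93c8f1c set hba_flags on C,SC3,SPC2 -enable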
be presented to the ESX Server. Refer to the latest EMC Support Matrix for the most up-to-date information on which EMC arrays are supported via iSCSI attach to VMware ESX Server 3.x and 4.x.

Enable services and agents in the initiator firewall
Configure the service console firewall to accept services and installed management agents, enabling the services and agents to access the ESX Server, by completing the following steps:
1. Log in to the VMware VI Client as administrator.
2. In the VI Client, under Hosts and Clusters, click the server.
3. Click the Configuration tab and then click Security Profile.
4. Click Properties to open the Firewall Properties dialog box. This dialog box lists services and management agents.
5. If not already checked, enable the Software iSCSI Client by checking the box.
6. Click OK.

Once the iSCSI initiator ports on the ESX Server are configured, iSCSI storage must be presented to the ESX Server. Refer to the latest EMC Support Matrix for the most up-to-date information on which EMC arrays are supported via iSCSI attach to VMware ESX Server 3.x.

Network configurations for ESX 4.x and ESXi 5.0
In an environment with two or more NICs, SW iSCSI may be set up through the use of a single-vSwitch or dual-vSwitch network configuration, as shown in Figure 13 on page 57. In ESX 4.x and ESXi 5.0, a single vSwitch containing two NICs can be configured to use NIC teaming or port binding to provide failover
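On classic ESX, the same firewall service can also be opened from the service console instead of the VI Client. A minimal sketch, using the service name defined by ESX 3.x/4.x:

# Enable the software iSCSI client service (opens outbound TCP 3260)
esxcfg-firewall -e swISCSIClient
# Query the current firewall configuration to verify
esxcfg-firewall -q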
[The /var/log/messages excerpt continues with syslogd configuration debug output: symbolic name mappings and log-file destinations such as /var/log/secure, /var/log/maillog, /var/log/cron, /var/log/spooler, and /var/log/boot.log. None of it bears on the thin-pool-full condition and it is omitted here.]

Virtual Provisioning on VMware ESX v4.x
This section contains the following information:
- Setup on page 198
- Preparation on page 199

Setup
OS and HBAs:
- ESX 4.1 RTM and Emulex LPe11002-E 4 Gb 2-port HBA
Switch:
- Brocade DS-5000B
Array:
- CX4-480, FLARE 30
- Thin_Pool_1: RAID 1/0, 1 TB, 916.056 GB consumed, SATA

Preparation
Complete the following steps:
1. Create a Storage Pool with the name Thin_Pool_1 by using two available disks. The free capacity is 916.056 GB.

[The Pools table shows the existing Thin_Pool_0: Ready, RAID6, SATA II, user capacity 1830.112 GB, free capacity 1598.098 GB, 232.014 GB allocated/consumed.]

2. Right-click the Storage Pool and choose Create LUN. The General tab displays.
both the VMkernel port and the service console to be on the same vSwitch, as shown in Figure 12.

[Figure 12 shows a single vSwitch (vSwitch2) with a Service Console port (vswif2, 11.10.10.253) and a VMkernel port (11.10.10.249) sharing one physical adapter (vmnic5).]

Figure 12 VMkernel and service console on a single vSwitch

- Supports NIC teaming. While configuring a software iSCSI initiator using NICs, VMware recommends NIC teaming to provide failover between multiple NICs, which means the NICs are on the same vSwitch, using the same IP address.
- Supports both send and static targets discovery.
- Requires only the VMkernel port to be in the network configuration.
- Supports multipathing.

Setting up SW iSCSI
- Supports NIC teaming.
- Supports ALUA (failover mode 4) for VNX series and CLARiiON systems.
- Supports EMC PowerPath/VE.

Note: For more information about EMC PowerPath/VE, refer to http://www.emc.com.

To set up the SW iSCSI initiator, complete the following steps.
Note: Some steps are optional, as noted, depending on the ESX version used. Each step is outlined in more detail in this section.
1. Set up VMkernel, on page 52
2. Set up the service console (optional for ESX 4.x, not applicable for ESXi 5.0), on page 54
3. Enable the SW iSCSI configuration, on page 55
4. Add send targets, on page 55
5. Add static targets (not applicable for ESX 3.5), on page 55

Set up VMkern
c
SLI Rev: 3
NPIV Supported: VPIs max 100, VPIs used 0
RPIs: max 512, used 7
Vport List:
Link Up - Ready
PortID 0x230015
Fabric
Current speed 4G

Physical Port Discovered Nodes: Count 2
t01 DID 241b00 State 06 WWPN 50:06:04:82:d5:0:c8:d1 WWNN 50:06:04:82:d5:0:c8:d1
t00 DID 282d00 State 06 WWPN 50:06:04:82:d5:0:c8:de WWNN 50:06:04:82:d5:0:c8:de

The WWPNs and WWNNs of the array ports are shown in bold.

VNX series and CLARiiON connectivity
Note: Refer to the EMC Support Matrix or contact your EMC representative for the latest information on qualified hosts, host bus adapters, and connectivity equipment.

EMC Access Logix must be installed on the VNX series or CLARiiON storage system to which the VMware ESX Server is being attached. VMware ESX Server uses the Linux version of the Navisphere Agent/CLI. The Naviagent must be loaded on the Service Console of the ESX Server, while the Naviagent CLI is supported on both the Service Console and the Virtual Machines. In VMware ESX Server 4.x, both the native multipath software and EMC PowerPath/VE automatically perform registration to VNX series and CLARiiON systems; Unisphere/Navisphere Agent is not required on VMware ESX Server 4.x.

VMware ESX Server owns the HBAs, not the operating systems running in the virtual machines. As a result, the VMware ESX Server's HBAs will be registered on the VNX s
110. capabilities Port binding can be enabled by overriding vSwitch failover order such that each NIC is only bound to one VMkernel port Refer to Set up 1 1 VMkernel to network adapters mapping on page 61 for steps to perform port binding EMC Host Connectivity Guide for VMware ESX Server Connectivity Note EMC recommends having the two NICs VMkernel ports are on different subnets Ensure the SP ports belonging to the same Storage Processor are also on different subnets SPA portO 10 14 19 45 SPA port 1 10 14 17 54 SPB port 0 10 14 19 46 SPB port 1 10 14 17 55 Virtual Switch vSwitcht Remove Properties VMkermel Port Physical Adapters C wMkernel 2 e Kg v3 1000 Ful C vmk1 10 14 19 22 KB vmnk2 1000 Ful 2 VMkernel Port C3 vMkernel e vmko 10 14 17 80 Figure 13 Two NICs on a single vSwitch iSCSI configuration iSCSI Connectivity Likewise two vSwitches can be created on ESX 4 x and ESXi 5 0 and each vSwitch can be bound to one or more NICs as shown in Figure 14 SPA port 0 10 14 19 45 SPB port 0 10 14 19 46 SPA port 1 10 14 17 54 SPB port 1 10 14 17 55 Virtual Switch vSwitchl Remove Properties VMcemal Port Physkal Adapters C wMkernel 2 e Kg veoc2 1000 Ful C vmk1 10 14 19 22 Virtual Switch vSwitch2 Remove Properties VMcernal Port Physical Adapters O vMkernel e Kj vmnic3 1000 Ful 2 vtnkO 10 14 17 80 Figure 14 Two NICs in dual vSwitch iSCSI con
111. celerated VMFS data italaaoon requires compliant hardware Mm o Maec 1 Enable hardware accelerated VMFS data mowemert requires complet hardware Mn o Max 1 9 Ifany of the above parameters are not enabled adjust them and click OK in the corresponding Advanced Settings section to apply the changes Disabling VAAI In general it is recommended that you enable VAAI features for features optimal performance when using vSphere with XtremIO Storage However in some cases mainly for testing purposes it is necessary to disable VAAI Note For further information about disabling VAAI refer to VMware KB article 1033665 on the VMware website http kb vmware com selfservice microsites search do language en US amp cmd displayKC amp externalld 1033665 To disable VAAI on the ESX host complete the following steps 1 Launch the vSphere client and navigate to Inventory gt Hosts and Clusters 150 EMC Host Connectivity Guide for VMware ESX Server In the Inventory section locate and select the ESX host Click the Configuration tab Under Software click Advanced Settings Select DataMover nT F SP N Modify the DataMover HardwareAccelerated Move parameter to disabled that is set to 0 7 Modify the DataMover HardwareAcceleratedInit parameter to disabled that is set to 0 8 Select VMFS3 9 Modify the VMFS3 HardwareAcceleratedLocking parameter to disabled that is set to 0 10
112. comes available ESX ESXi Server uses the single most recently used path to the disk If this path becomes unavailable the host switches to an alternative path and continues to use the new path while it is available There is no automatic failback in the MRU policy The ESX ESXi host does not automatically revert back to the preferred path Round Robin uses an automatic path selection rotating through all available paths and enabling load balancing across the paths For Active Passive arrays only active paths on the owning SP will be 168 EMC Host Connectivity Guide for VMware ESX Server esxcfg mpath l Multipathing in VMware ESX ESXi Server used in RR For Active Active but not ALUA arrays all paths will be used in RR VMware ESX Server v3 5 and ESX Server v3i enhanced the native multipathing functionality by providing RR policy from ESX ESXi Server v3 5 and later Note PowerPath is supported and available only from v4 0 Service console is unsupported for ESXi Access to PowerPath is available only through the remote powermt rpowermt application not locally This CLI is available for installation on a Linux or Windows host physical or virtual VMware ESX Server v3 x has introduced the esxcfg mpath 1 command to view the configuration and status of the paths of device This command is executed from the service console command line and returns the path information for a LUN device Overview Multipathing i
113. cron none var log messages Virtual Provisioning on VMware ESX v4 x This secton contains the following information Setup on page 198 Preparation on page 199 Setup OS and HBAs e ESX 4 1 RTM and Emulex LPe11002 E 4 Gb 2 port HBA Switch e Brocade DS 5000B e Array e CX4 480 FLARE 30 e Thin_Pool_1 RAID 1 0 1TB 916 056GB consumed SATA EMC Host Connectivity Guide for VMware ESX Server Preparation Complete the following steps 1 Create a Storage Pool with the name Thin Pool 1 by using two available disks The free capacity is 916 056 GB Pools Ve Fier for RAIO Type al Name Thin_Pool_o off Ready RAIDS RAID Type Drive Type User Capacity Free Capacity Allocated GB eConsunved Subscribed GD SATA II 1830 112 1598 098 232014 m 1 Selected Creste Delete Properbes Expand 2 Right click the Storage Pool and choose Create LUN The General tab displays Virtual Provisioning with VNX series and CLARIION Virtual Provisioning Virtual Provisioning General advanced Storage Pool Properties Storage Pool Type Pool RAID Type Storage Pool for new LUN r Capacity Available Capacity 916 056 GB RAID1 0 Mirrored Redundant Individual Ac v Thin Pool 1 v RAID Group New Consumed Capacity 0 000 GB rLUN Properties vi Thin User Capacity 1 v TB v LUN ID 32 w Number of LUNs to cre
114. d Checking Device Reservations sse Allowed Initiating COMMIT of configuration changes Queued EMC Host Connectivity Guide for VMware ESX Server Virtual Provisioning COMMIT requesting required resources Obtained Go alse COMME Rm Done New symdevs 1900 1903 Terminating the configuration change session Done The configuration change session has successfully completed 2 Verify that the data device has been created symdev sid sid list datadev nonpooled symdev sid 098 list datadev nonpooled Symmetrix ID 000194900098 Device Name Directors Device Cap Sym Physical SA P DA IT Config Attribute Sts MB 1900 Not Visible 2 07A DD 2 Way Mir N A DT RW 4097 1901 Not Visible 2 08B CA 2 Way Mir N A DT RW 4097 1902 Not Visible 2 08A DA 2 Way Mir N A DT RW 4097 1903 Not Visible 2 07A CC 2 Way Mir N A DT RW 4097 Note the differences between data devices and thin devices Data devices have attributes of DT RW status and the back end directors are listed Step 3 Create a thin pool A thin pool named TD is created symconfigure sid sid cmd create pool poolName type thin commit nop symconfigure sid 098 cmd create pool TP type thin commit nop A Configuration Change operation is in progress Please wait Establishing a configuration change session Established Per
115. d ESXi 5 x Compared to ESX 3 5 it eliminates some limitations enhances performance and adds some new capabilities Improvements include following Full GUI based administration is used instead of command line based Storage VMotion between different storage types of FC ISCSI and NFS are all allowed e Moving thick VMDKs to thin format is supported e Migration from VMDKs to RDMs is allowed as well as the reverse Change block tracking is used to enhance the performance and efficiency of the migration process so 2x memory and CPU is no longer required For details to perform Storage VMotion refer to the vSphere Basic System Administration Manual available at http www VMware com 184 EMC Host Connectivity Guide for VMware ESX Server Virtual Provisioning This chapter provides information about EMC Virtual Provisioning and VMware ESX Note For further information regarding the correct implementation of EMC Virtual Provisioning refer to the Symmetrix Virtual Provisioning Implementation and Best Practices Technical Note available on http support EMC com Virtual PrOVISIODInE iiic e epe egeta ntt ee rsen een 186 Traditional versus Virtual thin Provisioning 188 Monitoring adding and deleting pool capacity 189 Virtual LUN architecture and features 191 e VMware Virtual Machine File System with thin LUN 193
116. d On FCID In Fabric generated Port Nane User Type Node Nane 10009990c 9345180 1086 934bicS 1 Hae 93d 1 Hie 93d NULL NULL 10000008c974d189 1 HOHNER II4bicsS 1 88000008C 73 10000006c97 Fibre d2900 Ye Fibre Fibre Fibre d3Ube Fibre Fibre Fibre NU Fibre NU Fibre NU Fibre NU Fibre Fibre F F F NUL NULL k ibre ation FA 16C 8 generated Port User Type Node Mane Nane Fabric ibre ibre ibre ibre ibre ibre LOGGSCSRc734b188 10AHAAAAcII4b18A 1 88 EKLIODCUEESU i LV KLIDE 1 nae 93d ni 1HHHNNN I Id sHal 100000888c93dI3 be 10800000Gc9IdIAbe NULL NULL NULL NULL 41308 LED Fffc2 ation FA 16C 1 User Node Mane generated Type Port Nane FCID Fabric Fibre NULL NULL fffctii CLI verification screen From the host side running the following commands lists the details of the initiator ports For Emulex HBA cat proc scsi lpfcdd N Fibre Channel For QLogic HBA cat proc scsi qla2xxx N where N indicates the file for each adapter in the system These commands also lists the WWNs and WWPNs of the array ports that the host has been zoned to For example cat proc scsi lpfc820 6 Emulex LightPulse Fibre Channel SCSI driver 8 2 0 30 49vmw Emulex LP21000 M 10GE PCIe FCoE Adapter on PCI bus 0b device 00 irq 137 BoardNum 0 Firmware Version 1 00A5 A3D1 00A5 Portname 10 00 00 00 c9 3c 8 1c Nodename 20 00 00 00 c9 3c 8 1
117. denied ema eie mie 125 Provisioning and exporting storage sirsenis 125 Storage volutnes sioceasecn tese cernere ter rteses 128 EM EMC Host Connectivity Guide for VMware ESX Server System Voliumes eite tet titii pie Ae 130 Required storage system setup seieriga 131 Required VMAX or Symmetrix FA bit settings 131 Supported storage arrays 132 Initiator settings on back end arrays sss 133 Host connectivity sesssssssseeeeenenene entente 133 Exporting virtual volumes to hosts sssrini 133 Front end patlisicao winders i erben perra tenet 138 Configuring VMware ESX hosts to recognize VPLEX iur 140 EMC XtremlQ ierra rnnt rrr nter e cereis 141 Best practices for zoning and subnetting 141 Configuring a VMware vSphere host 144 Configuring Fibre Channel HBA sss 151 Configuring multipath software 156 File system and application requirements 162 Chapter 5 Multipathing in VMware ESX ESXi Server uu 168 Path poliCies ect terere rites peines 168 Multipathing in VMware ESX Server with VMAX or SY NUNC EIN d 170 Multipathing in VMware ESX 3 x with CLARION 171 Native multipathing in VMware ESX ESXi 4 x and ESXi 5 x 172 VMAX or Symmetrix policy sss 172 VNX series and CLARION policy
118. devs 4 12 adc E ad hae es M NE US NU Done Committing configuration changes eee eee Committed Terminating the configuration change session Done The configuration change session has successfully completed Alternatively data devices can be added in a disabled state symconfigure sid lt sid gt cmd add dev lt TDATID gt lt TDATID gt to pool lt poolName gt type thin member_state Disable commit nop symconfigure sid 98 cmd add dev 1903 to pool TP type thin member_state Disable commit nop A Configuration Change operation is in progress Please wait Establishing a configuration change session Established Performing Access checksS 0c cee eee eens Allowed Checking Device Reservations see Allowed Locking devices rererere et s SH ede xu dre pU ea ow awe s ce S weg wai Locked Committing configuration changes esee Reordering Adding pool dewS ccu e ye IRE REUS EUROS EU ea X OY XOR ACER Ree E Done Committing configuration changes eee Committed Terminating the configuration change session Done The configuration change session has successfully completed Data devices can be dynamically added and enabled at a later stage when the pool threshold is low symconfigure sid sid cmd enable dev lt TDATID gt in pool lt poolName gt type thin commit nop EN EMC Host Connectivit
119. disabled by default To make the most of VAAI with vSphere 4 1 you should install and enable the XtremIO ESX 4 1 VIB on the ESX server To download the XtremIO ESX 4 1 VIB complete the following steps 1 Access the EMC support page for XtremIO to acquire the XtremIO ESX 4 1 VIB package 2 Download the XtremIO ESX 4 1 VIB To enable VAAI with XtremIO storage perform on each ESX 4 1 host 1 Upload the XtremIO ESX 4 1 VIB to the ESX host using SCP 2 Open an SSH session to the host as root 3 Install the XtremIO ESX 4 1 VIB on the ESX server using the following command esxupdate update bundle lt path on server loffline bundle zip nosigcheck 4 Reboot the ESX host 5 Open an SSH session to the host as root 6 Load the XtremIO VAAI module using the following command vmkload mod xio vaaip 7 Verify the XtremIO module is successfully loaded using the following command vmkload mod 1 8 Create the VAAI rules and filters using the following commands e esxcli corestorage claimrule add autoassign type vendor vendor XtremIO model pluginzVAAI FILTER claimrule class Filter esxcli corestorage claimrule add autoassign type vendor vendor XtremIO model plugin XIO VAAIP claimrule class VAAI EMC Host Connectivity Guide for VMware ESX Server Array Information and Configuration e esxcli corestorage claimrule load claimrule class all esxcli corestorage claimrule run claimru
120. e message displays I O failed after a long attempt since it reached the thin pool limit Once it filled up 22 GB free space in thin LUN there are I O error messages The junk files are still filling up the thin LUN but it became extremely slow Part of the log messages are Write files to fill the LUN or use the dd command to generate I O on the thin LUN dd if dev zero bs 307200k count 1024 of large tmp Virtual Provisioning Oct 11 01 24 47 localhost kernel SCSI error lt 0 0 10 0 return code 0x8000002 Oct 01 24 47 localhost kernel Current sdj sense key Hardware Error Oct 11 01 24 47 localhost kernel Additional sense Internal target failure Oct 11 01 24 47 localhost kernel end_request I O error dev sdj sector 44350711 Oct 11 01 24 47 localhost kernel SCSI error lt 0 0 10 0 return code 0x8000002 Oct 01 24 47 localhost kernel Current sdj sense key Hardware Error Oct 11 01 24 47 localhost kernel Additional sense Internal target failure Oct 11 01 24 47 localhost kernel end_request I O error dev sdj sector 44322055 Oct 11 01 24 47 localhost kernel SCSI error lt 0 0 10 0 return code 0x8000002 Oct 01 24 47 localhost kernel Current sdj sense key Hardware Error Oct 11 01 24 47 localhost kernel Additional sense Internal target failure Oct 11 01 24 47 localhost kernel end request I O error dev sdj sector 44354735 Oct 01 24 50 localhost syslogd sym
121. e such as Putty can be enabled The root user can modify settings for ssh Telnet and FTP using the security configuration page in the management interface http lt servername gt security config or edit the ssh configuration file directly through service console It is recommended that you not run resource consuming tasks on the service console since it competes with other virtual machines for processor cycles in VMkernel scheduling With the Service Console removed vSphere 5 has a reduced hypervisor code base footprint less than 150 MB vs ESX s 2 GB vSphere 5 completes the ongoing trend of migrating management functionality from the local command line interface to remote management tools With vSphere 5 instead of using the Service console VMware created remote command lines such as the vSphere Command Line Interface vCLI and PowerCLI to provide command and scripting capabilities in a more controlled manner These remote command line sets include a variety of commands for configuration diagnostics and troubleshooting For low level diagnostics and the initial configuration menu driven and command line interfaces are available on the local console of the server EN EMC Host Connectivity Guide for VMware ESX Server Useful VMware ESX ESXi Server utilities and functions Utility Function This section describes useful utilities and functions for ESX Server 3 x 4 x and ESXi 5 Table 2 describes useful utiliti
122. e such as a failure of an HBA or SP Multipathing can be used to provide path redundancy for the ESX platform The VMware ESX Server provides support for native multipathing software as a feature of the OS VMware ESX ESXi Server provide automated failover capabilities for HBAs and SPs VMware ESX ESXi Server native multipathing support multiple paths to a variety of Active Active and Active Passive array types such as VMAX Symmetrix VNX series and CLARiiON However dynamic load balancing software such as PowerPath is not supported in VMware ESX Server prior to v4 0 Native multipathing functionality is supported with Active Active and Active Passive arrays and is meant mainly for failover ESX ESXi multipath supports the presentation of storage from multiple heterogeneous storage arrays concurrently LUN trespassing is supported for Active Passive arrays such as VNX series and CLARiiON However there are alternate path policies that can be implemented MRU Most Recently Used and Fixed path policy available in v3 x v4 x and v5 x and round robin RR path policy available beginning in v4 0 The following policies are briefly described This is the default policy for Active Active array and always uses the single preferred path to access the disk An alternate path is taken only when the preferred path is not available to access the disk The ESX ESXi host automatically reverts back to the preferred path as soon as the path be
123. e 200 Active a FLARE Operating Environm 02 07 700 5 020 Active Mirror views4 02 07 5 018 Active SANCopy 02 07 5 020 Active Y b Please select the software packages to install Browse Remove Clear All r Software To Install Selected Files b b lt lt Back Next gt finish Cancel Figure 26 Software Package Selection window The progress of the NDU may be monitored via the Software Operation Progress History window In the window each step completed will be marked with a check mark as shown in Figure 27 on page 117 116 EMC Host Connectivity Guide for VMware ESX Server Array Information and Configuration is Software Operation Progress History Cx700_3 1 70 oj xl lise Software Operation o9o Operation Install i ce Packages FLARE perating Environnent Software Operation Progress History Disabling the cache Running check scripts Y Installing software on secondary SP Y Quiescing I O on secondary SP7 Deactivating software on secondary SP Activating software on secondary SP Rebooting secondary SP Waiting due to DMP delay Y Installing software on primary SP7 Quiescing O on primary SP Deactivating software on primary SP Y Activating software on primary SP Rebooting primary SP Y Operation completed successfully Y Figure 27 Software Operation Progress History window Once the N
124. e Online Help Recommendations for optimal performance This section provides the recommendations and steps to achieve optimal performance with the VMAX and Symmetrix DMX Storage Arrays The recommended multipathing setting is Round Robin for the VMAX and Symmetrix DMX Storage Array families Set the Multipath Round Robin policy for the I O operation limit to a value of 1 for optimal performance with VMAX or Symmetrix Storage Arrays Currently the VMware default value for this setting is 1000 To change the default policy setting from Fixed to Round Robin for EMC VMAX or Symmetrix devices complete the following steps 1 Open the vSphere CLI recommended or the service console 2 Run the following command For ESXi 5 x esxcli storage nmp satp set default psp VMW_PSP_RR satp VMW_SATP_SYMM For ESXi ESX 4 x esxcli nmp satp setdefaultpsp psp VMW PSP RR satp VMW SATP SYMM VMAX and Symmetrix array configurations 103 Array Information and Configuration Additional information Set the path switching I O operation limit to 1 for optimal performance with the EMC VMAX or Symmetrix Family Storage Arrays VMAX and DMX devices Currently the VMware default value for this setting is 1000 1 Open the vSphere CLI recommended or the service console 2 Run the following commands For ESXi 5 x To check the I O operations limit esxcli storage nmp psp roundrobin deviceconfig get
125. e TCP port number can be changed If it is not changed the default is 23333 Refer to the VMware Server Configuration Guide available on http support EMC com for more details on how to configure the ESX firewall To install the OneCommand Manager application CLI complete the followiing steps 1 Login to the ESX Server Host COS 2 Copy the elxocmcore esxNN lt kit version gt lt arch gt rpm file to a directory on the install machine 3 Goto the directory where the rpm file is located and install it Type rpm U elxocmcore esxNN lt kit version gt lt arch gt rpm where NN is 35 for an ESX 3 5 system The rpm contents are installed in usr sbin hbanyware The OneCommand Manager application CLI is also located in this directory To upgrade the firmware and Boot bios complete the following steps 1 The firmware zip file must be downloaded from the Emulex website The file must be unzipped and extracted to a folder on a local drive e The firmware and boot code for OCe10102 FM E and OCe10102 FX E CNA is S2200017 ufi e The firmware and boot code for OCe10102 DX E and OCe10102 IM E CNA is 1374004 ufi EMC Host Connectivity Guide for VMware ESX Server 2 Ifthe adapter is already connected to a boot device the system must be in a state in which this type of maintenance can be performed O activity on the bus has been stopped Cluster software or any other software that relies on the adapter to be avail
126. e driver modules in the VMkernel vm support Command used to gather information about the VMware ESX Server itself and virtual machines to assist in debugging issues or to obtain performance information for the virtual machines vmkvsitools Command used to display information about Ispci ps hwclock VMware hwinfo bootOption vmksystemswap vmkisciadmin Command used to perform iSCSI administration VMware ESX ESXi Server Introduction to VMware Infrastructure vSphere Table 3 Useful utilities and functions for VMware ESX Server 4 x page 2 of 2 Utility Function Description vmkiscsi tool Command used to configure the iSCSI software and hardware initiators and basic iSCSI management functions esxcfg mpath Command used to list all or specific paths on the system with its detailed information or to specify a specific path for operations esxcfg rescan Command used to rescan HBA or iSCSI initiators and update their status esxcli Command used to set the path policy mask paths preview and manage third party storage arrays Command can also be used to manage iSCSI NIC bindings Table 4 describes useful utilities and functions for VMware ESXi 5 Table 4 Useful utilities and functions for VMware ESXi 5 Utility Function Description vmkfstools Command to create and manipulate virtual disks file systems logical volumes and physical storage devices on an ESX
127. e operations e Six X Bricks 48 concurrent full clone operations To adjust the maximum number of concurrent full cloning operations complete the following steps 1 Launch the vSphere client to log in to the vCenter Server 2 From the top menu bar select Administration gt vCenter Server Settings 3 Click Advanced Settings 4 Locate the config vpxd ResourceManager maxCostPerHost parameter and set it according to the XtremIO cluster size If you cannot find the parameter type its name in the Key field and the corresponding value in Value field and then click Add 5 Click OK to apply the changes vStorage API for Array VAAI is a vSphere API that offloads vSphere operations such as Integration VAAI virtual machine provisioning storage cloning and space reclamation settings to storage arrays that supports VAAI XtremIO Storage Array fully supports VAAI This section describes the necessary settings for configuring VAAI for XtremIO storage depending on your vSphere version Refer to Host Configuration for Microsoft Windows for information on disabling VAAT and removing any installed VAAI module EMC XtremlO 147 Array Information and Configuration VAAI setting with vSphere 5 x When using vSphere version 5 x VAAI is enabled by default Therefore no further action is required to ensure that VAAI is used with XtremIO storage VAAI settings with vSphere 4 1 When using vSphere version 4 1 VAAI is
128. ed 6 Click Devices and locate the XtremIO volume in the Details lower pane 7 Right click the XtremIO volume and click Manage Paths to display the Manage Paths window for the XtremIO volume This window lists all the discovered paths to the XtremIO volume 8 Open the Path Selection drop down list and select Round Robin VMware policy 9 Click Change to apply your selection EMC XtremlO Array Information and Configuration 10 Confirm that all listed paths to the XtremIO volume are set with Active I O status as shown in the following screen KtremlO Fibre Channel Disk naa 5 1400cSdcc 200011 Manage Paths Saon o Storage Array Type VMW SATP DEFAULT AA Pate Runtime Nane vmhba2 C0 T4 1 4 vmhba2 C0 T L4 T Target 0 01 43 80 21 3 ce 0f 0 01 43 80 21 03 ce 0 B 0 01 43 00 18 60 0 13 010 1243100 18060150 2 4 vmhba2 C0 T L4 0 01 43 80 09 ae b4 M 0 01 43 80 09 ae b amp ae 4 vmhba2 C0 T7 14 0 01 43 40 12 07 94 b 7 0 01 43 80 12 07 9 b6 4 4 4 4 Active YO Active VO Active VO Active VO Active VO Active UO Active VO vmhba1 C0 T9 14 0 01 43 80 21 63 ce 0d 0 01 43 80 2 Led conte vmhbai1 C0 T L4 0 01 43 84 09 ae b4 ad 0 01 43 80 09 ae bu vmhba1 C0 T7 44 90 01 43 80 12 07 9d b 0 01 43 80 12 07 9 80 Name fc 00 143802 1e07033 00 143802 160 7032 c 00 143802 te 3ce0f 00 143802 be JceOe iaa 140 Sec 200011 Runti
129. …
To set up the VMkernel, complete the following steps:
1. Log in to the VMware VI Client as administrator.
2. From the inventory panel, select the server to be configured.
3. Click the Configuration tab and click Networking.
4. Click Add Networking.
5. In Connection Type, select VMkernel and click Next. The Network Access page displays.
6. Either select an existing vSwitch or click Create a virtual switch.
   Note: iSCSI traffic should be separated from other network traffic.
7. Check the box next to each adapter that you want to connect to the vSwitch. The adapters that you select should appear in the Preview panel, as shown in the next figure. The VMkernel Connection Settings window displays.

[Screen capture: Add Network Wizard, VMkernel Connection Settings page ("Use network labels to identify VMkernel connections while managing your hosts and datacenters").]

8. Click Next.
9. Under Port Group Properties, select or enter a network label.
10. Under IP Settings, enter the adapter IP address and subnet mask for the VMkernel.
11. Set the VMkernel Default Gateway. This must be a valid address.
12. Click Next.
13. Review the Summary and, if all of the settings are correct, click Finish.
14. For ESX 4.x and ESXi 5.0 only: to verify that the software iSCSI initiator network has been correctly set up, open a console window and issue the following command:
    vmkping <target_ip_address>
    You should be able to receive replies from the iSCSI target.
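For hosts managed from the service console, the same network setup can be sketched with the esxcfg-* commands; this is a hedged example only, and the vSwitch name, uplink, port group label, and addresses are placeholders.

    # Create a vSwitch, attach an uplink, and add a VMkernel port for iSCSI:
    esxcfg-vswitch -a vSwitch1
    esxcfg-vswitch -L vmnic2 vSwitch1
    esxcfg-vswitch -A "iSCSI" vSwitch1
    esxcfg-vmknic -a -i 10.10.10.239 -n 255.255.255.0 "iSCSI"

    # Verify connectivity to the iSCSI target:
    vmkping <target_ip_address>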
130. …er. This problem can be avoided by connecting to the array in ALUA failover mode.
1. Open a web browser and type in the IP address of the VNX series or CLARiiON system your host is attached to.
2. Once the GUI displays the login window, type in the user name and password for the array.
3. Click on the array to expand the listing of properties.
4. Click on the SP that holds the active path for the LUNs. The LUNs allocated to that VMware ESX Server host should be listed, and should be associated with the host's name and the vmhba device numbers.

For example, refer to the LUNs 192, 193, 196, and 197 owned by host 182bi199 in Figure 28.

[Screen capture: EMC Unisphere LUNs view (Pool LUN listing), showing each LUN's name, ID, state, user capacity (GB), and host information, including the owning host name and the corresponding vmhba and naa device identifiers.]
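The same ownership information can be pulled from the command line with Navisphere/Unisphere Secure CLI; this is a hedged sketch, with the SP address and LUN number as placeholders (credentials may also come from a Navisphere security file).

    # Report the default and current owner SP of LUN 192:
    naviseccli -h <SP_IP> getlun 192 -owner

    # List storage groups to see which hosts each LUN is presented to:
    naviseccli -h <SP_IP> storagegroup -list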
131. …series or CLARiiON system and assigned to a Storage Group. The virtual machines will be assigned LUNs through the VMware ESX Server Service Console.

The following are two examples of zoned hosts:
- Example 1 on page 41
- Example 2 on page 43

Example 1: In this example, as shown in Figure 5 on page 42, one host with two HBAs is attached to one VNX series or CLARiiON system using two separate switches. Two SP ports on each SP within the system are being used: HBA0 is zoned to SPA0 and to SPB1; HBA1 is zoned to SPA1 and to SPB0. The zones should be composed of a single initiator and a single target, so they would be created with one HBA and one SP port. Two HBAs, with two paths each to the system, totals four paths; if using single-initiator/single-target zoning, there are four zones (see the sketch after this example).

In this particular example, two switches are used. Using only one switch is supported, but such a configuration would lack redundancy. Preferably, a minimum of two switches should be used, as this adds another level of redundancy. Alternatively, for additional redundancy, two separate fabrics can be utilized.

Figure 5: One host, two switches, and one VNX series or CLARiiON system

Example 2: In this example, as shown in Figure 6, one host with two HBAs is attached, using a two-switch fabric, to four VNX series or CLARiiON system SPs. In this configuration, the zones
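As a worked illustration of the single-initiator/single-target practice in Example 1, the following hedged sketch shows one of the four zones being created on a Brocade fabric switch; the zone and configuration names and the SP-port WWPN are hypothetical (the HBA WWPN is taken from the registration example elsewhere in this guide).

    zonecreate "esx_hba0_spa0", "21:00:00:e0:8b:09:27:64; 50:06:01:60:10:60:08:a1"
    cfgcreate "esx_fabric_a_cfg", "esx_hba0_spa0"
    cfgenable "esx_fabric_a_cfg"
    cfgsave

The remaining three zones (HBA0 to SPB1, HBA1 to SPA1, HBA1 to SPB0) would be created the same way, each pairing exactly one initiator with one target.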
132. Table 2 describes useful utilities and functions for ESX Server 3.x.

Table 2: Useful utilities and functions for VMware ESX Server 3.x

Utility/Function: Description
fdisk: Command used to create and manipulate partition tables.
vmkfstools: Command used to create and manipulate files on LUNs owned by the VMware ESX Server host.
vmkload_mod: Command used to view, load, and remove driver modules in the VMkernel.
vmkdump: Command used to manage the VMkernel's dump partition.
vm-support: Command used to gather information about the VMware ESX Server itself and the virtual machines, to assist in debugging issues or to obtain performance information for the virtual machines.
vmkiscsi-tool: Command used to configure the iSCSI software and hardware initiators, and basic iSCSI management functions.
esxcfg-mpath: Command used to list all or specific paths on the system with detailed information, or to specify a specific path for operations.
esxcfg-rescan: Command used to rescan HBA or iSCSI initiators and update their status.

Table 3 describes useful utilities and functions for ESX Server 4.x.

Table 3: Useful utilities and functions for VMware ESX Server 4.x (page 1 of 2)

Utility/Function: Description
fdisk: Command used to create and manipulate partition tables.
vmkfstools: Command used to create and manipulate files on LUNs owned by the VMware ESX Server host.
vmkload_mod: Command used to view, load, and remove driver modules in the VMkernel.
133. [Screen capture: Register Initiator Record dialog, with Initiator Type set to CLARiiON Open, ArrayCommPath checked, Failover Mode set, the HBA WWN pair 20:00:00:E0:8B:09:27:64:21:00:00:E0:8B:09:27:64 shown under HBA Information, "This HBA Belongs to" set to New Host, Host Name 182bi199.lss.emc.com, and IP Address 172.23.189.199.]

Figure 8: Registering for a failover-enabled environment example

Because no Naviagent is used on the VMware ESX Server, you will receive a warning message when registering the host. Figure 9 on page 48 shows an example of the warning message.

[Screen capture: warning message stating that the initiator has been manually registered, and that it is assumed there is no host agent present on this host.]

Figure 9: Warning message example

Because the Naviagent is not being used, this warning is to be expected and is acceptable.

7. Repeat Step 1 through Step 4 for each HBA in the VMware ESX Server system.
8. To verify that your host has been properly registered, right-click on the array and go to the Hosts tab. The host will be reported as attached but manually registered, as in the example of the system named 182bi199.lss.emc.com shown in Figure 10 on page 49.
134. …figuration:

    symcfg -sid <sid> show -pool <poolName> -thin -detail

    symcfg -sid 98 show -pool TP -thin -detail

    Symmetrix ID: 000194900098

    Pool Name                    : TP
    Pool Type                    : Thin
    Dev Emulation                : FBA
    Dev Configuration            : 2-Way Mir
    Pool State                   : Enabled
    # of Devices in Pool         : 3
    # of Enabled Devices in Pool : 3
    Max Subscription Percent     : None

    Enabled Devices (3):
    Sym Dev   Total Tracks   Alloc Tracks   Free Tracks   Full (%)   State
    1900      65544          12             65532         0          Enabled
    1901      65544          12             65532         0          Enabled
    1902      65544          0              65544         0          Enabled
    Tracks    196632         24             196608        0

    Thin Devices (2):
    Sym Dev   Total Tracks   Pool Subs (%)   Pool Allocated Tracks (%)   Pool Written Tracks (%)   Status
    18FE      165000         84              12 (0)                      0 (0)                     Bound
    18FF      165000         84              12 (0)                      0 (0)                     Bound
    Tracks    330000         168             24 (0)                      0 (0)

When a thin device is bound to a thin pool, by default Enginuity automatically allocates 12 tracks to each TDEV. A pre-allocation may be configured to allocate more than 12 tracks to the TDEV. Allocation of tracks is done in a round-robin fashion across the TDATs. The TDEV can be mapped and masked to an ESX server as a normal LUN.

Monitoring the thin pool

The thin pool threshold may be monitored with symcli or SMC. The recommended pool utilization per pool is between 60% and 80%. The symcli command to monitor the threshold of a thin pool is as foll
135. …figuration.

Setting up multipathing with all iSCSI adapters on a single vSwitch, for ESX 4.x and ESXi 5.0

To set up multipathing with all iSCSI adapters on a single vSwitch for ESX 4.x and ESXi 5.0, the following steps must be completed. Each step is outlined in more detail in this section:
1. "Add iSCSI adapters to existing vSwitch" on page 59
2. "Set up 1:1 VMkernel-to-network-adapter mapping" on page 61

Figure 15 on page 59 shows an example of a vSwitch configuration.

[Screen capture: Virtual Switch vSwitch1 (Remove/Properties), with VMkernel ports VMkernel 2 (vmk1, 12.10.10.239) and iSCSI (vmk0, 11.10.10.239), and physical adapters vmnic3 and vmnic2, both at 1000 Full.]

Figure 15: vSwitch configuration

Add iSCSI adapters to existing vSwitch

To add iSCSI adapters to an existing vSwitch, complete the following steps:
1. Log in to the VMware VI Client as administrator.
2. From the inventory panel, select the server to be configured.
3. Click the Configuration tab and click Networking.
4. In the Configuration tab, Networking section, under the vSwitch that was just created, click Properties.
5. Select the Network Adapters tab, and then click Add. The Adapter Selection window displays, as shown in the next figure.

[Screen capture: Add Adapter Wizard, Adapter Selection page ("New adapters may be taken from a pool of unused ones, or transferred from an existing…").]
136. …for the QLogic QLE8042 CNA is Q84AF100.bin. The boot code file for the QLogic QLE8142 and QLE8152 is Q8010198.bin. This is the combined binary file, which includes the binaries for the firmware, PXE, FCode, UEFI, and BIOS.

Execute the command /opt/QLogic_Corporation/SANsurferCLI/scli, and then select options 7 (Utilities), then 1 (Flash update), then 1 (FCoE Engine), and then 1 (Update Entire Image). The utility prompts you to enter a file name; enter the name of the BIOS file listed in Step 3. The utility should report that the update has completed successfully. Press Enter to continue, then select options 0 (Return to Main Menu), then 12 to exit the SANsurferCLI utility.

Updating the QLogic CEE (Menlo) firmware

FCoE adapters include an additional chip component that requires the latest supported firmware. This chip is commonly referred to as a converged enhanced ethernet (CEE) or Menlo chip, the purpose of which is to handle the convergence of storage (FC) and network (IP) traffic over a single ethernet interface.

To update the CEE (Menlo) firmware on the CNAs:
1. Copy the appropriate CEE (Menlo) firmware file from the CD-ROM, or from the EMC-approved section of the QLogic website, to the appropriate directory: /opt/QLogic_Corporation/SANsurferCLI. The CEE firmware file for the QLogic QLE8042 CNA is mlo_fw_v1.2.0.bin.
2. Using the SANsurfer Command Line Interface (CLI), from the CLI menu select 10 (Utilities).
137. …

Thin devices can be mapped and masked to a host just like a regular VMAX or Symmetrix device. This mapping and masking can be done prior to binding the thin device to a thin pool. When a thin device is bound to a thin pool, Enginuity automatically allocates 12 tracks (768 KB) to the thin device, unless otherwise specified. A thin device can be replicated using EMC VMAX or Symmetrix local and remote replication products.

Data device

A data device (TDAT) is a non-addressable VMAX or Symmetrix device that is not visible to the host. It cannot be mapped, masked, replicated, or configured as a metadevice. Data devices can be dynamically added to a thin pool to increase the size of the thin pool. A data device should also be protected. A data device provides physical storage space for thin devices and does not support dynamic sparing.

Thin pool

A thin pool contains an aggregate amount of space, made up from the data devices that have been assigned to the pool. It can be dynamically grown or reduced by adding or removing data devices. Note the following (a hedged binding example appears after this list):
- At least one data device is required to bind a TDEV.
- You must use the same protection type for all of the TDATs in the pool.
- It is recommended that you use the same size for all of the TDATs in the pool.
- It is preferable to consolidate thin pools; VMAX or Symmetrix supports a maximum of 510 pools.
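To tie these pieces together, the following is a minimal, hedged Solutions Enabler sketch of binding thin devices to a pool; the SID, TDEV IDs, and pool name reuse the examples from this chapter and should be treated as placeholders.

    # Bind two thin devices to thin pool TP (Enginuity then pre-allocates
    # 12 tracks per TDEV by default):
    symconfigure -sid 098 -cmd "bind tdev 18FE:18FF to pool TP;" commit -nop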
138. …for bfa will exist for the storage driver, and bna for the network driver.

ESXi 5 Server

To install the Brocade CNA driver on the VMware ESXi Server:
1. Place the host in maintenance mode.
2. Copy the Brocade driver package to the vMA and untar it.
3. Run the install script: brocade_install_esxi.sh
4. After the host updates successfully, exit from maintenance mode.
5. Using the vSphere Client, right-click the ESXi host and choose the Exit Maintenance Mode option.
6. Reboot the ESXi server.
7. After the ESXi server reboot is complete, run the following command to make sure the drivers are installed:
   vmkload_mod -l
   The Brocade drivers should now display in the list.

Install drivers for devices as part of a new ESX installation (for ESX only)

Note: This procedure has changed since the 3.5 version of the driver CD. In ESX 3.5, the driver CD can be used as the boot CD. For ESX 4.0 and later, you will need the ESX installation DVD to begin.

For ESX v3.5: Use this CD as a boot CD only when the new driver must enable the target device on which ESX will be installed. This procedure can only be used with ESX 3.x.
1. Place the driver CD in the CD-ROM drive of the host machine.
2. Start the host machine.
3. When prompted for an upgrade or installation, press Enter for graphical mode.
4. Choose the language you prefer.
5. Select a keyboard type.
6. After you are prompted to swap the d
139. …forming Access checks ..................... Allowed
    Checking Device Reservations ................ Allowed
    Committing configuration changes ............ Reordering
    Creating pools .............................. Done
    Committing configuration changes ............ Committed
    Terminating the configuration change session  Done

The configuration change session has successfully completed.

Step 4: Add data devices to the thin pool

Data devices are added to the thin pool and enabled. They are enabled so that data tracks can be allocated to TDEVs. A thin pool is enabled upon the adding of TDATs to the pool.

    symconfigure -sid <sid> -cmd "add dev <TDATID>:<TDATID> to pool <poolName>, type=thin, member_state=ENABLE;" commit -nop

    symconfigure -sid 098 -cmd "add dev 1900:1902 to pool TP, type=thin, member_state=ENABLE;" commit -nop

    A Configuration Change operation is in progress. Please wait...

    Establishing a configuration change session . Established
    Performing Access checks .................... Allowed
    Checking Device Reservations ................ Allowed
    Locking devices ............................. Locked
    Committing configuration changes ............ Reordering
    Adding pool devs ............................ Done
    Enabling pool
140. …full, capacity usage becomes suspended, and the following message is displayed:

[Screen capture: Virtual Machine Message reading "There is no more space for virtual disk <virtual disk name>.vmdk. You might be able to continue this session by freeing disk space on the relevant volume, and clicking the Retry button. Click the Abort button to terminate this session.", with Retry and Cancel buttons.]

At this point, the VMware administrator can resolve the out-of-space situation on the XtremIO cluster and prevent the guest OS in the VMs from crashing.

5. Multipathing in VMware ESX/ESXi Server

This chapter contains installation information for the VMware ESX/ESXi Server with VMAX, Symmetrix, VNX series, and CLARiiON systems, including:
- Overview ........ 168
- Multipathing in VMware ESX Server with VMAX or Symmetrix ........ 170
- Multipathing in VMware ESX 3.x with CLARiiON ........ 171
- Native multipathing in VMware ESX/ESXi 4.x and ESXi 5.x ........ 172
- PowerPath/VE for VMware ESX/ESXi 4.x and ESXi 5.x ........ 177

Overview

Path policies: Fixed, Most Recently Used (MRU), Round Robin (RR).

Multipathing allows continuous access to the SAN in the event of device or path failur
141. …g a different OS, such as Windows.

Cisco UCS FNIC

To adjust the HBA I/O throttle of the Cisco UCS FNIC HBA, complete the following steps:
1. In the UCSM navigation tree, click the Servers tab.
2. In the navigation tree, expand Policies and Adapter Policies.
3. Click the FC Adapter Policy Linux or FC Adapter Policy VMWare.
4. In the main window, expand the Options drop-down.
5. Configure the I/O Throttle Count field to 1024.
6. Click Save Changes.

vSphere 5.x

To adjust the HBA queue depth on a host running vSphere 5.x, complete the following steps:
1. Open an SSH session to the host as root.
2. To verify which HBA module is currently loaded, use one of the following commands:
   - QLogic:         esxcli system module list | grep ql
   - Emulex:         esxcli system module list | grep lpfc
   - Cisco UCS FNIC: esxcli system module list | grep fnic
   The following shows an example of a host with a QLogic HBA:
       # esxcli system module list | grep ql
       qla2xxx   true   true
3. To adjust the HBA queue depth, use one of the following commands:
   - QLogic: esxcli system module parameters set -p ql2xmaxqdepth=256 -m qla2xxx
   - Emulex: esxcli system module parameters set -p lpfc0_lun_queue_depth=256 -m lpfc820
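After the host reboots, it is worth confirming that the parameter took effect; the following is a hedged vSphere 5.x example for the QLogic module (substitute the module name reported in step 2).

    # List the configured parameters for the qla2xxx module and filter
    # for the queue-depth setting:
    esxcli system module parameters list -m qla2xxx | grep ql2xmaxqdepth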
142. …ge space in vSphere 5.1:

Use the following command to reclaim space, up to a certain percentage, from the specified datastore:

    vmkfstools -y <percentage>

For example:

    # vmkfstools -y 99

In the example, the command attempts to reclaim 99% of the datastore capacity.

Note: In vSphere 5.1, the space reclaim command cannot claim back space larger than 2 TB. To avoid this issue, use a 2 TB datastore, or issue multiple space-reclaim commands with a lower percentage parameter value. For example, for a 4 TB datastore, issue vmkfstools -y 50 to reclaim 50 percent of the space, and then repeat the command to reclaim the other 50 percent of the datastore (see the sketch after this section).

To reclaim storage space in vSphere 5.5:

Use the following UNMAP command to reclaim space from a specified datastore:

    esxcli storage vmfs unmap -l <datastore>

In this case, the UNMAP command attempts to reclaim the maximum space possible; that is, the reclaimed space can be larger than 2 TB.

Out-of-space VM suspend and notification with Thin Provisioning (TPSTUN)

TPSTUN is a VAAI primitive that enables the array to notify vSphere when a LUN is running out of space due to thin-provisioning over-commit. The command causes all of the virtual machines on that LUN to be suspended. XtremIO supports this VAAI primitive.

A virtual machine provisioned on a LUN that is nearing
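Returning to the vSphere 5.1 note above, the following is a hedged sketch of the multi-pass workaround for a 4 TB datastore; the datastore name is a placeholder, and vmkfstools -y must be run from within the datastore's root directory.

    cd /vmfs/volumes/<datastore_name>
    vmkfstools -y 50    # first pass: reclaim up to 50% of the free space
    vmkfstools -y 50    # second pass: reclaim from the remaining free space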
143. …/var/log/messages and /var/log/vmkernel on the ESX Server, and /var/log/messages on the Linux VM.

In the test, the dd command is used to generate I/O on the thin LUN, to simulate writing large files to the thin LUN:

    dd if=/dev/zero bs=1048576k count=1024 of=/large.tmp

This example command creates a file called large.tmp. The block size is 1048576 KB (1 GB) and the count is 1024, so the generated file size should be 1 TB.

When the free physical capacity has been filled, the VNX series and CLARiiON will give a critical alert, such as the following:

[Screen capture: Unisphere "All Systems > Dashboard, Systems by Severity" panel showing the system SADL-CX4-40-1 with one Critical alert.]

1. Click the alert to display the details. In this example, the display shows that the Storage Pool is full and capacity needs to be increased:

    Severity: Critical
    System: SADL-CX4-40-1
    Created: Jun 22, 2010, 4:07:49 AM
    Message: Storage Pool Thin Pool 1 is 100% full.
    Full Description: Server applications have filled up 100% of Storage Pool Thin Pool 1.
    Recommended Action: You may want to increase the Storage Pool's capacity by adding more disk drives to it.
    Event Code: 0x7600

2. From the Storage menu at the top of the main screen for your array in Unisphere, go to Storage > Pools/RAID Groups. This takes you to the Pools tab of the Storage menu. The following screen shows that Thin_P
144. …the ESX host and log in to an account with administrator capability.
2. Place the driver CD in the CD-ROM drive of the ESX host.
3. Mount the driver CD.
4. Navigate to <cd mount point>/offline-bundle/ and locate the <offline-bundle>.zip file.
5. Run the esxupdate command to install drivers using the offline bundle:
    esxupdate --bundle=<offline-bundle>.zip update

Update or add drivers on existing ESX and ESXi installations using vihostupdate (for both ESX and ESXi)

Prerequisite: Before you can update or patch an ESX/ESXi host from the command line, you must have access to a machine on which you can run the VMware vSphere Command-Line Interface (vSphere CLI). You can install the vSphere CLI on your Microsoft Windows or Linux system, or import the VMware vSphere Management Assistant (vMA) virtual appliance onto your ESX/ESXi host.

To update or add drivers on existing ESX and ESXi installations using the vihostupdate command:
1. Power on the ESX or ESXi host.
2. Place the driver CD in the CD-ROM drive of the host where either the vSphere CLI package is installed or vMA is hosted.
3. Mount the driver CD.
4. Navigate to <cd mount point>/offline-bundle/ and locate the <offline-bundle>.zip file.
5. Run the vihostupdate command to install drivers using the offline bundle:
    vihostupdate --server <test machine address> --install --bundle <offline-bundle>.zip
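Before and after installing a bundle, you can list what is already on the host; the following is a hedged example (the server placeholder matches the command above).

    # List the bulletins/patches currently installed on the host:
    vihostupdate --server <test machine address> --query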
145. …the VMware ESX Server Using EMC CLARiiON Storage Systems Solutions Guide, Section 3.8.4, and the VMware ESX Server Using EMC Symmetrix Storage Systems Solutions Guide, Section 3.9.4. Both documents are available on http://support.EMC.com.

Virtual Machine File System

Version history

There are three versions of VMFS. Table 9 lists each VMFS version with its corresponding ESX Server versions and a summary of its properties and limitations.

Table 9: VMFS/ESX versions

VMFS2 (ESX Server 3.x limited, 4.x limited):
- A flat file system with no directory structure.
- Readable, but not writable, for ESX 3.x.

VMFS3 (ESX Server 3.x, 4.x, 5):
- A new file system, with a directory structure implemented.
- Not compatible with older versions of ESX Server.
- Where the VM configuration file is stored by default.
- Compatible with ESXi 5.

VMFS5 (ESX Server 5):
- Support for greater-than-2 TB disk size for RDMs in physical compatibility mode.
- Ability to reclaim physical storage space on thin-provisioned storage devices.
- Default block size is 1 MB. An upgraded VMFS5 volume will inherit the VMFS3 block size value.

Size limits

The maximum extent (file) size allowed by both the VMFS2 and VMFS3 file systems is 2048 GB (2 TB). VMFS5 supports greater-than-2 TB storage devices for each VMFS extent. Details about the storage
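To check which VMFS version (and block size) a given datastore is using, the file system can be queried from the host shell; a hedged example with a placeholder datastore name follows.

    # Print file system attributes (VMFS version, block size, capacity):
    vmkfstools -Ph /vmfs/volumes/<datastore_name>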
146. …the paths initially used by the VMware ESX Server hosts to access the LUNs. The MRU policy will choose the first available path that avoids performing a trespass. The trespass back to the original path will need to be performed manually, using Unisphere/Navisphere Manager, or by manually initiating failover/failback to a particular path using the Enable/Disable options in the VI Client.

Native multipathing in VMware ESX/ESXi 4.x and ESXi 5.x

VMware ESX Server provides support for native multipathing software as a feature of the OS. Native multipathing software is supported with VMAX, Symmetrix, VNX series, and CLARiiON systems, and is meant for failover only. Dynamic load balancing is supported in ESX/ESXi 4.x and ESXi 5.x by PowerPath/VE.

VMAX or Symmetrix policy

With VMAX or Symmetrix, the Fixed path policy is used; it allows you to manually configure the single preferred path to be used. VMware ESX/ESXi 4.x and ESXi 5.x has multipathing and failover functionality similar to VMware ESX Server 3.x. Additionally, VMware ESX/ESXi 4.x and ESXi 5.x can support PowerPath and the Round Robin policy.

VNX series and CLARiiON policy

Support for Asymmetric Logical Unit Access (ALUA), sometimes called Asymmetric Active/Active (AAA), is new in VMware ESX Server v4.0.

Note: For more information, refer to "ALUA failover mode behavi
147. …the switch maintains a table of all logged-in devices, so an N_Port may adjust automatically to changes in the fabric address by keying off the WWPN.

The highest level of login that occurs is the process login. This is used to establish connectivity between the upper-level protocols on the nodes. An example is the login process that occurs at the SCSI-FCP level between the HBA and the VMAX or Symmetrix system.

The VMAX or Symmetrix director extracts the SCSI Command Descriptor Blocks (CDBs) from the frames received through the Fibre Channel link. Standard SCSI-3 protocol is used to determine the addressing mode and to address specific devices.

The VMAX or Symmetrix supports three addressing methods, based on a single-layer hierarchy as defined by the SCSI-3 Controller Commands (SCC):
- Peripheral Device Addressing
- Logical Unit Addressing
- Volume Set Addressing

All three methods use the first two bytes (0 and 1) of the 8-byte LUN addressing structure; the remaining six bytes are set to 0s.

For Logical Unit and Volume Set addressing, the VMAX or Symmetrix port identifies itself as an Array Controller in response to a host's Inquiry command sent to LUN 00. This identification is done by returning the byte 0x0C in the Peripheral Device Type field of the data returned for the Inquiry. If the VMAX or Symmetrix system retu
148. …host system powered off, install the CNA and make the necessary cable connections, as instructed in the accompanying CNA documentation or on the QLogic website at www.qlogic.com. The card installs into a single PCI bus slot.
4. Reapply power and allow the system to boot normally.

Updating the QLogic CNA boot BIOS

To update the QLogic CNA boot BIOS:
1. If you do not already have the QLogic SANsurfer FC HBA CLI package installed on the ESX Server, download the latest available version compatible with the ESX Server version that you are running from www.qlogic.com.

   Note: If the ESX Server has an /opt/QLogic_Corporation/SANsurferCLI directory, the QLogic SANsurfer FC HBA CLI package is already installed.

   To install the SANsurfer FC HBA CLI on the ESX Server:
   a. Log in as root at the ESX Server console, or in a secure shell session to the ESX Server.
   b. Copy the scli-<kit version>.rpm.gz file to a directory on the install machine.
   c. Using the cd command, change to the directory where you copied the .rpm.gz file.
   d. Unzip the file. Type: gunzip scli-<kit version>.rpm.gz
   e. Install the rpm. Type: rpm -i scli-<kit version>.rpm
   The rpm contents are installed in /opt/QLogic_Corporation/SANsurferCLI. The scli utility is also located in this directory.
2. Copy the appropriate BIOS file from the CD-ROM or the QLogic website to the ESX Server. The BIOS file
149. …physical storage, but is assigned to the server in a capacity-on-demand fashion from a shared storage pool. The storage administrator monitors and replenishes each storage pool, not each LUN.

[Figure 40: Virtual (thin) provisioning. Diagram of host applications drawing reported capacity from a shared pool, with physical capacity allocated on demand.]

With traditional provisioning, the storage administrator is responsible for monitoring storage usage and working with business people who forecast the amount of storage they will require. With Virtual Provisioning, by contrast, the storage administrator can configure more storage to a server than is actually allocated on the storage system. The administrator then monitors the actual usage of physical storage and adds disk drives to the pool as required.

Monitoring, adding, and deleting pool capacity

Figure 41 shows the following types of pool capacity:
- Usable pool capacity is the total physical capacity available to all LUNs in the pool.
- Allocated capacity is the total physical capacity currently assigned to all thin LUNs.
- Subscribed capacity is the total host-reported capacity supported by the pool.

The value for Full Threshold (Allocated Capacity / Usable Pool Capacity) is used to trigger an alert. The initial alert value is user-settable, and increasingly serious alerts will be sent on each successive 5% increment.
150. …ies). When you select FCoE Utility from the Utilities menu, SANsurfer FC HBA CLI prompts you to select an FCoE Engine, and then provides the following menu for that engine:

    FCoE Utilities Menu
    FCoE Engine: QLE8042 (Desc: QLE8042 Mercury CNA)
    1. Update Firmware
    2. Stats
    3. Logs
    4. Return to Previous Menu

3. To update the FCoE Engine firmware, select option 1 from the Utilities menu:
    10 (Utilities) > n (FCoE Utility) > 1
4. To update the firmware using the command-line option, enter the following command:
    scli -fcoe (<hba instance> | <hba wwpn>) -loadfw <firmware file>

Installing the QLogic CNA driver

Using the QLogic adapter with the VMware ESX Server requires CNA driver software. Refer to the latest EMC Support Matrix for supported driver versions.

To install the QLogic CNA driver on the VMware ESX Server, download the driver RPM package from http://www.qlogic.com. Following the installation of the proper driver for the FCoE adapter, the Fibre Channel interface will function identically to that of a standard QLogic Fibre Channel HBA, as the QLE8042 simply encapsulates Fibre Channel traffic within ethernet frames.

If the driver package is an .iso file, burn the .iso file onto a CD. You can use the driver CD in several ways, each discussed further in this section:
- Install drivers for devices as part of a new ESX installation (for ESX only) on page 8
151. The last two built-in thresholds will upgrade the alert severity to Critical.

[Figure 41: Pool Full Threshold. Diagram showing over-subscribed Subscribed Capacity above the Usable Pool Capacity, with Allocated Capacity growing toward the user-set Full Threshold.]

Adding drives to the pool non-disruptively increases the available usable pool capacity for all attached LUNs. Allocated capacity is reclaimed by the pool when LUNs are deleted; there is no need to defragment. Reclaimed space is filled with zeroes and reused.

Reserved capacity, as shown in Figure 42 on page 190, can be set at the LUN level if desired. Reserved capacity can be guaranteed to one or more thin LUNs, but is not allocated; it is reserved space that is not available for use by other LUNs in the thin pool. Reserved capacity can be reclaimed non-disruptively.

[Figure 42: Reserved capacity. Diagram showing consumed capacity (allocated capacity plus reserved capacity) against the available shared capacity of the pool.]

Virtual LUN architecture and features

Specialized software known as the Mapping Service manages the placement and use of the physical storage used by Virtual LUNs (see Figure 43). Data is written to 8 KB chunks (extents) and is densely packed. This makes configuring Virtual LUNs easy, because the underlying software makes all the decisions abo
152. …ings. For any other configuration, the VCM/ACLX bit can be either set or not set.

Note: When setting up a VPLEX-attach version 4.x or earlier with a VNX series or CLARiiON system, the initiator type must be set to CLARiiON Open and the Failover Mode set to 1; ALUA is not supported. When setting up a VPLEX-attach version 5.0 or later with a VNX series or CLARiiON system, the initiator type can be set to CLARiiON Open and the Failover Mode set to 1, or to Failover Mode 4, since ALUA is supported.

If you are using LUN masking, you will need to set the VCM/ACLX flag. If sharing array directors with hosts that require conflicting flag settings, VCM/ACLX must be used.

Note: The FA bit settings listed in Table 7 are for connectivity of VPLEX to EMC VMAX or Symmetrix arrays only. For host-to-EMC VMAX or Symmetrix FA bit settings, refer to the EMC Support Matrix.

Supported storage arrays

The EMC VPLEX Simple Support Matrix lists the storage arrays that have been qualified for use with VPLEX. Refer to the VPLEX Procedure Generator, available on http://support.EMC.com, to verify supported storage arrays.

VPLEX automatically discovers storage arrays that are connected to the back-end ports. All arrays connected to each director in the cluster are listed in the storage array view.

Initiator settings on back-end arrays

Refer
153. …itiator groups: all initiators in such a group share the same view on the exported storage; they can see the same volumes, by the same LUN numbers, on the same WWNs.

An initiator must be registered with VPLEX to see any exported storage. The initiator must also be able to communicate with the front-end ports over a Fibre Channel switch fabric; direct connect is not supported. Registering an initiator attaches a meaningful name to the WWN, typically the server's DNS name. This allows you to audit the storage view settings to determine which virtual volumes a specific server can access.

This section provides information on how to export virtual volumes.

[Screen capture: VPLEX management console navigation tree for cluster-1 and cluster-2 (Distributed Storage, Initiators, Ports, Virtual Volumes, Devices, Extents, Storage Volumes, Storage Arrays, Hosts, Storage Views), with the Create Storage View wizard open at the Select Ports step.]

Exporting virtual volumes consists of the following tasks:
1. Creating a storage view, as shown in Figure 34
154. …ld be carried out on each ESX host connected to the XtremIO cluster, through either the vSphere Client or the ESX command line.

To adjust the ESX host parameters for XtremIO storage, use one of the following procedures, explained further in this section:
- "Using the vSphere client GUI" on page 145
- "Using the ESX host command line (for vSphere 4.1, 5.0, and 5.1)" on page 146
- "Using the ESX host command line (for vSphere 5.5)" on page 146

Using the vSphere client GUI
1. Launch the vSphere client and navigate to Inventory > Hosts and Clusters.
2. In the Inventory section, locate the ESX host.
3. Click the ESX host icon in the left pane and click the Configuration tab.
4. From the Software section, click Advanced Settings.
5. Click the Disk section in the Advanced Settings pane.
6. Navigate to the Disk.SchedNumReqOutstanding parameter and change it to its maximum value (256).
   Note: Do not apply this step on a vSphere 5.5 host, where the parameter is set on a specific volume using the ESX command line.
7. Navigate to the Disk.SchedQuantum parameter and change it to its maximum value (64).
8. Navigate to the Disk.DiskMaxIOSize parameter and change it to 4096.
9. Click OK to apply the changes.

Using the ESX host command line (for vSphere 4.1, 5.0, and 5.1)
1. Open an SSH session to the host as root.
2. Run the following commands to set the SchedQuantu
155. …le class Filter.
9. Use the following command to verify that the correct devices are claimed:
    esxcli vaai device list
10. Use the following command to view the device properties:
    esxcli corestorage device list

Enabling VAAI

VAAI in ESX can be disabled. Therefore, before using the XtremIO Storage features, you should confirm that the VAAI features are enabled on the ESX host.

To confirm that VAAI is enabled on the ESX server:
1. Launch the vSphere Client and navigate to Inventory > Hosts and Clusters.
2. In the Inventory section, locate the ESX host.
3. Click the ESX host icon in the left pane and click the Configuration tab.
4. In the Software section, click Advanced Settings.
5. Select the DataMover section.
6. Verify that the following parameters are enabled (that is, both are set to 1):
   - DataMover.HardwareAcceleratedMove
   - DataMover.HardwareAcceleratedInit
7. Select the VMFS3 section.
8. Verify that the VMFS3.HardwareAcceleratedLocking parameter is enabled (that is, set to 1).

[Screen capture: Advanced Settings, VMFS3 section, showing VMFS3.HardwareAcceleratedLocking ("Enable hardware accelerated VMFS locking, requires compliant hardware"; Min 0, Max 1) set to 1, alongside other VMFS3 advanced options such as the maximum VMFS heap size.]
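The same checks can be made from the host shell; the following is a hedged sketch (query with -g, set with -s) using the stock advanced-option paths named in the steps above.

    # Query the current values (1 = enabled):
    esxcfg-advcfg -g /DataMover/HardwareAcceleratedMove
    esxcfg-advcfg -g /DataMover/HardwareAcceleratedInit
    esxcfg-advcfg -g /VMFS3/HardwareAcceleratedLocking

    # Re-enable any primitive that reports 0:
    esxcfg-advcfg -s 1 /DataMover/HardwareAcceleratedMove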
156. …lowing steps:
1. Open the vSphere CLI (recommended) or the service console.
2. Run the following command:
   - For ESXi 5.x:
     esxcli storage nmp satp set --default-psp VMW_PSP_RR --satp VMW_SATP_INV
   - For ESXi/ESX 4.x:
     esxcli nmp satp setdefaultpsp --psp VMW_PSP_RR --satp VMW_SATP_INV

For optimal performance with VPLEX, set the multipath Round Robin policy I/O operation limit to a value of 1000. Currently, the VMware default value for this setting is 1000.
1. Open the vSphere CLI (recommended) or the service console.
2. Run the following commands:
   - For ESXi 5.x:
     To check the I/O operations limit:
       esxcli storage nmp psp roundrobin deviceconfig get --device <device_NAA>
     To set the I/O operations limit:
       esxcli storage nmp psp roundrobin deviceconfig set --device <device_NAA> --iops 1000 --type iops
   - For ESXi/ESX 4.x:
     To check the I/O operations limit:
       esxcli nmp roundrobin getconfig --device <device_NAA>
     To set the I/O operations limit:
       esxcli nmp roundrobin setconfig --device <device_NAA> --iops 1000 --type iops

Additional information

For information on recommended settings per VMware and VPLEX version, refer to the VMware Compatibility Guide for EMC VPLEX at:
http://www.vmware.com/resources/compatibility/detail.php?deviceCategory=san&productid=12738&keyword=vplex&isSVA=1&page=1&display_interval=10&sortColumn=Partner&sortOrder=Asc
157. …m, SchedNumReqOutstanding, and DiskMaxIOSize parameters, respectively:

    esxcfg-advcfg -s 64 /Disk/SchedQuantum
    esxcfg-advcfg -s 256 /Disk/SchedNumReqOutstanding
    esxcfg-advcfg -s 4096 /Disk/DiskMaxIOSize

Using the ESX host command line (for vSphere 5.5)
1. Open an SSH session to the host as root.
2. Run the following commands to set the SchedQuantum and DiskMaxIOSize parameters, respectively:
    esxcfg-advcfg -s 64 /Disk/SchedQuantum
    esxcfg-advcfg -s 4096 /Disk/DiskMaxIOSize
3. Run the following command to obtain the NAAs for the XtremIO LUNs presented to the ESX host, and locate the NAA of the XtremIO volume:
    esxcli storage nmp path list | grep XtremIO -B1
4. Run the following command to set SchedNumReqOutstanding for the device to its maximum value (256):
    esxcli storage core device set -d naa.xxx -O 256

vCenter server parameter settings

The maximum number of concurrent full cloning operations should be adjusted based on the XtremIO cluster size. The vCenter Server parameter config.vpxd.ResourceManager.maxCostPerHost determines the maximum number of concurrent full clone operations allowed (the default value is 8). Adjusting the parameter should be based on the XtremIO cluster size, as follows:
- One X-Brick: 8 concurrent full clone operations
- Two X-Bricks: 16 concurrent full clone operations
- Four X-Bricks: 32 concurrent full clone
158. [Screen capture, continued: path-list columns showing each path's runtime name (for example, vmhba2:C0:T1:L4), the Fibre Channel adapter WWNs, and the target WWNs.]

Configuring vSphere NMP Round Robin on an XtremIO volume in an ESX host, using the ESX command line

To configure vSphere NMP Round Robin on an XtremIO volume in an ESX host using the ESX command line:
1. Open an SSH session to the host as root.
2. Run the following command to obtain the NAA of the XtremIO LUNs presented to the ESX host:
    esxcli storage nmp path list | grep XtremIO -B1
3. Use one of the following commands to modify the path selection policy on the XtremIO volume to Round Robin:
   - vSphere 5.x:
     esxcli storage nmp device set --device <naa_id> --psp VMW_PSP_RR
   - vSphere 4.1:
     esxcli nmp device setpolicy --device <naa_id> --psp VMW_PSP_RR
   The following is an example for vSphere 5.1:
     esxcli storage nmp device set --device naa.514f0c5e3ca0000e --psp VMW_PSP_RR
4. Use the following command to set the vSphere NMP Round Robin path-switching frequency for an XtremIO volume to one I/O packet:
   - vSphere 5.x:
     esxcli storage nmp psp roundrobin deviceconfig set -d naa.<devicename> --iops 1 --type iops
   - vSphere 4.1:
     esxcli nmp roundrobin setconfig --device naa.<devicename> --iops 1

Configuring vSphere NMP
159. …mmetrix device that is to contain the Master Boot Record (MBR) for the host must have a logical unit number (LUN) in the range 0 through 127, and must have a LUN number different from any other device visible to the host. Note that this includes the case of the VMAX or Symmetrix Volume Logix database device. The Volume Logix device is write-protected, so the installer will fail to write the MBR to this device. To force the installer to avoid an attempt to write to this device, EMC recommends masking the LUN:
- The administrative host must first initialize the Volume Logix database. The LUN may then be masked by modifying the active configuration file to enable the FBA flag2 to restrict access to the Volume Logix database.
- An alternative to masking the VCM DB (Volume Logix database) is to map the Volume Logix database so that it is the highest LUN presented to the host. However, the LUN number should not be later than 254 (FE) if it is to be used by Solutions Enabler or EMC Ionix ControlCenter running on a Microsoft host.

Note: The EMC-recommended method is to use LUN masking.

When attaching the VMware host to the VMAX or Symmetrix storage array, use the adapter in the lowest-numbered PCI slot in the server. To keep the configuration and installation simple, it is recommended that only that HBA be cabled to the array.

Ensure that the boot BIOS or firmware has been applied to your HBA, and that the boot BIOS has been enabled on the HBA, to
160. …models, repeat Step 6a for each adapter port WWN reported in Step 4, to ensure that all adapter instances within the host have been updated.

Using OneCommand Manager

To update the Emulex CNA firmware and boot BIOS for OneConnect adapters, use OneCommand Manager.

If you do not already have the Emulex OneCommand Manager Application kit installed on the ESX Server, download the latest available Emulex OneCommand Manager Application kit that is compatible with the ESX Server version you are running from www.emulex.com. To install the OneCommand Manager application CLI on a new system, install the specific .rpm file for the driver for your VMware version.

Prerequisites: OneCommand Manager requires the LPFC driver, version 7.4 or later, to be loaded. Installing the driver is described in the next section.

Note the following:
- In-band management (FC-based management) is not supported on VMware ESX 3.5 servers. Use out-of-band management (TCP/IP-based management) to remotely manage adapters on ESX 3.5 servers.
- For VMware ESX 3.5 servers, the firewall on the ESX Server must be opened to manage systems remotely using TCP/IP-based management. To enable TCP port 23333, run the following commands:
    esxcfg-firewall --openPort 23333,tcp,in,OneCommand
    esxcfg-firewall --openPort 23333,tcp,out,OneCommand
  To verify that the correct port is open, run the following command:
    esxcfg-firewall -q
  Th
161. …n VMware ESX/ESXi Server.

Multipathing in VMware ESX Server with VMAX or Symmetrix

The Fixed path policy is used with VMAX or Symmetrix, and allows you to manually configure the path to be used. If the preferred path fails, the Fixed path policy will transfer I/O to a live path until the preferred path is restored; once the preferred path is restored, all I/O will be transferred back to it. (A command-line sketch follows this section.)

Note: To prevent the potential loss of access to disks, different VMAX or Symmetrix hypervolumes should not be presented to the same LUN address.

When a cable is pulled, I/O from the VMware ESX Server and the virtual machines will essentially be frozen for approximately a minute. The Fibre Channel HBA driver within the VMware ESX Server must identify that the link is down before a failover will occur. During the time that the link is down, the virtual machines will not be responsive, but they should resume their normal behavior once the failover has occurred. In the event that all paths to the storage have failed, I/O errors will be reported by the virtual machines, and access to the vmfs directory will also fail.

The VMware ESX host automatically sets the multipathing policy according to the array it detects. The default is Active/Active if no array can be detected by the ESX host. If the VMware ESX Server is using the native failover functionality within the kernel, the default setting of the failo
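As a concrete companion to the Fixed-policy discussion above, the following is a hedged ESX 4.x command-line sketch of pinning a preferred path; the device NAA and path runtime name are placeholders.

    # Set the Fixed path selection policy on a VMAX/Symmetrix device:
    esxcli nmp device setpolicy --device naa.<device_id> --psp VMW_PSP_FIXED

    # Nominate the preferred path for that device:
    esxcli nmp fixed setpreferred --device naa.<device_id> --path vmhba2:C0:T0:L0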
162. …n page 138.
- Add virtual volumes to the storage view ("VPLEX ports" on page 139; "Initiators" on page 139).

Viewing the World Wide Name for an HBA port

Each HBA port has a World Wide Name (WWN) associated with it. WWNs are unique identifiers that the VPLEX engine uses to identify its ports and host initiators. You can use one of the following ways to view WWNs:
- The switch's name server output
- EMC ControlCenter, or the Solutions Enabler syminq command (VMAX or Symmetrix users)

VPLEX ports

The virtual volumes created on a device are not visible to hosts (initiators) until you export them. Virtual volumes are exported to a host through front-end ports on the VPLEX directors and HBA ports on the host server. For failover purposes, two or more front-end VPLEX ports can be used to export the same volumes. Typically, to provide maximum redundancy, a storage view will have two VPLEX ports assigned to it, preferably from two different VPLEX directors. When volumes are added to a view, they are exported on all VPLEX ports in the view, using the same LUN numbers.

Initiators

For an initiator to see the virtual volumes in a storage view, it must be registered and included in the storage view's registered initiator list. The initiator must also be able to communicate with the front-end ports over Fibre Channel connections, through a fabric. A volume is exported to an initiator as a LUN on one or more front-e
163. …nd port WWNs. Typically, initiators are grouped so that all initiators in a group share the same view of the exported storage; they can see the same volumes, by the same LUN numbers, on the same WWNs.

Host types: Ensure that you specify the correct host type in the Host Type column, as this attribute cannot be changed in the Initiator Properties dialog box once the registration is complete. To change the host type after registration, you must unregister the initiator and then register it again using the correct host type.

VPLEX supports the host types listed in Table 8. When initiators are registered, you can set their type, as also indicated in Table 8.

Table 8: Supported hosts and initiator types

Host: Windows, MSCS, Linux / Initiator type: default
Host: AIX / Initiator type: Aix
Host: HP-UX / Initiator type: Hp-ux
Host: Solaris, VCS / Initiator type: Sun-vcs

Configuring VMware ESX hosts to recognize VPLEX volumes

The VMware ESX in-box driver will automatically recognize the volumes after LUN masking is done properly.

EMC XtremIO

Best practices for zoning and subnetting

To optimize performance, hosts accessing the EMC XtremIO Storage Array may require configuration not only of the XtremIO cluster, but also of the host itself. This section describes the necessary procedures for optimally configuring the host for Xt
164. …ne.
    New symdevs: 18FE:18FF
    Terminating the configuration change session ... Done

The configuration change session has successfully completed.

2. Verify that the TDEVs have been created and are unbound:

    symdev list -sid 098 -TDEV -unbound

    Symmetrix ID: 000194900098

    Sym    Physical       SA:P   DA:IT   Config   Attribute   Sts   Cap(MB)
    18FE   Not Visible    ???    NA:NA   TDEV     N/Grp'd     NR    10313
    18FF   Not Visible    ???    NA:NA   TDEV     N/Grp'd     NR    10313

A newly created TDEV will be in a Not Ready (NR) state until it is bound to a thin pool. The back-end directors (DA:IT) show NA:NA because no back-end devices have been allocated to the TDEV.

Step 2: Create data devices (TDAT)

1. Four data devices, each 4 GB in size, are created. One additional parameter, attribute=datadev, is required for making TDATs. The available protection types for TDATs are 2-Way Mirror, RAID-5, and RAID-6.

    symconfigure -sid <sid> -cmd "create dev count=<numberOfTDAT>, attribute=datadev, size=<n> MB|GB|CYLS, emulation=FBA, config=<2-Way-Mir|RAID-5|RAID-6>;" commit -nop

    symconfigure -sid 098 -cmd "create dev count=4, attribute=datadev, size=4GB, emulation=FBA, config=2-way-mir;" commit -nop

    A Configuration Change operation is in progress. Please wait...

    Establishing a configuration change session ... Established
    Processing symmetrix 000194900098
    Performing Access checks ...................... Allowe
165. …nfigurations.

The utility will report "Download Complete" when the BIOS download has completed successfully.

4. If the installed adapter is a dual-ported model, the CEE (Menlo) firmware does not need to be downloaded to each adapter port; downloading it to either port results in both ports being updated with the latest BIOS. Otherwise, for multiple single-port adapter models, repeat Step 3 for each adapter port WWN reported in Step 2, to ensure that all adapter instances within the host have been updated.

CEE firmware can also be upgraded using OneCommand Manager.

Note: The following command is supported for LP21000-series adapters only. This command is not supported for OneConnect adapters:

    hbacmd CEEDownload <WWPN> <Filename>

where <WWPN> is the World Wide Port Name of the adapter, and <Filename> is the name of the CEE firmware file.

Installing the Emulex CNA driver

Using the Emulex adapter with the VMware ESX Server requires CNA driver software. Refer to the latest EMC Support Matrix for supported driver versions. This section explains how to:
- Install the Emulex CNA driver on the VMware ESX Server (page 77)
- Install drivers for devices as part of a new ESX installation, for ESX only (page 77)
- Update or add drivers on existing ESX installations using esxupdate, for ESX only (page 78)
- Update or add drivers on existing ESX and ESXi installations using vihostupdate, for both ESX and ESXi
166. …ng commands:
   - QLogic: esxcfg-module -s ql2xmaxqdepth=256 qla2xxx
   - Emulex: esxcfg-module -s lpfc0_lun_queue_depth=256 lpfc820

   Note: Use the appropriate module name, as received in the output of step 2 (for example, lpfc820 or qla2xxx).

4. Reboot the ESX host.
5. Connect to the ESX host shell as root.
6. Confirm that the queue-depth adjustments are applied, using the following command:
    esxcfg-module -g <driver>
   Note: When using the command, replace <driver> with the module name as received in the output of step 2 (for example, lpfc820 or qla2xxx).
   The following is an example of a host with a QLogic HBA and the queue depth set to 256:
    # esxcfg-module -g qla2xxx
    qla2xxx enabled = 1 options = 'ql2xmaxqdepth=256'

Configuring multipath software

Note: You can use EMC Virtual Storage Integrator (VSI) Path Management to configure path management across EMC platforms, including XtremIO. Refer to the EMC VSI Path Management Product Guide for further information on using this vSphere Client plug-in.

Configuring vSphere Native Multipathing

XtremIO supports the VMware vSphere Native Multipathing (NMP) technology. This section describes the procedure required for configuring native vSphere multipathing for XtremIO volumes. For best performance, it is recommended that you: Set the
167. nge the parameters of the HBAs use utilities like SAN Surfer for OLogic and HBAnywhere for Emulex Both offer the graphical interface and command line interface Note EMC supports fabric boot with VMware ESX Server v2 5 x and later using the OLogic or Emulex HBAs Virtual Machines are recommended to boot over the fabric iSCSI card EMC supports both the hardware OLogic iSCSI HBA and the software generic NIC iSCSI HBA in conjunction with EMC iSCSI storage arrays on the VMware platform Initiator setting and installation EN Installation of ESX ESXi Server Note This example uses QLogic HBAs The card names are slightly different for other brands All hardware iSCSI HBAs have 1 Gb s throughput rate Cards may have single ports or dual ports To check the parameters issue the following command cat proc scsi qlaxxx N where N is the sequential value of each HBA installed in the system beginning with the number after the last host adapter number entry in the file The parameters contain useful information of the initiator the major information including iSCSI Model Number Driver Version Firmware Version IP specification ION of initiators ION of targets The parameters can be changed through OLogic SAN Surfer management utility 9 9 9 o Converged Network Adapter CNA EMC is now supporting Fibre Channel over Ethernet FCoE Converged Network Adapter CNA offerings with VMware E
168. nt use of pool resources is achieved by using a minimal number of pools so that the capacity can be shared Configuring replicating managing and monitoring thin devices and thin pools involves the same tools and the same or similar functions as those used to manage traditional arrays Use Symmetrix Management Console or Solutions Enabler to configure and manage Virtual Provisioning Virtual Provisioning with VMAX or Symmetrix 209 Virtual Provisioning Virtual Provisioning on EMC VMAX or Symmetrix Note The VMAX3 Family includes VMAX 400K VMAX 200K and VMAX 100K The VMAX Family includes VMAX 40K VMAX 20K VMAX VMAX 10K Systems with SN xxx987xxxx VMAX 10K Systems with SN xxx959xxxx and VMAXe This section explains Virtual Provisioning on the following VMAX 400K with HYPERMAX 5977 250 189 or later VMAX 200K with HYPERMAX 5977 250 189 or later VMAX 100K with HYPERMAX 5977 250 189 or later VMAX 40K with Enginuity 5876 with ESX 4 1 or later VMAX 20K with Enginuity 5876 with ESX 4 1 or later VMAX with Enginuity 5874 or later with ESX 4 1 or later VMAX 10K Systems with SN xxx987xxxx with Enginuity 5876 with ESX 4 1 or later e VMAX 10K Systems with SN xxx959xxxx with Enginuity 5876 with ESX 4 1 or later e VMAXe with Enginuity 5875 or later with ESX 4 1 or later 9 9 9 Note For supported Solutions Enabler and corresponding Enginuity versions refer to the appropriate EMC Simple
169. ntel Ethernet Server Adapter with the VMware ESX server including Installing the Intel Ethernet Server Adapter on page 92 Configuring FCoE for Intel Ethernet Server Adapter on VMware ESXi 5 on page 92 Installing the Intel Ethernet Server Adapter To install the adapter complete the following steps 1 Install VMware ESXi 5 or later on your server For instructions on installing the operating system refer to the relevant documentation at http www VMware com support pubs 2 Power off the host system 3 With the host system powered off install the Intel Server Adapter and make the necessary cable connections as discussed on the Intel website at http www Intel com The card installs into a single PCI bus slot 4 Reapply power and allow the system to boot normally Configuring FCoE for Intel Ethernet Server Adapter on VMware ESXi 5 For instructions on configuring FCoE for the VMware ESX server refer to Configuring FCoE for VMware ESX at http www intel com content dam doc guide configuring FCoE with ESX v1 0 pdf EMC Host Connectivity Guide for VMware ESX Server 4 Array Information and Configuration This chapter provides information about VMAX Symmetrix VNX series and CLARiiON systems support information specific to the VMware ESX Server environment VMAX and Symmetrix array configurations 94 ESX host in the VNX series and CLARiiON environment 106 EMC VPLEX ces B
170. o create the distributed device detach during communication problems allowing I O to resume at the cluster For more information on creating and applying rule sets refer to the EMC VPLEX CLI Guide available on http support EMC com There are cases in which all I O must be suspended resulting in a data unavailability VPLEX with GeoSynchrony 5 0 is introduces the new functionality of VPLEX Witness EMC VPLEX 129 Array Information and Configuration 130 Virtual volumes System volumes Metadata volumes When a VPLEX Metro or a VPLEX Geo configuration is augmented by VPLEX Witness the resulting configuration provides the following features High availability for applications in a VPLEX Metro configuration no single points of storage failure Fully automatic failure handling in a VPLEX Metro configuration Significantly improved failure handling in a VPLEX Geo configuration Better resource utilization IMPORTANT VMWare does not currently support Geo Always consult the EMC Support Matrix ESM for the most up to date support information For information on VPLEX Witness refer to the EMC VPLEX with GeoSynchrony 5 0 Product Guide located on http support EMC com Virtual volumes are created on devices or distributed devices and presented to a host via a storage view Virtual volumes can be created only on top level devices and always use the full capacity of the device VPLEX st
…do not communicate with each other. The VMware cp copy command is thin-friendly. DRS, VMotion, and cold VM migration are unaffected. VM clones and templates are problematic: VM cloning fully allocates all blocks, and there is currently no workaround for this. VMware templates also allocate all blocks; the workaround is to shrink VMDKs before creating a template and to use the Compact option.

ESX 4.x

In ESX 4.x, a virtually provisioned VNX series and CLARiiON LUN can be configured as zeroedthick or thin. When using the thin virtual disk format, the VMFS datastore is aware of the space consumed by the virtual machine. In addition, when using vCenter features such as Cloning, Storage VMotion, Cold Migration, and Deploying a template with v4.x, the zeroedthick or thin format remains intact on the destination datastore. In other words, the consumed capacity of the source virtual disk is preserved on the destination virtual disk and is not fully allocated. A command-line sketch of creating a thin virtual disk follows.

Virtual Provisioning with VNX series and CLARiiON

This section provides examples of Virtual Provisioning implementation with VNX series and CLARiiON on VMware ESX 3.x and 4.x. The following information is included:
- Virtual Provisioning on VMware ESX v3.x on page 195
- Virtual Provisioning on VMware ESX v4.x
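The thin format can also be selected when creating a virtual disk from the command line with vmkfstools. A minimal sketch, assuming a hypothetical datastore named datastore1 and a virtual machine directory vm1:

    # Create a 10 GB thin-provisioned virtual disk from the service console / ESXi shell
    vmkfstools -c 10g -d thin /vmfs/volumes/datastore1/vm1/vm1.vmdk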
…Least blocks, Round Robin, Stream I/O, VMAX or Symmetrix optimization, and Adaptive.

EMC PowerPath/VE provides dynamic multipath load balancing and automatic failover in the event of hardware failure. PowerPath/VE works with VMware ESX/ESXi 4.x and ESXi 5.x as a multipathing plug-in that enhances path-management capabilities. Dynamic load balancing is supported: PowerPath/VE distributes I/O requests to devices across all available paths, improving performance and reducing management time and downtime by eliminating the need to configure paths statically across devices. PowerPath checks for live and dead paths periodically; if live paths are found to have failed, they are marked as dead. This allows PowerPath/VE to check path status and detect problems quickly. When a failed path is repaired, the path is automatically restored.

For more information about PowerPath/VE, refer to the following documents, available on http://support.EMC.com:
- EMC PowerPath/VE Version 5.4 for VMware vSphere Installation and Administration Guide
- EMC PowerPath/VE Version 5.4 for VMware vSphere Licensing Guide
- EMC PowerPath/VE Version 5.7 for VMware vSphere Installation and Administration Guide

Major components

The PowerPath/VE configuration has two major components:
- The PowerPath driver (emcp
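Because ESXi has no general-purpose console for local tools, PowerPath/VE is typically administered with the remote rpowermt CLI from a management station. A minimal sketch, assuming rpowermt is installed and that esx01.example.com is a placeholder host name:

    # Display PowerPath/VE devices and path state for a remote ESXi host
    rpowermt display dev=all host=esx01.example.com
    # Verify PowerPath/VE license registration on that host
    rpowermt check_registration host=esx01.example.com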
…log for that array will show the initiator WWNs for each host logged in to the array. An example of the Connectivity Status dialog box can be seen in Figure 7 on page 46.

Figure 7 Connectivity Status dialog box (example; the initiator WWNs and host names shown in the dialog are samples, with Show Storage System Initiators, Deregister, Register, Info, Group Edit, and New options available)
… 124
EMC XtremIO ... 141

VMAX and Symmetrix array configurations

This section provides the following VMAX and Symmetrix array support information specific to the VMware ESX server:
- Required storage system configuration on page 94
- Addressing VMAX or Symmetrix devices on page 94
- EMC Symmetrix Management Console (SMC) on page 99
- Recommendations for optimal performance on page 103

Required storage system configuration

VMAX and Symmetrix system configuration is performed by an EMC Customer Engineer (CE) through the VMAX or Symmetrix Service Processor. The CE will configure the VMAX or Symmetrix storage array settings for each Fibre Channel port. The procedures in this document assume that any switches and storage systems to be used in this configuration have been installed, and that the VMAX or Symmetrix Fibre Channel Adapter ports have been connected to the switch ports.

IMPORTANT: EMC highly recommends using Volume Logix to mask LUNs.

To verify that the VMware ESX Server host can see all of the VMAX or Symmetrix target devices, configure the host as described in the remainder of this section.

Addressing VMAX or Symmetrix devices

This section discusses the following:
- Fabric addressing (next)
- SCSI-3 FCP addressing
…from passive paths. ALUA mode allows I/Os issued to available non-optimal paths to complete, and boots the ESX host successfully. Without ALUA failover mode, a manual LUN trespass was required for such a situation. For more information about Boot from SAN, refer to Appendix E, Boot from SAN.

ESX host registration on a VNX series and CLARiiON system

To connect an ESX host to a VNX series or CLARiiON system with ALUA failover mode, you must set the failover mode to 4, using either Unisphere/Navisphere Manager or the CLI. This can be done on a per-initiator basis.

As shown in Figure 21, choose Failover Mode 4 (Active/Active mode, ALUA) from the pull-down menu. All initiators from the same host should be registered under the same failover mode.

Figure 21 Register Initiator Record window (the Failover Mode menu also lists Active/Passive mode (PNR), Active/Passive mode (PAR), and Legacy failover modes)

Adding the VMware ESX host
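The failover mode can also be set from the CLI with Navisphere/Unisphere Secure CLI. A minimal sketch, assuming naviseccli is installed on a management host; the SP address and host name are placeholders, and the exact option set should be verified against your Navisphere CLI reference before use:

    # Update a registered host's initiators to ALUA failover mode 4 (hypothetical values)
    naviseccli -h 10.0.0.10 storagegroup -sethost -host esx01 -failovermode 4 -arraycommpath 1 -o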
…recommendations with those of other platforms before applying them.

When using XtremIO storage with VMware vSphere, it is recommended to set the following parameters to their maximum values:
- Disk.SchedNumReqOutstanding: Determines the maximum number of active storage commands (I/Os) allowed at any given time at the VMkernel. The maximum value is 256.
- Disk.SchedQuantum: Determines the maximum number of consecutive sequential I/Os allowed from one VM before switching to another VM (unless this is the only VM on the LUN). The maximum value is 64.

In addition, the following parameter setting is recommended:
- Disk.DiskMaxIOSize: Determines the maximum I/O request size passed to storage devices. With XtremIO, it is recommended to change it from 32767 (the default setting) to 4096.

Note: For details on adjusting the maximum I/O block size in ESX, refer to VMware KB article 1003469 on the VMware website at http://kb.vmware.com/selfservice/microsites/search.do?cmd=displayKC&docType=kc&docTypeID=DT_KB_1_1&externalId=1003469.

Note: When using vSphere 5.5, the Disk.SchedNumReqOutstanding parameter can be set on a specific volume rather than on all volumes presented to the host. Therefore, it should be set only after XtremIO volumes are presented to the ESX host, using the ESX command line. These setting adjustments shou…
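On ESXi 5.x, these values can be applied from the command line. A minimal sketch; the naa device identifier is a placeholder for an actual XtremIO volume:

    # Set VMkernel-wide advanced parameters
    esxcli system settings advanced set -o /Disk/SchedQuantum -i 64
    esxcli system settings advanced set -o /Disk/DiskMaxIOSize -i 4096

    # vSphere 5.5 only: set the outstanding-request limit per XtremIO device
    esxcli storage core device set -d naa.xxxxxxxxxxxxxxxx -O 256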
…Pool_1 is being used and is fully subscribed.

[The Unisphere Pools table shows, for each pool, the state, RAID type, drive type, user capacity, free capacity, allocated capacity, and percent subscribed; in this example, Thin_Pool_1 (RAID1/0, SATA II) reports 916.056 GB user capacity and 0.000 GB free capacity.]

3. Right-click the Storage Pool and check the properties in the General tab. This test Storage Pool is 100 percent full on physical capacity:
   - Physical Capacity: Percent Full 100; Consumed Capacity 916.056 GB; User Capacity 916.056 GB
   - Virtual Capacity: Percent Subscribed 114.192; Total Subscribed Capacity 1046.064 GB; Oversubscribed By 130.008 GB

4. Go to the VM console and examine the disk usage of the thin LUN (sdb1). In this case, 93 percent is used, which is equal to its physical capacity:

   [root@VP-rhel5-lsi ~]# df
   Filesystem            1K-blocks       Used  Available Use% Mounted on
   /dev/mapper/VolGroup00-LogVol00
                           6030784    4098716    1620772  72% /
   /dev/sda1                101086      14691      81176  16% /boot
   tmpfs                    192208          0     192208   0% /dev/shm
   /dev/sdb1            1056888680  923972624   79229240  93% /mnt/test

When the thin LUN is full, the file system on it becomes read-only…
…appropriate directory (/usr/sbin/hbanyware):
   - The firmware file for the Emulex LP21000 CNA is ad100a5.all.
   - The firmware file for the Emulex LP21002 CNA is af100a5.all.
   - The BIOS file for the Emulex LP2100x CNA is AB202A2.PRG.

4. Execute the command /usr/sbin/hbanyware/hbacmd ListHBAs and record the Port WWN information for all adapters within the host.

5. To update the Emulex CNA firmware:
   a. Execute the command /usr/sbin/hbanyware/hbacmd Download <WWPN> <firmwarefile>, where WWPN is the first Port WWN recorded in Step 4 and firmwarefile is the firmware file name listed in Step 3. The utility reports "Download Complete" when the firmware download has completed successfully.
   b. Repeat Step 5a for each adapter Port WWN reported in Step 4 to ensure that all adapter instances within the host have been updated.

6. To update the Emulex CNA Boot BIOS:
   a. Execute the command /usr/sbin/hbanyware/hbacmd Download <WWPN> <BIOSfile>, where WWPN is the first Port WWN recorded in Step 4 and BIOSfile is the BIOS file name listed in Step 3. The utility reports "Download Complete" when the BIOS download has completed successfully.
   b. If the installed adapter is a dual-ported model, the boot BIOS does not need to be downloaded to each adapter port; downloading it to either port results in both ports being updated with the latest BIOS. Otherwise, for multiple single-port adapters…
…on page 106.

ALUA mode in VNX series and CLARiiON systems is typically used with the Fixed or Round Robin policy. This can be enabled on VNX series and CLARiiON with ESX/ESXi 4.x and ESXi 5.x by registering the initiator under Failover Mode 4, which makes the VNX series or CLARiiON system act as an Active/Active system. In ALUA failover mode, all paths are active and I/O runs only on the optimized paths (the owning SP). For VNX series and CLARiiON systems that are not capable of ALUA failover mode support, the MRU or Round Robin policies are recommended to avoid path thrashing.

Multipathing in VMware ESXi/ESX 5.x and ESXi 4.x with VPLEX

For optimal performance, refer to the following recommendations (a command-line sketch follows this list):
- The recommended multipathing setting is Round Robin for VPLEX Local, VPLEX Metro (non-cross-connect), and VPLEX Geo. The I/O limit value should be left at the default setting of 1000.
- For VPLEX Metro cross-connect with VMware, PowerPath/VE is highly recommended. PowerPath/VE 5.8 includes the auto-standby feature, which allows each ESXi host to automatically prefer to send I/O to its local VPLEX cluster over the remote cluster. The host paths connected to the local VPLEX cluster will be the active paths, whereas those connected to the remote VPLEX cluster will be the standby paths.

For more information on PowerPath/VE and the auto-standby feature, see the following support page:
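The Round Robin recommendation can be applied per device, or once per host by changing the default PSP for the SATP that claims VPLEX devices (commonly VMW_SATP_INV; confirm for your environment). A minimal sketch for ESXi 5.x, where the naa identifier is a placeholder:

    # Set an existing VPLEX device to Round Robin (the default IOPS limit of 1000 is retained)
    esxcli storage nmp device set -d naa.xxxxxxxxxxxxxxxx -P VMW_PSP_RR

    # Or make Round Robin the default PSP for the SATP claiming VPLEX devices
    esxcli storage nmp satp set -s VMW_SATP_INV -P VMW_PSP_RR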
There are two types of system volumes; each is briefly discussed in this section:
- Metadata volumes on page 130
- Logging volumes on page 131

Metadata volumes

VPLEX maintains its configuration state, referred to as metadata, on storage volumes provided by storage arrays. Each VPLEX cluster maintains its own metadata, which describes the local configuration information for this cluster as well as any distributed configuration information shared between clusters.

For more information about metadata volumes for VPLEX with GeoSynchrony v4.x, refer to the EMC VPLEX CLI Guide, available on http://support.EMC.com. For more information about metadata volumes for VPLEX with GeoSynchrony v5.x, refer to the EMC VPLEX with GeoSynchrony 5.0 Product Guide, located on http://support.EMC.com.

Logging volumes

Logging volumes are created during initial system setup and are required in each cluster to keep track of any blocks written during a loss of connectivity between clusters. After an inter-cluster link is restored, the logging volume is used to synchronize distributed devices by sending only changed blocks over the inter-cluster link.

For more information about logging volumes for VPLEX with GeoSynchrony v4.x, refer to the EMC VPLEX CLI Guide, available on http://support.EMC.com.
…balance between the two Storage Processors. With PowerPath/VE, or with Native Multipathing software using the Fixed failover policy, LUNs are auto-restored to their original preferred path and SP. In this scenario, no manual trespass is required after the NDU is complete.

Before starting the NDU procedure, ensure that there are no faults reported and no failed SPs on the array to be upgraded.

To perform an NDU, complete the following steps:
1. Verify the version of system code level currently running on the VNX series (block) or CLARiiON FLARE system.
2. Select the desired software bundle to which the VNX series or CLARiiON system will be upgraded.
3. Download the software bundle and extract the package on the desktop.
4. Log in to the VNX series or CLARiiON system via a web browser, such as Microsoft Internet Explorer.
5. For VNX series, click Launch USM from the column of options on the right of the management interface to begin the NDU process. For CLARiiON systems, right-click the array and select Software Operations > Software Installation Wizard to begin the NDU process. For an example, see Figure 23 on page 112.
…memory directly, without going through a third-party operating system. VMkernel uses Scan-Before-Execution (SBE) to handle special or privileged CPU instructions. To access other hardware, including network and storage devices, VMkernel modules are used; some of these modules are derived from Linux kernel modules. VMkernel provides services, including CPU scheduling, memory management, and virtual switch data processing, that allow virtual machines to access the underlying physical hardware of the server on which they are built. VMkernel manages all the operating systems on the machine, including both the service console and the guest operating systems running on each virtual machine.

Service console

VMkernel is usually interfaced with three major components: hardware, the guest operating system, and the service console. The service console is available only with ESX, not with ESXi. Until vSphere 4, both ESX and ESXi were available; vSphere 5 is available only with ESXi.

The service console gives access to VMkernel and thus can provide management services, including a firewall, Simple Network Management Protocol (SNMP) agents, and a web server, to the ESX Server and the virtual machines built on the server. For remote access to the service console by a root user through SSH client software…
…rebooting SP A. When SP B is rebooted, all of the LUNs owned by SP B will be trespassed to SP A. The VMware ESX Server native failover functionality will handle this trespass so that I/O may continue on the same LUNs, now available via SP A. When SP A is rebooted, the reverse occurs.

The Failover Paths window on the VMware ESX Server may be used to graphically monitor the failovers from one path to another.

The installation wizard shows the current version of the system code level (block) or FLARE, as well as the software enablers currently installed on the array. The software enablers should be upgraded when the FLARE is upgraded, so ensure that the downloaded package includes the software enablers corresponding to those currently installed.

Use the Browse option to search for the downloaded software bundle to be used for the upgrade. Select the index file (.lst) and click Next to begin the NDU. The .lst file will automatically go to the packages directory and pull the necessary files from that directory. Under the package name, select the operating environment (Figure 26 on page 116).

Figure 26 Software Package Selection window (Storage System Software Installation, showing currently installed software)
https://support.emc.com/products/1800_PowerPath-VE-for-VMware

There are two issues with using NMP in VPLEX Metro cross-connect environments:
- The Round Robin path policy for a host connected to both VPLEX clusters will incur extra read and write latency for I/O operations to the remote cluster. Roughly half of the I/O will be local and half will be remote. WAN bandwidth for front-end host traffic will be consumed, and additional VPLEX inter-cluster cache-coherency traffic will be sent between clusters.
- The Fixed path policy requires a great deal of manual administrative work to make all ESXi hosts and all volumes on both clusters prefer their local cluster. For a handful of hosts and only a few volumes this might be acceptable, but for hundreds of hosts and thousands of volumes it is too onerous. In addition, should the single preferred path fail for whatever reason, the new path chosen by a host might be at the remote cluster. It is possible that multiple hosts could all choose the same new remote director and thus overload that one director. A manual rebalancing of paths would be required at the new cluster, and when the old cluster is back online, the exercise must be repeated again.

Changing the default policy setting

To change the default policy setting from Fixed to Round Robin for EMC VPLEX devices, complete the following steps:
…as follows:

    symcfg -sid <sid> -action <script.sh> monitor -i <poll_interval> -pool <poolName> -thin -percent <threshold_percentage>

For example:

    #!/bin/sh
    echo "Warning: Pool space has reached the threshold mark"

Thin pool error code when full

When the pool becomes full under Enginuity 5874 or later, writes to TDEVs fail and an I/O error is returned. This generates an error message (sense data 0x4 0x44 0x0) in the vmkernel logs. On the virtual machines, you will receive an error when the thin pool is filled (/var/log/vmkernel).

With ESX 4.0 and VMAX 5874:

    May 21 17:21:19 SGELVMW161 vmkernel: 18:01:20:33.608 cpu11:4214 ScsiDeviceIO: 747: Command 0x2a to device "naa.60000970000194900098533031314431" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x4 0x44 0x0.
    May 21 17:21:19 SGELVMW161 vmkernel: 18:01:20:33.619 cpu11:4214 ScsiDeviceIO: 747: Command 0x2a to device "naa.60000970000194900098533031314431" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x4 0x44 0x0.

With ESX 4.1 and VMAX 5874:

    Jun 22 19:42:12 SGELVMW199-ESX41 vmkernel: 0:00:44:18.199 cpu0:4112 NMP: nmp_CompleteCommandForPath: Command 0x2a (0x41027 9c2440) to NMP device "naa.60000970000194900098533031314345" failed on physical path "vmhba2:C0:T0:L33" H:0x0 D:0x2 P:0x0 Valid sense data: 0x4 0x44 0x0.
    Jun 22 19:42:12 SGELVMW199-ESX41 vmkernel: 0:00:44:18.199 cpu0:4112 ScsiDeviceIO: 1672: Command…
…depth may cause hosts to overstress other arrays connected to the ESX host, resulting in performance degradation while communicating with them. To avoid this in mixed environments with multiple array types connected to the ESX host, compare the XtremIO recommendations with those of other platforms before applying them.

The queue-depth setting controls the number of outstanding I/O requests per single path. On vSphere, the HBA queue depth can be adjusted through the ESX CLI. With Cisco UCS fNIC, the I/O Throttle setting determines the total number of outstanding I/O requests per virtual HBA.

For optimal operation with XtremIO storage, it is recommended to adjust the queue depth of the FC HBA. With Cisco UCS fNIC, it is also recommended to adjust the I/O Throttle setting to 1024.

This section describes the required steps for adjusting the I/O throttle and queue-depth settings for QLogic, Emulex, and Cisco UCS fNIC. Follow one of these procedures, according to the vSphere version used.

Note: For further information on adjusting HBA queue depth with ESX, refer to VMware KB article 1267 on the VMware website, available at http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1267.

Note: The setting adjustments in this section for the Cisco UCS fNIC HBA apply to VMware vSphere only. Since these settings are global to the UCS chassis, they may impact other blades in the UCS chassis runnin…
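As a sketch of the ESX CLI approach on ESXi 5.x, queue depth is adjusted through driver module parameters and takes effect after a host reboot. Module and parameter names vary by driver version, so treat the names below as representative examples and confirm them with esxcli system module parameters list:

    # QLogic FC driver: set the per-LUN queue depth
    esxcli system module parameters set -m qla2xxx -p "ql2xmaxqdepth=256"

    # Emulex FC driver (lpfc820 on many ESXi 5.0/5.1 builds): equivalent parameter
    esxcli system module parameters set -m lpfc820 -p "lpfc_lun_queue_depth=128"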
…Only SW iSCSI environments are not supported when booting from the SAN.

Booting from VNX series and CLARiiON storage systems

VMware ESX Server v2.5.x hosts have been qualified for booting from CLARiiON devices interfaced through Fibre Channel, as specified in the EMC Support Matrix. EMC does not recommend booting VMware ESX Server from VNX series and CLARiiON systems unless the host is using VMware's native failover functionality.

Note: If VMware's native failover functionality is not used and it is necessary to use a VNX series or CLARiiON device as a boot disk, you should shut down the virtual machines and the ESX Server during any maintenance procedures that might cause the boot disk to become unavailable to the host.

The VNX series or CLARiiON device that is to contain the Master Boot Record (MBR) for the host must have a lower logical unit number (LUN) than any other device visible to the host. This device must be mapped as /dev/sda by the VMware ESX Server operating system for the boot to succeed from the device. Note that this includes the case of the VNX series and CLARiiON ghost LUN: if the LUN 0 presented to a SCSI path is not owned by the SP attached to that path, a disconnected LUN 0 (ghost LUN) will be presented. The ghost LUN is not write-enabled, so the MBR cannot be written to it. Always ensure that the boot LUN is owned by the SP attached to the boot path.
…rage architecture.

Figure 38 VMkernel pluggable storage architecture

The default NMP is an extensible multipathing module in ESX/ESXi 4.x and ESXi 5.x. NMP provides a path-selection algorithm based on the array type. The Storage Array Type Plug-in (SATP) and Path Selection Plug-in (PSP) are sub-plug-ins within the NMP module:
- SATP is responsible for handling path failover for a given storage array. It can perform the array-specific operations required to detect path state and activate inactive paths, it monitors the health of each physical path, and it reports state changes of each physical path to NMP.
- PSP is responsible for determining which physical path is used to issue I/O requests to a storage device.

NMP in ESX/ESXi 4.x and ESXi 5.x supports the MRU, Fixed, and Round Robin policies. However, in VMware ESX Server v3.5, the Round Robin policy is experimental. For more information on these policies, refer to Path policies on page 168.

PowerPath/VE for VMware ESX/ESXi 4.x and ESXi 5.x

EMC PowerPath/VE combines multiple-path I/O capabilities, automatic load balancing, and path failover functions into one integrated package. It supports dynamic path failover and path recovery (autorestore), load balancing, and failover policies, including VNX series and CLARiiON optimization, Least I/O, Least blocks…
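The active SATP and PSP assignments can be inspected from the ESXi 5.x command line. A minimal sketch:

    # List the available SATPs and their default PSPs
    esxcli storage nmp satp list
    # List the SATP/PSP in use for each claimed device
    esxcli storage nmp device list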
…straightforward.

When the thin LUN is full, the kernel log on the virtual machine reports I/O errors such as:

    Jun 22 04:07:54 VP-rhel5-lsi kernel: end_request: I/O error, dev sdb, sector …

To be able to continue using the Storage Pool, you can expand it with available physical disks in the array.

1. Right-click the Storage Pool and choose Expand.
2. There are Automatic and Manual modes; choose the mode you want. The Manual mode allows you to select specific disks.

[The Expand Storage Pool dialog shows the pool properties (Pool ID 1, RAID1/0, User Capacity 916.056 GB, Consumed Capacity 916.056 GB, Available Capacity 0.000 GB, Oversubscribed By 130.008 GB), the number of disks to expand by, and the candidate disks; in this example, Bus 1 Enclosure 0 disks of 917.182 GB (721010KLCLA, SATA II, Unbound).]

Before commitment, a system alert is displayed:

    Warning: Expand Storage Pool. Best practices suggest that for optimum performance the number of SATA disks chosen should be a multiple of 8. Do you wish to continue?

3. Choose Yes…
…search.do?cmd=displayKC&docType=kc&docTypeID=DT_KB_1_1&externalId=1017760

XtremIO supports multipathing using EMC PowerPath/VE. This section describes the procedure required for configuring PowerPath multipathing for XtremIO volumes. PowerPath/VE 5.9 SP1 is the first PowerPath version to support a native LAM with the XtremIO storage array; this version is qualified to run only on vSphere 5.5.

Native multipathing in VMware ESXi 5.5

VMware ESX Server provides support for native multipathing software as a feature of the OS, alongside PowerPath/VE 5.9 SP1. Native multipathing software provides failover and dynamic load balancing, and native multipathing is the recommended multipathing choice.

For more information about PowerPath/VE 5.9, refer to the following document, available on http://support.EMC.com: EMC PowerPath/VE Installation and Administration Guide, Version 5.9 and Minor Releases, for VMware vSphere.

Generic multipathing in VMware ESXi 5.0 and ESXi 5.1

PowerPath/VE version 5.8 is qualified with the following vSphere versions on XtremIO storage, supporting a generic LAM:
- vSphere 5.0
- vSphere 5.1

PowerPath/VE 5.9 is qualified with the following vSphere versions on XtremIO storage, supporting a generic LAM:
- vSphere 5.0
- vSphere 5.1
- vSphere 5.5
…vSphere 5 also supports an automated approach to managing data center resources, including server deployment and storage management. Customers define policies and establish the operating parameters, and VMware vSphere 5 does the rest. The three key features that enable the automated processes are:
- Auto Deploy
- Profile-Driven Storage
- Storage DRS

VMkernel

VMware ESX Server is the main building block of the VMware infrastructure. It provides a platform for multiple virtual machines to share the same hardware resources, including processor, memory, storage, and networking resources, with the ability to perform all the functions of a physical machine. This maximizes hardware utilization efficiency while minimizing installation capital and operating cost.

VMware ESX Server consists of two main components, discussed briefly in this section:
- VMkernel on page 19
- Service console on page 20
- Useful VMware ESX/ESXi Server utilities and functions on page 21

The interaction between them forms a dynamic and reliable virtualized environment, providing virtual machines with high availability, resource management, operational automation, and security features that improve service quality levels, even for the most resource-intensive, mission-critical applications.

The ESX Server virtualization layer (VMkernel) runs on bare metal, handling CPU and mem…
…XtremIO storage. Topics include the following best practices:
- Host hardware and operating cluster settings
- FC and iSCSI connectivity and configuration
- Multipathing requirements and settings
- File system and application requirements

FC zones and IP subnetting are ways to control the access and traffic between initiators and targets. This information is contained in the following sections:
- Best practices for zoning and subnetting on page 141
- Configuring a VMware vSphere host on page 144
- Configuring Fibre Channel HBA on page 151
- Configuring multipath software on page 156
- File system and application requirements on page 162

Best practices for zoning and subnetting

This section explains the best practices for allowing a host and the XtremIO cluster to communicate using 4, 8, or 16 paths per device. Note that the optimal number of paths depends on the operating system and server information. Refer to your Fibre Channel or iSCSI switch user manual for instructions on the following implementations.

Single X-Brick cluster: A host may have up to 4 paths per device. [A figure displays the logical connection scheme for 4 paths.]

Dual X-Brick cluster: A host may have up to 8 paths per device. [Figures display the logical connection schemes for 4 and 8 paths across Switch A and Switch B.]
…driver. Using the Brocade adapter with the VMware ESX/ESXi Server requires CNA driver software. Refer to the latest EMC Support Matrix for supported driver versions. If the downloaded driver file is an .iso image, burn the image to a CD. You can use the driver CD in several ways:
- Install drivers for devices as part of a new ESX installation (for ESX only) on page 87
- Update existing drivers or install new drivers for an existing ESX installation using esxupdate (for ESX only) on page 88
- Update existing drivers or install new drivers for existing ESX and ESXi installations using vihostupdate (for both ESX and ESXi) on page 88

ESX 4.x Server

To install the Brocade CNA driver on the VMware ESX Server:
1. Download the appropriate CNA driver package from the 10 Gbps ETHERNET TO PCIe CNAs section on http://www.brocade.com/hba.
2. Install the driver by following the instructions under the Driver Packages section on the driver download page.
3. Configure the firewall as detailed in the Brocade Adapters Installation and Reference Manual, available at http://www.brocade.com.
4. Copy the driver package to a temporary directory.
5. Extract the file using the following command:
   tar zxvf brocade_driver_esx35_<version>.tar.gz
6. Run the installer with the following command:
   brocade_install.sh
7. Reboot the system.
8. Confirm that the driver package was installed using the following command:
   vmkload_mod -l
   An entry…
…driver CD with the ESX 3.5 installation CD, insert the ESX 3.5 installation CD, and continue with the ESX installation.

FCoE initiator configurations

For ESX v4.x:
1. Place the ESX installation CD in the CD-ROM drive of the host machine.
2. Start the host machine.
3. Accept the terms of the license agreement.
4. Select a keyboard type.
5. When prompted for Custom Drivers, select Yes to install custom drivers. The installer prompts you to insert the media containing the custom drivers. After you add the custom drivers to the list, the installer prompts you to reinsert the ESX installation DVD and continue with the installation.

Update existing drivers or install new drivers for an existing ESX installation using esxupdate (for ESX only)

1. Copy bfa.iso to a location such as /root/bfa/bfa.iso.
2. Create a temporary directory under /tmp, for example /tmp/bfa.
3. Mount the bfa.iso file on the ESX server, for example:
   mount -t iso9660 -o loop /root/bfa/bfa.iso /tmp/bfa
4. Run the following command:
   /usr/sbin/esxupdate --bundle /tmp/bfa/offline-bundle/bfa.zip --maintenancemode update
5. Verify that the bfa modules show up under the /usr/lib/VMware/vmkmod directory.
6. Reboot the system.
7. Verify that the driver is loaded, using the vmkload_mod -l command.

Update existing drivers or install new drivers for existing ESX and ESXi installations using vihostupdate (for both ESX and ESXi)

Prerequisite: Before you can update or…
…rmation:
- EMC VMAX3: 400K, 200K, 100K
- EMC VMAX 40K
- EMC VMAX 20K
- EMC VMAX 10K Systems with S/N xxx987xxxx
- EMC VMAXe
- EMC Symmetrix DMX-4/DMX-3

Required director bit settings for HP-UX 11iv3 (HP-UX 11.31) initiators

Refer to the following VMAX and Symmetrix DMX Director Bits Simple Support Matrices, located at http://elabnavigator.emc.com, for the most up-to-date director port flag configuration requirements:
- EMC VMAX 400K, VMAX 200K, and VMAX 100K Director Bit Settings and Features (T10 and SRDF/Metro)
- EMC VMAX 40K, VMAX 20K, and VMAX 10K (SN xxx987xxxx) Director Bit Settings
- EMC Symmetrix VMAX, VMAX 10K (SN xxx959xxxx), and VMAXe Director Bit Settings
- EMC Symmetrix DMX-3 and DMX-4 Director Bit Settings

EMC Symmetrix Management Console (SMC)

SMC is a web-based interface that allows you to discover, monitor, configure, and control EMC Symmetrix arrays. Many of the Solutions Enabler command-line operations can be done in SMC. Figure 17 shows a generic SMC interface.

Figure 17 SMC interface example (EMC Symmetrix Management Console v7.0, showing the Symmetrix tree with masking, host bus adapters, host alerts, and directors)
…returns the byte 0x00 in the first byte of the returned data for Inquiry, the VMAX or Symmetrix system is identified as a direct-access device. Upon identifying the VMAX or Symmetrix system as an array controller device, the host should issue a SCSI-3 Report LUNs command (0xA0) in order to discover the LUNs.

The three addressing modes contrasted in Table 5 differ in the addressing schema (Target ID, LUN, and Virtual Bus) and the number of addressable devices.

Table 5 VMAX or Symmetrix SCSI-3 addressing modes

Addressing mode   | Code (a) | A Bit | V Bit | Response to Inquiry    | LUN discovery method            | Possible addresses | Maximum logical devices (b)
Peripheral Device | 00       | 0     | X     | 0x00, Direct Access    | LUNs discovered directly        | 16,384             | 256
Logical Unit      | 10       | 1     | 0     | 0x0C, Array Controller | Host issues Report LUNs command | 2,048              | 128
Volume Set        | 01       | 1     | 1     | 0x0C, Array Controller | Host issues Report LUNs command | 16,384             | 512

a. Bits 7:6 of byte 0 of the address.
b. The actual number of supported devices may be limited by the type of host or host bus adapter used.

Note: The addressing modes are provided to allow flexibility in interfacing with various hosts. In all three cases, the received address is converted to the internal VMAX or Symmetrix addressing structure. Volume Set addressing is the default for VMAX or Symmetrix systems. Select the addressing mode that is appropriate to your host.

SPC-2 director bit considerations
…s how a host handles having multiple communication paths to LUNs on the array, by permitting I/O to flow to either or both storage processors. VNX series and CLARiiON Asymmetric Active/Active is based on the Asymmetric Logical Unit Access (ALUA) standard. The primary ALUA feature allows any I/O request to be sent to either Storage Processor (SP) and to be executed by the SP that owns the LUN. However, if a VMware ESX Server 4.x host loses all paths (the optimal paths) to the owning SP, VMware native multipathing software will initiate a trespass using ALUA commands in order to maximize I/O performance. Without such a trespass, I/O may be sent to the non-owning (non-optimal) SP and then redirected to the owning SP for processing, without errors.

The use of ALUA failover mode has additional benefits when combined with VMware ESX Server 4.x native multipathing software, providing automatic restore of LUNs to their preferred paths after a fault has been repaired. However, this applies only to the Fixed policy; it does not apply to the Round Robin policy. The white paper EMC CLARiiON Asymmetric Active/Active Feature (ALUA), available on http://support.EMC.com, provides an in-depth discussion of ALUA features and benefits.

Path policy considerations

The use of ALUA failover mode allows users to choose from two failover policies for VMware Native Multipathing (N…
…se from the target.

Enable the SW iSCSI configuration:
1. Click the Configuration tab, and then click Storage Adapters.
2. Select the SW iSCSI adapter, and then click Properties.
3. Under the General tab, click Configure.
4. Under Status, check the box next to Enabled.
5. If required, amend the iSCSI name, and click OK.

Add send targets

Note: This step is optional for ESX 4.x and ESXi 5.0, which may use Add static targets instead.

Add target addresses for the initiator.

Note: Both send and static targets discovery are supported.

To add send targets:
1. Click the Configuration tab, and then click Storage Adapters.
2. Select the SW iSCSI adapter, and then click Properties.
3. Click the Dynamic Discovery tab, and then click Add.
4. Enter the send targets server IP, and click OK to add target information from a selected storage system.

Add static targets (not applicable for ESX 3.5)

Note: ESX 4.x and ESXi 5.0 can use either Add Send Targets or Add Static Targets.

1. Click the Configuration tab, and then click Storage Adapters.
2. Select the SW iSCSI adapter, and then click Properties.
3. Click the Static Discovery tab to add static targets, and then click Add.
4. Enter the send targets server IP, port, and IQN address, and click OK to add the static targets.
5. Click Close to close the iSCSI Initiator Properties page.

Once the iSCSI initiator ports on the ESX Server are configured, iSCSI storage must…
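These vSphere Client steps have command-line equivalents on ESXi 5.x. A minimal sketch, assuming the software iSCSI adapter is vmhba33 and 10.0.0.50 is a placeholder target portal:

    # Enable the software iSCSI initiator
    esxcli iscsi software set --enabled=true
    # Add a send-targets (dynamic) discovery address
    esxcli iscsi adapter discovery sendtarget add -A vmhba33 -a 10.0.0.50:3260
    # Rescan the adapter to discover LUNs
    esxcli storage core adapter rescan -A vmhba33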
…series and CLARiiON storage:
- EMC Symmetrix Management Console for VMAX and Symmetrix storage
- Command-line CLI for VNX series and CLARiiON storage
- Solutions Enabler for VMAX and Symmetrix storage

After changing the configuration, rescan the host initiator adapter ports to pick up the changes made in Step 1. To do this, choose one of the following methods:
- Reboot the VMware ESX/ESXi Server.
- Rescan the HBA port in the vSphere Client.
- Execute the esxcfg-rescan command in the VMware ESX/ESXi Server, or in any other remote-access software client, such as PuTTY.

Installation of ESX/ESXi Server

This chapter covers installation and setting information for the VMware ESX Server 3.x and 4.x and VMware ESXi Server 5, including:
- Installation ... 26
- Initiator setting and installation ... 28
- Recommendations ... 32

Installation

This section contains the following information:
- Installation media on page 26
- Installation methods on page 26

Installation media

CD-ROM: ESX Server can be installed directly from a CD-ROM into a physical server. ESX 4.x requires a DVD disc because of the larger ISO image. The ISO image can be loaded through a virtual CD-ROM
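For reference, a minimal sketch of the rescan commands, where the adapter name vmhba1 is a placeholder:

    # ESX 3.x/4.x service console
    esxcfg-rescan vmhba1
    # ESXi 5.x shell or vCLI
    esxcli storage core adapter rescan -A vmhba1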
…session with an EMC Support Engineer.

eLicensing support

To activate your entitlements and obtain your Symmetrix license files, visit the Service Center on https://support.EMC.com, as directed in your License Authorization Code (LAC) letter e-mailed to you. For help with missing or incorrect entitlements after activation (that is, expected functionality remains unavailable because it is not licensed), contact your EMC Account Representative or Authorized Reseller. For help with any errors applying license files through Solutions Enabler, contact the EMC Customer Support Center.

If you are missing a LAC letter, or require further instructions on activating your licenses through the Online Support site, contact EMC's worldwide Licensing team at licensing@emc.com or call:
- North America, Latin America, APJK, Australia, New Zealand: SVC4EMC (800-782-4362) and follow the voice prompts.
- EMEA: +353 (0)21 4879862 and follow the voice prompts.

We'd like to hear from you!

Your suggestions will help us continue to improve the accuracy, organization, and overall quality of the user publications. Send your opinions of this document to techpubcomments@emc.com.

Introduction to VMware Infrastructure (vSphere)

This chapter provides information about…
…should be two separate paths to the same target.

5. To remove VMkernel ports from iSCSI initiators, ensure there are no active sessions between hosts and targets, and run the following command from the service console.
   For vSphere 4:
   esxcli swiscsi nic remove -n vmk0 -d vmhba32
   For vSphere 5:
   esxcli iscsi networkportal remove -n vmk0 -A vmhba32

VMware ESX HW iSCSI

This section contains information on VMware ESX HW iSCSI.

ESX 3.x HW iSCSI support:
- Supports both send and static targets discovery.

ESX 4.x and ESXi 5.0 HW iSCSI support:
- Supports both send and static targets discovery.
- Supports ALUA mode (failover mode 4) for VNX series and CLARiiON systems.
- Supports EMC PowerPath/VE.

Note: For more information about EMC PowerPath/VE, refer to http://www.emc.com.

Configuring the HW iSCSI initiator

To configure the ESX/ESXi hardware-initiator iSCSI HBA, complete the following steps:
1. Log in to the VMware VI/vSphere client as administrator.
2. From the inventory panel, select the server to be configured.
3. Click the Configuration tab, and click Storage Adapters.
4. Select the HW iSCSI initiator to configure, and click Properties.
5. In the HW iSCSI Initiator Properties page, click the General tab, and then click Configure.
6. Under iSCSI Properties, you can enter an iSCSI name and iSCSI alias for the hardware initiator; then click OK.
7. Under Hardware Initiator Properties, add IP settings for the…
Assume a LUN originally has four paths, with the following notation: vmhba3:3:5, vmhba3:4:5, vmhba4:3:5, and vmhba4:4:5. The virtual machine configuration file contains a line referencing a path that is now dead:

    scsi0:0.present = "TRUE"
    scsi0:0.name = "vmhba3:3:5:0"

A path fault occurs, and raw disk device vmhba3:3:5 is no longer available. The user needs to locate the new canonical name for the LUN. If all paths from vmhba3 are no longer accessible, and assuming that target numbers have not changed, the canonical name will be vmhba4:3:5. The user will have to edit the virtual machine configuration file and change the scsi0:0.name value:

    scsi0:0.present = "TRUE"
    scsi0:0.name = "vmhba4:3:5:0"

If Raw Device Mappings (RDMs) are used instead of raw disks, and all the LUNs referenced by the RDMs are still accessible by the ESX Server using the original LUN numbers, no user action is required; ESX will access the LUNs via the currently available paths.

Raw Device Mappings (RDM)

VMware ESX Server v2.5.0 onwards introduced a new technology called Raw Device Mapping (RDM). Essentially, raw device mapping is a SCSI pass-through technology that allows virtual machines to pass SCSI commands directly to the physical hardware. This is an improvement over using raw devices. Raw Device Mappings use a mapping file, located on a VMFS volume, that holds the metadata…
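An RDM mapping file can be created with vmkfstools. A minimal sketch using ESX 4.x/5.x-style device paths; the device identifier and datastore path are placeholders:

    # Create a virtual-compatibility RDM pointing at a raw LUN
    vmkfstools -r /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx /vmfs/volumes/datastore1/vm1/vm1-rdm.vmdk
    # Use -z instead of -r for a physical-compatibility (pass-through) RDM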
…tem operation
- VMware ESX Server operating environment

VMAX, Symmetrix, VNX, and CLARiiON references

Unless otherwise noted:
- Any general references to Symmetrix or Symmetrix array include the VMAX3 Family, VMAX, and DMX.
- Any general references to VNX include VNX5100/5200/5300/5400/5500/5600/5700/5800/7500/7600/8000.
- Any general references to CLARiiON or CLARiiON array include the FC4700, FC4700-2, CX, CX3, and CX4.

Table 1 lists the minimum EMC Enginuity and EMC HYPERMAX requirements for EMC VMAX and Symmetrix models.

Table 1 EMC models and minimum operating system requirements

EMC model                            | Minimum operating system
VMAX 400K (a)                        | HYPERMAX 5977.250.189
VMAX 200K (a)                        | HYPERMAX 5977.250.189
VMAX 100K (a)                        | HYPERMAX 5977.250.189
VMAX 40K                             | Enginuity 5876.82.57
VMAX 20K                             | Enginuity 5876.82.57
VMAX 10K Systems with SN xxx987xxxx  | Enginuity 5876.159.102
VMAX                                 | Enginuity 5876.82.57
VMAX 10K Systems with SN xxx959xxxx  | Enginuity 5876.82.57
VMAXe                                | Enginuity 5876.82.57
Symmetrix DMX-4                      | Enginuity 5773.79.58
Symmetrix DMX-3                      | Enginuity 5773.79.58

a. VMAX 400K, VMAX 200K, and VMAX 100K are also referred to as the VMAX3 Family.

For the most up-to-date information on supported server and HBA combinations, always consult the EMC Support Matrix (ESM), available through E-Lab Interoperability Navigator.
the VMware infrastructure (vSphere), including:
- VMware vSphere ... 18
- VMware ESX/ESXi Server ... 19
- Control interface ... 23
- Connecting EMC storage with ESX/ESXi Server ... 24

Note: In this document, Virtual Infrastructure (VI) and vSphere are used interchangeably.

VMware vSphere

VMware has renamed its infrastructure product to vSphere. The following versions are briefly discussed in this section:
- vSphere 4 on page 18
- vSphere 5 on page 18

vSphere 4

VMware vSphere 4 is the industry's first cloud operating system, transforming IT infrastructures into a private cloud: a collection of internal clouds federated on demand to external clouds, delivering IT infrastructure as a service. vSphere 4 supports a 64-bit VMkernel and service console. The service console version is derived from a release of a leading enterprise Linux vendor.

vSphere 5

VMware vSphere 5 supports virtual machines (VMs) that are up to four times more powerful than previous versions, with up to 1 terabyte of memory and 32 virtual CPUs. These VMs are able to process in excess of 1 million I/O operations per second. When combined with VMware vSphere 5's enhanced, simplified High Availability, performance and availability are greatly improved. VMware vSphere…
…tions; the bar means "or". Braces indicate content that you must specify (that is, x or y or z). Ellipses indicate nonessential information omitted from the example.

Where to get help

EMC support, product, and licensing information can be obtained on the EMC Online Support site, as described next.

Note: To open a service request through the EMC Online Support site, you must have a valid support agreement. Contact your EMC sales representative for details about obtaining a valid support agreement or to answer any questions about your account.

Product information: For documentation, release notes, software updates, or information about EMC products, licensing, and service, go to the EMC Online Support site (registration required) at https://support.EMC.com.

Technical support: EMC offers a variety of support options.
- Support by Product: EMC offers consolidated, product-specific information on the Web at https://support.EMC.com/products. The Support by Product web pages offer quick links to Documentation, White Papers, Advisories (such as frequently used Knowledgebase articles), and Downloads, as well as more dynamic content, such as presentations, discussions, relevant Customer Support Forum entries, and a link to EMC Live Chat.
- EMC Live Chat: Open a Chat or instant message…
…tives are set. Generally, organizations should target no higher than 60 percent to 80 percent capacity utilization per pool. A buffer should be provided for unexpected growth or a runaway application that consumes more physical capacity than was originally planned for. There should be sufficient free space in the storage pool, equal to the capacity of the largest unallocated thin device.

Organizations should also balance growth against storage acquisition and installation timeframes. It is recommended that the storage pool be expanded before the last 20 percent of the storage pool is utilized, to allow for adequate striping across the existing data devices and the newly added data devices in the storage pool.

When using high availability in a cluster configuration, it is expected that no single point of failure exists within the cluster configuration, and that a single point of failure will not result in data unavailability, data loss, or any significant application becoming unavailable within the cluster. Virtual Provisioning devices (thin devices) are supported with cluster configurations; however, oversubscription of virtual devices may constitute a single point of failure if an out-of-space condition is encountered. To avoid potential single points of failure, appropriate steps should be taken to avoid implementing under-provisioned virtual devices within high-availability cluster configurations.
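Pool utilization can be tracked from Solutions Enabler so that expansion is planned before the thresholds above are crossed. A minimal sketch, where <sid> is a placeholder and the exact option set may vary by Solutions Enabler version:

    # List thin pools with allocated, free, and subscribed capacity in GB
    symcfg -sid <sid> list -pool -thin -detail -gb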
…XtremIO volumes, complete the following steps:
1. Open an SSH session to the host as root.
2. Run the following commands:
   esxcli storage core claimrule add --rule 340 --plugin PowerPath --type vendor --vendor XtremIO --model XtremApp
   esxcli storage core claimrule load
   esxcli storage core claimrule run
3. Reboot the ESX host.

Note: Refer to the EMC PowerPath/VE for VMware Installation and Administration Guide for further information on PowerPath installation via the command line.

Disabling generic LAM support

To disable the generic LAM support on ESX with XtremIO volumes:
1. Open an SSH session to the host as root.
2. Run the following commands:
   esxcli storage core claimrule remove --rule 340
   esxcli storage core claimrule load
   esxcli storage core claimrule run
3. Reboot the ESX host.

Known limitations of the PowerPath generic LAM

Unlike PowerPath's array-customized LAMs, PowerPath's generic LAM may not provide optimal failover and load-balancing behavior for the array it manages.

Note: The PowerPath generic LAM is a temporary solution. An XtremIO-customized LAM will be introduced in upcoming PowerPath/VE releases.

The following limitations exist for volumes managed under the generic LAM:
- powermt does not display the XtremIO array ID.
- powermt does not display the XtremIO storage interface IDs.
- powermt's port disable feature is not supported.
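To confirm that the rule was applied, the claim rules and resulting device ownership can be listed. A minimal sketch:

    # Show loaded and runtime claim rules (rule 340 should appear for PowerPath)
    esxcli storage core claimrule list
    # Verify which multipathing plugin now owns each device
    esxcli storage core device list | grep -i "Multipath Plugin"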
208. tremely CPU intensive and lowers the overall server performance SW iSCSI does not support boot from SAN For more information on SW iSCSI refer to VMware ESX SW iSCSI on page 51 The TCP IP processing performance bottleneck has been the driving force behind the development of TCP IP offload engines TOE on adapter cards A TOE removes the TCP IP processing from the host CPU and completes TCP IP processing and packet creation on the HBA Thus a TCP IP offload storage NIC operates more like a storage HBA rather than a standard NIC This is often referred to as hardware HW iSCSI HW iSCSI supports boot from SAN For more information on SW iSCSI refer to VMware ESX SW iSCSI on page 51 EN EMC Host Connectivity Guide for VMware ESX Server VMware ESX SW iSCSI ESX 3 x SW iSCSI support Figure 12 ESX 4 x and ESXi 5 0 SW iSCSI support The setup of ESX server differs between SW and HW iSCSI ESX Server does not support both hardware and software initiators running simultaneously Note Refer to the EMC Support Matrix for supported HW iSCSI initiators Always ensure that the hardware iSCSI initiators are successfully installed and recognized by the system This section describes the steps required to configure SW iSCSI initiator ports Note SW iSCSI does not support boot from SAN iSCSI traffic should be isolated from other network traffic Supports only send targets discovery Requires
209. ty Guide for VMware ESX Server ESX Server 4 x ESX ESXi 4 x and ESXi 5 x Multipathing in VMware ESX ESXi Server For related information refer to the following VMware Knowledgebase article Changing the default pathing policy for new existing LUNs 1017760 located at http kb vmware com kb 1017760 VMware ESX Server 4 x uses the esxcfg mpath b command to view the configuration and status of the paths of device This command is executed from the service console command line and returns the path information for a LUN device In ESX ESXi 4 x and ESXi 5 x VMkernel storage stack has been restructured to address some key storage issues including multipathing handling The result of this restructuring is that VMware introduces a new Pluggable Storage Architecture PSA an open and flexible modular framework offered to third party vendors to allow integration of their multipathing software solutions with ESX ESXi platform In ESX ESXi 4 x and ESXi 5 x third party multipathing plug ins MPP can be installed along with default native multipathing plug ins NMP Multiple third party MPPs can also be installed and run parallel with NMPs As result third party MPPs replace behavior Native multipathing in VMware ESX ESXi 4 x and ESXi 5 x 175 Multipathing in VMware ESX ESXi Server of the default NMP module take over the control of path failover and implement load balancing operations of storage device pluggable sto
210. up on page 131 Required VMAX or Symmetrix FA bit settings on page 131 Supported storage arrays on page 132 Initiator settings on back end arrays on page 133 Host connectivity on page 133 Exporting virtual volumes to hosts on page 133 Front end paths on page 138 Configuring VMware ESX hosts to recognize VPLEX volumes on page 140 Refer to the following documents for configuration and administration operations 9 9 9 o EMC VPLEX with GeoSynchrony 5 0 Product Guide EMC VPLEX with GeoSynchrony 5 0 CLI Guide EMC VPLEX with GeoSynchrony 5 0 Configuration Guide EMC VPLEX Hardware Installation Guide EMC VPLEX Release Notes Implementation and Planning Best Practices for EMC VPLEX Technical Notes VPLEX online help available on the Management Console GUI EMC Host Connectivity Guide for VMware ESX Server Prerequisites VPLEX Procedure Generator available at https support emc com EMC Simple Support Matrix EMC VPLEX and GeoSynchrony available at http elabnavigator EMC com For the most up to date support information you should always refer to the EMC Support Matrix Before configuring VPLEX in the VMware ESX environment complete the following on each host Confirm that all necessary remediation has been completed This ensures that OS specific patches and software on all hosts in the VPLEX environment are at supported levels according to the EMC Support Matri
211. up to 128 or you can create a single full size extent using the entire capacity of the storage volume Note To claim storage volumes the GUI supports only the Claim Storage wizard which assigns a meaningful name to the storage volume Meaningful names help you associate a storage volume with a specific storage array and LUN on that array and helps during troubleshooting and performance analysis This section contains the following information Claiming and naming storage volumes on page 128 Extents on page 128 Devices on page 129 Distributed devices on page 129 Rule sets on page 129 Virtual volumes on page 130 Claiming and naming storage volumes You must claim storage volumes before you can use them in the cluster with the exception of the metadata volume which is created from an unclaimed storage volume Only after claiming a storage volume can you use it to create extents devices and then virtual volumes An extent is a slice range of blocks of a storage volume You can create a full size extent using the entire capacity of the storage volume or you can carve the storage volume up into several contiguous slices Extents are used to create devices and then virtual volumes EMC Host Connectivity Guide for VMware ESX Server Array Information and Configuration Devices Devices combine extents or other devices into one large device with specific RAID techniques such
212. used by the system to generate alerts when the Allocated Capacity passes the threshold An alert issued when the Full Threshold has been exceeded A thin pool whose thin pool enabled capacity is less than the sum of the reported sizes of the thin devices using the pool The ratio between the sum of the thin device subscribed capacity of all its bound thin devices and the associated thin pool enabled capacity This value is expressed as a percentage The ratio between the thin pool allocated capacity and thin pool enabled capacity This value is expressed as a percentage Virtual Provisioning Virtual Provisioning 187 Virtual Provisioning Traditional versus Virtual thin Provisioning Storage provisioning is the process of assigning storage resources to meet the capacity availability and performance needs of applications Traditional provisioning as shown in Figure 39 allocates the same amount of physical storage space on the storage system that is presented to the host operating system Host Application 2 Reported Application 3 Capacity Application reported Physically allocated Figure 39 Traditional storage provisioning With thin provisioning Figure 40 user capacity storage perceived by the application is larger than the actual allocated space on the storage system This simplifies the creation and allocation of storage capacity The provisioning decision is not bound by currently available p
…about how to lay out actual storage blocks across the disks in a Thin Pool. Less experienced storage administrators will benefit from not having to be directly involved in the details of configuring storage. The Mapping Service performs these functions, adhering to performance best practices.

Storage administrators can manage LUN space utilization with the following thresholds:
• Optional Full Threshold Alerts
• Optional Allocation Limit, to control a runaway process

Figure 43 on page 191 shows the LUN threshold and allocation limit.

[Figure 43 LUN threshold and allocation limit: Allocation Limit, Full Threshold, Allocated Capacity, and Host Subscribed Capacity.]

Storage administrators can monitor Thin Pools and add additional storage to them as required. In addition, they can reserve up to the host capacity of the Thin LUN. The criteria for reserving Thin LUN storage include:
• The storage system must have available shared storage.
• Space is guaranteed, but not necessarily allocated.
• Space is consumed (allocated) on a use-first basis.
• Space can be non-disruptively returned to the pool at any time.

[Figure 44 Reserved capacity: Host Subscribed Capacity, Reserved Capacity, and Allocated Capacity.]
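On VNX series and CLARiiON arrays, pool utilization and the Full Threshold can be inspected and tuned with Navisphere Secure CLI. A minimal sketch, assuming naviseccli is installed; the SP address 10.0.0.1 and pool ID 0 are placeholders, and option names should be verified against your FLARE or VNX OE release:

    # Show pool capacity, subscription, and threshold settings (address/ID are placeholders)
    naviseccli -h 10.0.0.1 storagepool -list -all

    # Raise the Full Threshold alert level to 75 percent for pool 0
    naviseccli -h 10.0.0.1 storagepool -modify -id 0 -prcntFullThreshold 75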
…failover mode set to 1.

Multipathing in VMware ESX 3.x with CLARiiON
For Native Multipathing software in ESX v3.x, the Most Recently Used (MRU) policy is strongly recommended for VNX series and CLARiiON systems, and for any other active-passive arrays. The reason for this recommendation is the higher likelihood of path thrashing occurring on active-passive types of arrays when using the Fixed failover policy.

Path thrashing occurs when two hosts are accessing the same LUN via different storage processors on a disk array. In the case of VNX series and CLARiiON, this may cause a loop where a LUN is trespassed from one storage processor to the other and back again. Such thrashing behavior causes poor performance, such as slow LUN access or no access to LUNs by the server.

With the MRU policy, VMware ESX Server hosts will use one active path to a storage processor for I/O processing. A trespass will be forced to the other storage processor only when no paths are available to the current storage processor, or there is no access to devices via that storage processor. Therefore, VMware ESX Server hosts using the MRU policy will quickly settle on accessing a LUN through the storage processor that is accessible to all hosts.

The MRU policy will not automatically restore the I/O back to the initial path. This limitation means that there is not an easy way to restore the I/O to the original path after that path recovers.
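On later ESX/ESXi releases, the equivalent path selection policy can be set per device from the CLI. A brief sketch; the device identifier naa.60060160... is a placeholder, so substitute the NAA ID of your LUN:

    # ESX/ESXi 4.x: set the NMP path selection policy for one device to MRU
    esxcli nmp device setpolicy --device naa.60060160... --psp VMW_PSP_MRU

    # ESXi 5.x and later syntax
    esxcli storage nmp device set --device naa.60060160... --psp VMW_PSP_MRU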
…After you add the custom drivers to the list, the installer prompts you to reinsert the ESX installation DVD and continue with the installation.

Update or add drivers on existing ESX installations using esxupdate (ESX only)
1. Power on the ESX host and log in to an account with administrator capability.
2. Place the driver CD in the CD-ROM drive of the ESX host.
3. Mount the driver CD.
4. Navigate to <cd mount point>/offline-bundle/ and locate the <offline-bundle>.zip file.
5. Run the esxupdate command to install drivers using the offline bundle:
   esxupdate --bundle=<offline-bundle>.zip update

Update or add drivers on existing ESX and ESXi installations using vihostupdate (both ESX and ESXi)
Prerequisite: Before you can update or patch an ESX/ESXi host from the command line, you must have access to a machine on which you can run the VMware vSphere Command-Line Interface (vSphere CLI). You can install the vSphere CLI on your Microsoft Windows or Linux system, or import the VMware vSphere Management Assistant (vMA) virtual appliance onto your ESX/ESXi host.

To update or add drivers on existing ESX and ESXi installations using the vihostupdate command:
1. Power on the ESX or ESXi host.
2. Place the driver CD in the CD-ROM drive of the host where either the vSphere CLI package is installed or vMA is hosted.
3. Mount the driver CD.
4. …
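The remaining vihostupdate steps are truncated in this copy, but the command typically takes the same offline bundle as esxupdate. A minimal sketch, assuming the vSphere CLI is installed; the host address 10.0.0.2 and the mount path are placeholders:

    # Install the offline bundle on the target host via the vSphere CLI (host/path are placeholders)
    vihostupdate --server 10.0.0.2 --username root --install --bundle /mnt/cdrom/offline-bundle/<offline-bundle>.zip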
…virtual switch.

[Add Network wizard, adapter selection page: "Select one or more adapters from the following list. If you select an adapter that is attached to another virtual switch, it will be removed from that virtual switch and added to this one." The list shows unclaimed adapters (vmnic1: down; vmnic2: 1000 Full, 0.0.0.1-127.255.255.254) and vSwitch0 adapters (vmnic0: 1000 Full, 0.0.0.1-127.255.255.254).]

6. Check the box next to each adapter that you want to connect to the vSwitch, and click Next.
7. Click Next.
8. Review the Summary and, if all of the settings are correct, click Finish.

Set up 1:1 VMkernel to network adapters mapping
To set up 1:1 VMkernel to network adapters mapping, complete the following steps:
1. In the vSwitch Properties window, shown in the next figure, select the VMkernel and click Edit.

[vSwitch Properties dialog, Ports tab: the VMkernel port (VMotion and IP storage) with IP address 12.10.10.239 and subnet mask 255.255.255.0, and its effective policies: Security (Promiscuous Mode: Reject; MAC Address Changes: Accept; Forged Transmits: Accept), Traffic Shaping (Average Bandwidth, Peak Bandwidth, Burst Size), Failover and Load Balancing (Load Balancing: Port ID; Network Failure Detection: Link Status only; Notify Switches: Yes).]
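The same vSwitch and VMkernel layout can also be built from the ESX service console. A minimal sketch using the IP settings shown in the figure; the vSwitch and port-group names (vSwitch1, iSCSI1) are placeholder choices:

    # Create a vSwitch, uplink one NIC, and add a VMkernel port for iSCSI traffic
    esxcfg-vswitch -a vSwitch1                                   # create the vSwitch
    esxcfg-vswitch -L vmnic1 vSwitch1                            # link vmnic1 as its uplink
    esxcfg-vswitch -A iSCSI1 vSwitch1                            # add a port group named iSCSI1
    esxcfg-vmknic -a -i 12.10.10.239 -n 255.255.255.0 iSCSI1     # VMkernel NIC with IP and mask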
…volume manually, or use the Add Multiple button to add multiple volumes at once.

[Figure example: a legacy Windows volume with an offset of 63.]

Disk alignment
Unaligned disk partitions may substantially impact I/O to the disk. With vSphere, datastores and virtual disks are aligned by default as they are created, so no further action is required to align these in ESX.

With virtual machine disk partitions within the virtual disk, alignment is determined by the guest OS. For example, with virtual machines running older Windows versions such as Windows 2003 or XP, disk partitions have a 63-sector offset by default; that is, they are unaligned. In such cases, it is necessary to ensure that for new virtual machines, guest OS disk partitions are aligned as they are formatted on the virtual disk. For existing virtual machines, consider using tools such as UBERalign to realign the disk partitions as required.

Virtual Machine formatting
For optimal performance, it is recommended to format virtual machines on XtremIO storage using Thick Provision Eager Zeroed. Using this format, the required space for the virtual machine is allocated and zeroed at creation time. However, with native XtremIO data reduction, thin provisioning, and VAAI support, no actual physical capacity allocation occurs. Thick Provision Eager Zeroed format advantages are: Logical…
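A virtual disk can also be created in this format from the CLI with vmkfstools. A minimal sketch; the size and datastore path are placeholders:

    # Create a 40 GB eager-zeroed thick virtual disk (size and path are placeholders)
    vmkfstools -c 40G -d eagerzeroedthick /vmfs/volumes/datastore1/vm1/vm1.vmdk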
…

[Figure 35 shows the VPLEX GUI Initiators view, listing registered initiator ports (for example, hpint070_3, hpint071_1, hpint072_1, hpint075_1) with their host type (default, hpux) and their port and node WWNs.]

Figure 35 Register initiators

Note: When initiators are registered, you can set their type, as indicated in Table 8 on page 140.

3. Add ports to the storage view, as shown in Figure 36.

[Figure 36 shows the Provision Storage wizard (1. Select Initiators, 2. Select Ports, 3. Select Virtual Volumes), where available VPLEX front-end ports such as P000000003B201E0B-A0-FC00 are added to the storage view.]
Confirm that each host is running VPLEX-supported failover software and has at least one available path to each VPLEX fabric.

Note: Always refer to the EMC Support Matrix for the most up-to-date support information and prerequisites.

• If a host is running EMC PowerPath, confirm that the load-balancing and failover policy is set to Adaptive (a CLI sketch follows the task list below).

IMPORTANT: For optimal performance in an application or database environment, ensure alignment of your host's operating system partitions to a 32 KB block boundary.

Provisioning and exporting storage
This section provides information for the following:
• VPLEX with GeoSynchrony v4.x on page 126
• VPLEX with GeoSynchrony v5.x on page 127

VPLEX with GeoSynchrony v4.x
To begin using VPLEX, you must provision and export storage so that hosts and applications can use the storage. Storage provisioning and exporting refers to the following tasks, which are required to take a storage volume from a storage array and make it visible to a host:
1. Discover available storage.
2. Claim and name storage volumes.
3. Create extents from the storage volumes.
4. Create devices from the extents.
5. Create virtual volumes on the devices.
6. Create storage views to allow hosts to view specific virtual volumes.
7. Register initiators with VPLEX.
8. Add initiators (hosts), virtual volumes, and VPLEX ports to the storage view.
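For the PowerPath bullet above, the policy can be checked and set with the powermt utility. A minimal sketch; applying the adaptive policy to all PowerPath devices here, so scope the dev= argument as appropriate for your hosts:

    # Display the current PowerPath policy and paths for all managed devices
    powermt display dev=all

    # Set the adaptive (ad) load-balancing and failover policy on all devices
    powermt set policy=ad dev=all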
The thin pool is enabled after the data devices are added. Note that there are four data devices in the pool, of which only three have been enabled:

symcfg -sid <sid> show -pool <poolName> -thin -detail

symcfg -sid 98 show -pool TP -thin -detail

    Symmetrix ID: 000194900098

    Pool Name:                        TP
    Pool Type:                        Thin
    Dev Emulation:                    FBA
    Dev Configuration:                2-Way Mir
    Pool State:                       Enabled
    # of Devices in Pool:             4
    # of Enabled Devices in Pool:     3
    Max Subscription Percent:         None

    Enabled Devices (3):

    Sym      Total      Alloc     Free      Full   Device
    Dev      Tracks     Tracks    Tracks    (%)    State
    1900     65544      0         65544     0      Enabled
    1901     65544      0         65544     0      Enabled
    1902     65544      0         65544     0      Enabled
    Tracks   196632     0         196632    0

    No Thin Devices Bound to Device Pool TP

Step 5: Bind thin devices to the thin pool
Several thin devices can be bound to a single thin pool; however, each thin device can only be bound to one thin pool. When binding several TDEVs to a thin pool, it is important to note the subscription limit of the pool. Oversubscription occurs when the capacity of the TDEVs bound to a thin pool is greater than the total capacity of the TDATs in the pool. Oversubscription creates a risk when the pool becomes full: writes to TDEVs will fail, and an I/O error is returned. Storage administrators have to monitor the pool threshold and prevent a pool-full condition. Data devices should be added to the pool before the pool fills up.
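The bind itself is performed with symconfigure, in the same style as the earlier create commands. A minimal sketch; the device range 1A00:1A01 is a placeholder, and TP is the pool created above:

    # Bind two thin devices (placeholder IDs) to thin pool TP
    symconfigure -sid 98 -cmd "bind tdev 1A00:1A01 to pool TP;" commit -nop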
Configure director bit settings using SMC
You can configure director bit settings for the ESX host by using the SMC. Director bit settings can easily be set by configuring Set Port Attributes for a specific director port, as shown in Figure 20.

[Figure 20 shows the Set Port Attributes dialog box (Symmetrix 000190300963, director port FA-26:0), with checkboxes for flags such as VCM State, Common Serial Number, Unique WWN, Init Point to Point, Volume Set Addressing, Avoid Reset Broadcast, Negotiate Reset, Enable Auto Negotiate, Environ Set, Disable Q Reset on UA, Soft Reset, SCSI-3, SPC2 Protocol Version, HP3000 Mode, Enable AS400, and Open VMS, plus Add to Config Session List, Cancel, and Help buttons.]

Figure 20 Set Port Attributes dialog box

Note: The particular director port must be taken offline before performing a configuration-change session. Once the change is completed and the port is set back online, the SMC console must be refreshed in order to view the changes in the Properties view.

For more information about the Symmetrix Management Console, refer to the following documents, available at http://support.emc.com:
• Symmetrix Management Console Release Notes
• Symmetrix Management Console Installation Guide
• Symmetrix Management Console…
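The same director flags can be set from Solutions Enabler instead of the SMC. A hedged sketch only: the flag names shown and the director:port value are illustrative assumptions, so confirm them against the symconfigure documentation for your Solutions Enabler release before use, and remember the Note above about taking the port offline first:

    # Set FA port flags via symconfigure (flag names and port are illustrative placeholders)
    symconfigure -sid 0963 -cmd "set port 26:0 SCSI_3=enable, SPC2_Protocol_Version=enable;" commit -nop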
