
Nutanix Field Installation Guide


Contents

1. ipmi_password: Specifies the password for the IPMI console. The default password is ADMIN.
hypervisor_ip_start: Sets the starting IP address for the hypervisor address range.
hypervisor_netmask: Sets the hypervisor netmask value. This is a global setting across all nodes.
hypervisor_gateway: Specifies the IP address of the router for the hypervisor range.
hypervisor_nameserver: Specifies the IP address of the name server in the hypervisor range. The default value is 8.8.8.8.
hypervisor_password: Specifies the password for the hypervisor console. The default and required value is nutanix/4u.
6. Verify that the IP addresses were assigned correctly by pointing a web browser at the expected IPMI IP address. It may take up to a minute after the completion of assignment for the IPMI interfaces to be available.
7. Review the generated_orchestrator_cfg.txt file for accuracy and completeness. A summary of the configuration information is written to the generated_orchestrator_cfg.txt file, which can be used as the basis for an orchestrator config file (see Completing Installation on page 22). You can manually update this file as needed.
Figure: Terminal window showing the contents of generated_orchestrator_cfg.txt (global ipmi_user, ipmi_password, hypervisor_netmask, hypervisor_gateway, hypervisor_nameserver, and hypervisor_password settings, followed by one entry per node with its IPMI address, node_position, and hypervisor_ip)
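The option names above can be combined into a single discover-and-configure sequence. The sketch below is illustrative only: the exact option syntax (shown here as name=value pairs) may differ between Orchestrator releases, and the addresses are example values taken from the sample configuration later in this guide.
# Sketch: discover unconfigured nodes, then assign IPMI addresses (verify option syntax for your release)
cd /home/nutanix/orchestrator
discovery.py discover                          # writes discovered_nodes.txt
discovery.py ipmi_ip_start=10.1.60.33 ipmi_netmask=255.255.255.0 ipmi_gateway=10.1.60.1 hypervisor_ip_start=10.1.60.41 hypervisor_netmask=255.255.255.0 hypervisor_gateway=10.1.60.1 configure
cat generated_orchestrator_cfg.txt             # review the generated summary (step 7 above)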
2. Warning: This disk will be repartitioned.
Figure: ESXi Installation Confirmation Screen
The installation begins and a dynamic progress bar appears.
16. When the Installation Complete screen appears, go back to the Virtual Storage screen (see step 9), click the Plug Out button, and then return to the Installation Complete screen and click Reboot. After the system reboots, you can install the NOS Controller VM and provision the hypervisor (see Installing the Controller VM on page 28).
Installing the Controller VM
This procedure describes how to install the NOS Controller VM and provision the previously installed hypervisor on a single node in a cluster in the field.
Before you begin:
• Install a hypervisor on the node (see Installing a Hypervisor on page 25).
To install the Controller VM and provision the hypervisor on a new or replacement node, do the following:
1. Copy the appropriate Phoenix ISO image file from the Orchestrator portal (see Orchestrator Portal on page 31) to a temporary folder on the workstation. You can download it to the same folder as the hypervisor ISO image. Phoenix is the name of the installation tool used in this process. There is a Phoenix ISO image file for each supported NOS release. See the Phoenix Releases section in Orchestrator Portal on page 31 for a list of the available Phoenix ISO images.
Caution: Phoenix release 1.0.1 is the earliest supported release; do not use a Phoenix ISO image from an earlier release.
3. NUTANIX Field Installation Guide
Orchestrator 1.0
12-Feb-2014
Copyright
Copyright 2014 Nutanix, Inc. Nutanix, Inc., 1740 Technology Drive, Suite 150, San Jose, CA 95110. All rights reserved. This product is protected by U.S. and international copyright and intellectual property laws. Nutanix is a trademark of Nutanix, Inc. in the United States and/or other jurisdictions. All other marks and names mentioned herein may be trademarks of their respective companies.
License
The provision of this software to you does not grant any licenses or other rights under any Microsoft patents with respect to anything other than the file server implementation portion of the binaries for this software, including no licenses or any other rights in any hardware or any devices or software that are used to communicate with or in connection with this software.
Conventions
• variable_value: The action depends on a value that is unique to your environment.
• ncli> command: The commands are executed in the Nutanix nCLI.
• user@host$ command: The commands are executed as a non-privileged user (such as nutanix) in the system shell.
• root@host# command: The commands are executed as the root user in the hypervisor host (vSphere or KVM) shell.
• > command: The commands are executed in the Hyper-V host shell.
• output: The information is displayed as output from a command or in a log file.
Default Cluster Credentials
4. hypervisor_nameserver=8.8.8.8
hypervisor_password=nutanix/4u
10.1.60.33
    hypervisor_ip=10.1.60.41
10.1.60.34
    hypervisor_ip=10.1.60.42
10.1.60.35
    hypervisor_ip=10.1.60.43
10.1.60.36
    hypervisor_ip=10.1.60.44
Setting IPMI Static IP Address
You can assign a static IP address for an IPMI port by resetting the BIOS configuration. To configure a static IP address for the IPMI port on a node, do the following:
1. Connect a VGA monitor and USB keyboard to the node.
2. Power on the node.
3. Press the Delete key during boot up when prompted to enter the BIOS setup mode. The BIOS Setup Utility screen appears.
4. Click the IPMI tab to display the IPMI screen.
5. Select BMC Network Configuration and press the Enter key.
6. Select Update IPMI LAN Configuration, press Enter, and then select Yes in the pop-up window.
7. Select Configuration Address Source, press Enter, and then select Static in the pop-up window.
8. Select Station IP Address, press Enter, and then enter the IP address for the IPMI port on that node in the pop-up window.
9. Select Subnet Mask, press Enter, and then enter the corresponding subnet mask value in the pop-up window.
5. Figure: Nutanix Installer Progress Bars (per-node progress bars with log messages such as "Starting SMCIPMITool", "Attaching virtual media", "Resetting node", "Powering up node", "esx_installing", and "esx_rebooting")
When processing is complete, a green check mark appears next to the node name if IPMI configuration and imaging was successful, or a red X appears if it was not. At this point, do one of the following:
• Status: There is a green check mark next to every node. This means IPMI configuration and imaging (both hypervisor and NOS Controller VM) across all the nodes in the cluster was successful. At this point you can configure the cluster normally, as you would after receiving pre-installed nodes from the factory. See the Nutanix Setup Guide for instructions on configuring a Nutanix cluster.
• Status: At least one node has a red check mark next to the IPMI address field. This means the installation failed at the IPMI configuration step. To correct this problem, see Fixing IPMI Configuration Problems on page 15.
• Status: At least one node has a red check mark next to the hypervisor address field. This means IPMI configuration was successful across the cluster, but imaging failed.
6. Figure: Orchestrator VM Network Configuration (fields for Name, Device, Use DHCP, Static IP, Netmask, Default gateway IP, Primary DNS Server, and Secondary DNS Server; use Tab/Alt-Tab to move between elements, Space to select, and F12 for the next screen)
f. Click the Save button in the Select a Device box and the Save & Quit button in the Select Action box. This saves the configuration and closes the terminal window.
16. Copy the desired Phoenix ISO image file from the Orchestrator portal (see Orchestrator Portal on page 31) to the /home/nutanix/orchestrator/isos/phoenix folder. Phoenix is the name of another installation tool used in this process. There is a Phoenix ISO image file for each supported NOS release. See the Phoenix Releases section in Orchestrator Portal on page 31 for a list of the available Phoenix ISO images.
Caution: Phoenix release 1.0.1 is the earliest supported release; do not use a Phoenix ISO image from an earlier release.
17. Download the desired hypervisor ISO image to the /home/nutanix/orchestrator/isos/hypervisor folder. Customers must provide the ESXi ISO image from their purchased copy; it is not provided by Nutanix. Check with your VMware representative or download it from the VMware support site (http://www.vmware.com/support.html). The following table lists the supported hypervisor images.
7. The Hypervisor Type, Hypervisor Version, Node Model, and Nutanix Software fields cannot be edited.
b. Do one of the following:
• If you are imaging a U node, select both Clean Install Hypervisor and Clean Install SVM.
• If you are imaging an X node, select Clean Install Hypervisor only.
A U node is a fully configured node which can be added to a cluster. Both the Controller VM and the hypervisor must be installed in a new U node. An X node does not include a NIC card or disks; it is the appropriate model when replacing an existing node. The disks and NIC are transferred from the old node, and only the hypervisor needs to be installed on the X node.
Caution: Do not select Clean Install SVM if you are replacing a node (X node), because this option cleans the disks as part of the process, which means existing data will be lost.
c. When all the fields are correct, click the Start button.
Figure: Nutanix Installer screen in the IPMI remote console (Hypervisor Type, Hypervisor Version, Node Model, Node Position, Nutanix Software, Block ID, Node Serial, and Node Cluster ID fields, with Clean Install Hypervisor and Clean Install SVM checkboxes toggled with the spacebar)
Installation begins and takes about 20 minutes.
8. 2. In the IPMI web console, attach the Phoenix ISO to the node as follows:
a. Go to Remote Control and click Launch Console if it is not already launched. Accept any security warnings to start the console.
b. In the console, click Media > Virtual Media Wizard.
c. Click Browse next to ISO Image and select the ISO file.
d. Click Connect CD/DVD.
Figure: Virtual Media window (Floppy Image and CD Media panels with Connect/Disconnect buttons and the target drive connection status)
e. Go to Remote Control > Power Control.
f. Select Reset Server and click Perform Action. The host restarts from the ISO.
3. At the prompt, enter Y to accept the factory configuration, or N if the node position value is not correct.
Figure: Console prompt showing the detected factory configuration (Block ID, Node UUID, Node Serial, Node Position, Cluster ID, and Model) and asking "Would you like to re-use this information? (Y/N)"
4. Do the following in the Nutanix Installer configuration screen:
a. Review the values in the Block ID, Node Serial, and Node Cluster ID fields (and Node Model, if you entered N in the previous step) and update them if they are not correct.
9. Configuring the environment for installation requires setting up network connections, installing Oracle VM VirtualBox on the workstation, downloading ISO images, and using VirtualBox to configure various parameters. To prepare the environment for installation, do the following:
1. Connect the first 1 GbE network interface of each node (middle RJ-45 interface) to a 1 GbE Ethernet switch. The nodes must be connected through shared IPMI ports. Another option is to connect a 10 GbE port and the IPMI 10/100 port (or the first 1 GbE port). This provides more bandwidth for installation but requires additional cabling.
Figure: Port Locations (NX-3050): 10 GbE ports on NIC, IPMI port, 1 GbE ports, USB ports, and VGA port
2. Connect the installation workstation (the laptop or desktop machine used for this installation) to the same 1 GbE switch as the nodes. The installation workstation requires at least 3 GB of memory (Orchestrator VM size plus 1 GB), 25 GB of disk space (preferably SSD), and a physical wired network adapter.
3. Go to the Orchestrator portal (see Orchestrator Portal on page 31) and copy the orchestrator_bundle_version.tar.gz file, using the scp or wget copy utility, to a temporary directory on the installation workstation. The version in the file name is the version number, for example orchestrator_bundle_1.0.tar.gz for version 1.0.
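If wget is used for step 3, the bundle can also be checked against its companion MD5 file before extraction. This is only a sketch: the portal URL path is a placeholder, and it assumes the .md5sum.txt file uses the standard "checksum  filename" format accepted by md5sum -c.
# Sketch: download the Orchestrator bundle and verify it (URL path is a placeholder)
wget http://releases.nutanix.com/<path-shown-on-portal>/orchestrator_bundle_1.0.tar.gz
wget http://releases.nutanix.com/<path-shown-on-portal>/orchestrator_bundle_1.0.md5sum.txt
md5sum -c orchestrator_bundle_1.0.md5sum.txt   # expect: orchestrator_bundle_1.0.tar.gz: OK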
10. Parameter / Value
Global Parameters:
• IPMI netmask
• IPMI gateway IP address
• IPMI username (default is ADMIN)
• IPMI password (default is ADMIN)
• Hypervisor netmask
• Hypervisor gateway
• Hypervisor name server (DNS server) IP address
Node-Specific Parameters:
• Starting IP address for IPMI port range
• Starting IP address for hypervisor port range
To install the Controller VM and hypervisor on the cluster nodes, do the following:
1. Click the Nutanix Orchestrator icon on the Orchestrator VM desktop to start the Nutanix Installer GUI.
Note: See Preparing Installation Environment on page 7 if Oracle VM VirtualBox is not started or the Nutanix Orchestrator VM is not running currently. You can also start the Nutanix Installer GUI by opening a web browser and entering http://localhost:8000/gui/index.html.
Figure: Orchestrator VM Desktop (icons for the nutanix home folder, the orchestrator folder, set_orchestrator_ip_address, Terminal, and Nutanix Orchestrator)
The Nutanix Installer screen appears. The screen contains three sections: global hypervisor and IPMI details at the top, node information in the middle, and ISO image links at the bottom. Upon opening the Nutanix Installer screen, Orchestrator begins searching the network for unconfigured Nutanix nodes and displays information in the middle section about the discovered nodes. The discovery process can take several minutes or longer if the cluster is large.
11. 8. Start Oracle VM VirtualBox.
Figure: VirtualBox Welcome Screen
9. Click the Machine option of the main menu and then select Add from the pull-down list.
10. Navigate to the Orchestrator_VM folder, select the Orchestrator_vm_version file, and then click Open. The version in the file name is the version number, for example Orchestrator_vm_1 for version 1.0.
11. Select Nutanix_Installer (which is the Orchestrator VM) in the left panel of the VirtualBox screen.
12. Click Settings (left panel) and do the following:
a. Click Network in the left panel of the Settings screen.
b. Click the Adapter 1 tab (right panel).
c. Verify the following items:
• The Enable Network Adapter box is checked (enabled).
• The Attached to field is set to Bridged Adapter.
• Name is set to your workstation's physical wired network adapter (not a wireless adapter).
d. When the values are correct, click the OK button at the bottom of the screen to save any changes and exit the Settings screen.
12. Nodes are imaged in parallel, and the imaging process takes about 45 minutes.
Note: Simultaneous processing is limited to a maximum of eight nodes. If the cluster contains more than eight nodes, the total processing time is about 45 minutes for each group of eight nodes.
Processing occurs in two stages. First, the IPMI port addresses are configured. If IPMI port addressing is successful, the nodes are imaged. No progress information appears in the GUI during the IPMI port configuration processing, which can take several minutes or longer depending on the size of the cluster. You can watch server progress by viewing the service log file in a terminal:
cd /home/nutanix/orchestrator/log && tail -f service.log
When processing moves to node imaging, the GUI displays dynamic status messages and a progress bar for each node. A blue bar indicates good progress; a red bar indicates a problem. Processing messages for starting, installing, rebooting, and succeeded (installed) appear during each stage. Click on the progress bar for a node to display the log file for that node on the right.
13. 2. Complete the installation. This requires creating a configuration file and then running the Orchestrator installation tool (see Completing Installation on page 22).
Preparing Nutanix Nodes
Manually imaging a cluster in the field requires first configuring the IPMI ports of all the new nodes in the cluster to a static IP address. The Orchestrator installation tool includes a discovery.py utility that can find the nodes on a LAN or VLAN and configure their IPMI addresses. To configure the IPMI port addresses in a cluster using this utility, do the following:
1. Power on all the nodes in the cluster. Wait at least 10 minutes after powering up a node for the hypervisor and Controller VM to finish booting.
2. In the Nutanix Orchestrator VM, right-click on the desktop and select Open in Terminal from the pull-down menu.
Note: See Preparing Installation Environment on page 7 if Oracle VM VirtualBox is not started or the Nutanix Orchestrator VM is not running currently.
3. In the terminal window, go to the Orchestrator directory:
cd /home/nutanix/orchestrator
This directory contains Orchestrator-related files.
4. Enter the following command to discover any new nodes in a cluster:
discovery.py discover
This command finds nodes on the same LAN or VLAN segment that are not yet part of a cluster
14. In either case, the installation is divided into two steps:
1. Install the desired hypervisor version (see Installing a Hypervisor on page 25).
2. Install the NOS Controller VM and provision the hypervisor (see Installing the Controller VM on page 28).
Installing a Hypervisor
This procedure describes how to install a hypervisor on a single node in a cluster in the field. To install a hypervisor on a new or replacement node in the field, do the following:
1. Connect the IPMI port on that node to the network. A 1 or 10 GbE port connection is not required for imaging the node.
2. Assign an IP address (static or DHCP) to the IPMI interface on the node. To assign a static address, see Setting IPMI Static IP Address on page 34.
3. Download the appropriate hypervisor ISO image to a temporary folder on the workstation. Customers must provide the ESXi ISO image from their purchased copy; it is not provided by Nutanix. Check with your VMware representative or download it from the VMware support site (http://www.vmware.com/support.html). The following table lists the supported hypervisor images.
Hypervisor ISO Images (File Name / MD5 Sum / Hypervisor Version):
• VMware-VMvisor-Installer-5.0.0.update02-914586.x86_64.iso / fa6a00a3f0dd0cd1a677f69a236611e2 / ESXi 5.0U2
• VMware-VMvisor-Installer-5.1.0.update01-1065491.x86_64.iso / 2cd15e433aaacc7638c706e013dd673a / ESXi 5.1U1
• VMware-VMvisor-Installer-5.5.0-1331820.x86_64.iso / 9aaa9e0daa424a7021c7dc13db7b9409 / ESXi 5.5
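Before attaching an ISO, it is worth confirming that the downloaded file matches the checksum in the table above. A minimal check (the expected value shown is the ESXi 5.1U1 entry from the table):
md5sum VMware-VMvisor-Installer-5.1.0.update01-1065491.x86_64.iso
# compare the output against 2cd15e433aaacc7638c706e013dd673a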
15. If you want to use the same Orchestrator VM to image another cluster, the persistent information must be removed before attempting another installation. To remove the persistent information after an installation, do the following:
1. Open a terminal window and go to the Orchestrator home directory:
cd /home/nutanix/orchestrator
2. Remove the persistent information by entering the following command:
rm persisted_config.json
3. Restart the Orchestrator service by entering the following command:
sudo /etc/init.d/orchestrator_service restart
Imaging a Cluster (manual method)
This procedure describes how to manually install the NOS Controller VM and selected hypervisor on all the new nodes in a cluster from an ISO image on a workstation.
Before you begin:
• Physically install the Nutanix cluster at your site. See the Physical Installation Guide for your model type for installation instructions.
• Set up the installation environment (see Preparing Installation Environment on page 7).
Note: The standard method (see Imaging a Cluster (standard method) on page 12) is recommended in most cases. The manual procedure is available when the standard method is not an option.
Manually installing the Controller VM and hypervisor on the cluster nodes involves the following tasks:
1. Prepare the cluster nodes for installation. This requires identifying all the nodes in the cluster and then configuring a static IP address for the IPMI interface on each node (see Preparing Nutanix Nodes on page 19).
16. You can also retry imaging by clicking the Retry Imaging Failed Nodes link at the top of the status bar page.
Figure: Nutanix Installer Imaging Problem (configuration screen), showing the block/node list with IPMI and hypervisor IP addresses, the hypervisor ISO image selection, and the Image Nodes button
The imaging process starts again for the failed node(s).
Figure: Nutanix Installer Imaging Problem (retry screen), showing the installation progress bar and the log pane for the node being re-imaged
Cleaning Up After Installation
This procedure describes how to return the Orchestrator VM to a fresh state after an installation. Some information persists after imaging a cluster through Orchestrator.
17. If you cannot fix the imaging problem for one or more of the nodes, you can image those nodes one at a time (see Imaging a Node on page 25).
In the following example, a node failed to image successfully because it exceeded the installation timeout period. This was because the IPMI port cable was disconnected during installation. The progress bar turned red and a message about the problem was written to the log.
Figure: Nutanix Installer Imaging Problem (progress screen), showing one failed node alongside nodes that completed firstboot, with the log pane reporting "Installation failed" and a traceback from orchestrator_tools.py
Clicking the Back to Configuration link at the top redisplays the original Nutanix Installer screen, updated to show that 192.168.20.102 failed to image successfully. After fixing the problem, click the Image Nodes button to image that node again.
18. Note: If the IPMI port address on any of the nodes was not configured successfully using this utility, you can configure that address manually by going into the BIOS on the node (see Setting IPMI Static IP Address on page 34).
Completing Installation
Completing a manual installation involves creating a configuration file and then running the Orchestrator installation tool using that configuration with the appropriate NOS and hypervisor image files. To complete imaging a cluster manually, do the following:
1. In a terminal window on the Orchestrator VM, go to the Orchestrator directory (/home/nutanix/orchestrator) and copy generated_orchestrator_cfg.txt (see Preparing Nutanix Nodes on page 19) as orchestrator.config. Orchestrator targets the nodes specified in orchestrator.config. If there is an existing orchestrator.config, save it before overwriting it with the generated_orchestrator_cfg.txt contents.
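The copy in step 1 can be done from the same terminal. A minimal sketch, assuming the configuration file name orchestrator.config used elsewhere in this guide:
cd /home/nutanix/orchestrator
# keep any existing configuration before overwriting it
[ -f orchestrator.config ] && cp orchestrator.config orchestrator.config.bak
cp generated_orchestrator_cfg.txt orchestrator.config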
19. Figure: Nutanix_Installer VM Desktop (set_orchestrator_ip_address icon on the Orchestrator VM desktop)
b. In the pop-up window, click the Run in Terminal button.
Figure: Orchestrator VM Terminal Window prompt ("Do you want to run set_orchestrator_ip_address, or display its contents? set_orchestrator_ip_address is an executable text file." with Run in Terminal, Display, and Cancel buttons)
c. In the Select Action box in the terminal window, select Device Configuration.
Figure: Orchestrator VM Action Box (Device configuration and DNS configuration entries)
d. In the Select a Device box, select eth0.
Figure: Orchestrator VM Device Configuration Box
e. In the Network Configuration box, remove the asterisk in the Use DHCP field (which is set by default), enter appropriate addresses in the Static IP, Netmask, and Default gateway IP fields, and then click the OK button.
20. 5. The Orchestrator (or Phoenix) files screen for that release appears. For Phoenix, you must first select a hypervisor before the files screen appears. Access or download the desired files from this screen.
Figure: Orchestrator Files Screen (direct download links for orchestrator_bundle_1.0.md5sum.txt and orchestrator_bundle_1.0.tar.gz)
The following table describes the files for each Orchestrator release.
Orchestrator Files (Orchestrator Release 1.0):
• orchestrator_bundle_1.0.tar.gz: This is the compressed tar file of the Orchestrator components. It contains the Oracle VirtualBox installer and the components needed for the Orchestrator VM (vmdk, vbox, and vmx VM description files).
• orchestrator_bundle_1.0.md5sum.txt: This is the associated MD5 file to validate against after downloading the Orchestrator bundle.
Phoenix Releases
Caution: Phoenix release 1.0.1 is the earliest supported release; do not use a Phoenix ISO image from an earlier release.
21. 4. Open a Web browser to the IPMI IP address of the node to be imaged.
5. Enter the IPMI login credentials in the login screen. The default value for both user name and password is ADMIN (upper case).
Figure: IPMI Console Login Screen
The IPMI console main screen appears.
Figure: IPMI Console Screen (System Summary with firmware revision, IP address, BMC and LAN MAC addresses, and a Remote Console Preview)
6. Select Console Redirection from the Remote Console drop-down list of the main menu and then click the Launch Console button.
22. The following table describes the files for each supported Phoenix release.
Phoenix Files (Phoenix Release 1.0.1):
• phoenix-1.0.1_ESX_NOS-3.1.3.3-xxxxxx.iso: This is the Phoenix ISO image for NOS release 3.1.3.3 when the hypervisor is ESXi. The xxxxxx part of the name is replaced by a build number.
• phoenix-1.0.1_ESX_NOS-3.1.3.3-xxxxxx.md5sum.txt: This is the associated MD5 file to validate against after downloading the Phoenix ISO image for NOS release 3.1.3.3.
• phoenix-1.0.1_ESX_NOS-3.5.2.1-xxxxxx.iso: This is the Phoenix ISO image for NOS release 3.5.2.1 when the hypervisor is ESXi.
• phoenix-1.0.1_ESX_NOS-3.5.2.1-xxxxxx.md5sum.txt: This is the associated MD5 file to validate against after downloading the Phoenix ISO image for NOS release 3.5.2.1.
• phoenix-1.0-factory_KVM_NOS-3.5.2-xxxxxx.iso: This is the Phoenix ISO for NOS release 3.5.2 when the hypervisor is KVM.
Orchestrator Configuration File
Cluster information used for imaging the nodes is stored in a configuration file called orchestrator.config. Contents of the orchestrator.config file are either generated automatically (see Imaging a Cluster (standard method) on page 12) or entered manually by the user (see Imaging a Cluster (manual method) on page 19). The following is a sample orchestrator.config file for four nodes:
ipmi_user=ADMIN
ipmi_password=ADMIN
hypervisor_netmask=255.255.255.0
hypervisor_gateway=10.1.60.1
23. a. In the top line of the IPMI IP column, enter a starting IP address. The entered address is assigned to the IPMI port of the first node, and consecutive IP addresses (starting from the entered address) are assigned automatically to the remaining nodes. Discovered nodes are sorted first by block ID and then by position, so IP assignments are sequential. If you do not want all addresses to be consecutive, you can change the IP address for specific nodes by updating the address in the appropriate fields for those nodes.
Note: Automatic assignment is not used for addresses ending in 0, 1, 254, or 255, because such addresses are commonly reserved by network administrators.
b. Repeat the previous step for the IP addresses in the Hypervisor IP column.
In the bottom section of the screen, do the following:
a. In the Phoenix ISO Image field, select the Phoenix ISO image you downloaded previously from the pull-down list (see Preparing Installation Environment on page 7).
Note: If you do not see the desired Phoenix ISO image (or hypervisor ISO image in the next step) in the list, click the Refresh button to display the current list of available images.
b. In the Hypervisor ISO Image field, select the hypervisor ISO image you downloaded previously from the pull-down list (see Preparing Installation Environment on page 7).
When all the fields are correct, click the Run Installation button. The imaging process begins.
24. 6. Enter the following command to start the installation:
orchestrator orchestrator.config esx esx_iso_filename phoenix phoenix_iso_filename
Replace esx_iso_filename with the full (absolute) path name of the target ESXi ISO image file and phoenix_iso_filename with the full (absolute) path name of the target Phoenix ISO image file.
Monitor the progress of the NOS and ESXi installation from the Orchestrator output and/or a VGA monitor connected to the physical nodes. The entire installation process takes approximately 45 minutes. Installation runs in parallel across all nodes; however, when there are more than eight nodes in the cluster, installation is 45 minutes per block of eight nodes.
The following is sample output during the installation process. A status message is printed every 20 seconds indicating the number of nodes in each state. The sum of the numbers on a line should be the total number of nodes.
nutanix@localhost orchestrator$ orchestrator thor.config esx VMware-VMvisor-Installer-5.1.0.update01-1065491.x86_64.iso phoenix phoenix_esx_dev_orchestrator_3.5.2_01312014.iso
Detecting node classes
Processing ESX iso
Processing phoenix iso
Installation in progress. Will report aggregate node status every 20 seconds
Node status: starting 4
Node status: starting 4
Node status: esx_installing 1, starting 3
Node status: esx_rebooting 1, esx_installing 3
Node status: esx_installed 3, svm_download 1
25. Wait for the discovery process to complete before proceeding.
Figure: Nutanix Installer Full Screen (global IPMI and hypervisor fields at the top, the discovered block/node list with IPMI IP and Hypervisor IP columns in the middle, and the Phoenix and hypervisor ISO image selections with the Run Installation button at the bottom)
2. In the top section of the screen, enter appropriate values in the indicated fields.
Note: The parameters in this section are global and will apply to all the discovered nodes.
a. IPMI Netmask: Enter the IPMI netmask value.
b. IPMI Gateway: Enter an IP address for the gateway.
c. IPMI Username: Enter the IPMI user name. The default user name is ADMIN.
d. IPMI Password: Enter the IPMI password. The default password is ADMIN.
e. Hypervisor Netmask: Enter the hypervisor netmask value.
f. Hypervisor Gateway: Enter an IP address for the gateway.
g. Hypervisor Name Server: Enter the IP address of the DNS name server.
In the middle section of the screen, do the following. The middle section includes columns for the block, node, IPMI IP address, and hypervisor IP address. A section is displayed for each discovered block, with lines for each node in that block. The size of the middle section varies and can be quite large when many blocks are discovered.
26. Hypervisor ISO Images (File Name / MD5 Sum / Hypervisor Version):
• VMware-VMvisor-Installer-5.0.0.update02-914586.x86_64.iso / fa6a00a3f0dd0cd1a677f69a236611e2 / ESXi 5.0U2
• VMware-VMvisor-Installer-5.1.0.update01-1065491.x86_64.iso / 2cd15e433aaacc7638c706e013dd673a / ESXi 5.1U1
• VMware-VMvisor-Installer-5.5.0-1331820.x86_64.iso / 9aaa9e0daa424a7021c7dc13db7b9409 / ESXi 5.5
Imaging a Cluster (standard method)
This procedure describes how to install the NOS Controller VM and a selected hypervisor on all the new nodes in a cluster from an ISO image on a workstation.
Before you begin:
• Physically install the Nutanix cluster at your site. See the Physical Installation Guide for your model type for installation instructions.
• Set up the installation environment (see Preparing Installation Environment on page 7).
• Have ready the appropriate IP address and netmask information needed for installation. You can use the following table to record the information prior to installation.
Note: The Orchestrator IP address set previously assumed a public network in order to download the appropriate files. If you are imaging the cluster on a different (typically private) network in which the current address is no longer correct, repeat step 15 in Preparing Installation Environment on page 7 to configure a new static IP address for the Orchestrator VM.
Installation Parameter Values
27. and writes their names and IPv6 addresses to the file discovered_nodes.txt, located in /home/nutanix/orchestrator.
The discovery process typically takes a few minutes, depending on the size of the network. For example, it took about 10 minutes to discover 60 nodes on a crowded VLAN.
Figure: Terminal window showing a discovery.py discover run (per-node "Discovered block <block ID> node <position> at <IPv6 address>" messages, followed by "Wrote nodes to discovered_nodes.txt" and the contents of discovered_nodes.txt, which lists block ID, model, node position, IPv6 address, and interface for each node)
28. and then review and update (if needed) the output file.
b. Run the IPMI address configuration command.
3. Create a configuration file that provides IPMI and hypervisor information for each node. The discovery.py utility produces an initial version of this file.
4. Invoke Orchestrator from the command line using the desired Phoenix and hypervisor ISO image files.
• Imaging a Node on page 25:
1. Download the Phoenix and hypervisor ISO image files to a workstation.
2. Sign into the IPMI web console for that node, attach the hypervisor ISO image file, provide the required node information, and then restart the node.
3. Repeat step 2 for the Phoenix ISO image file.
Field installation can be used to cleanly install new nodes (blocks) in a cluster or to install a different hypervisor on a single node. It should not be used to upgrade the hypervisor or switch hypervisors of nodes in an existing cluster. The following table lists the hypervisors that can be installed through this method.
Supported Hypervisors (by product series): NX-1000, NX-2000, NX-3000, NX-3050, NX-6000, and NX-7000; hypervisor versions ESXi 5.0U2, ESXi 5.1U1, and ESXi 5.5.
Preparing Installation Environment
Imaging a cluster in the field requires first installing certain tools and setting the environment to run those tools. Installation is performed from a workstation (laptop or desktop machine) with access to the IPMI interfaces of the nodes in the cluster.
29. Figure: ESXi Installation Screen (VMware ESXi 5.1.0 Installer welcome page; select the operation to perform)
12. In the Select a Disk page, select the SATADOM as the storage device, click Continue, and then click OK in the confirmation window.
Figure: ESXi Device Selection Screen (storage devices such as INTEL SSDSC2BA40 and ST91000640NS drives with their capacities)
13. In the keyboard layout screen, select a layout (such as US Default) and then click Continue.
14. In the root password screen, enter nutanix/4u as the root password.
Note: The root password must be nutanix/4u or the installation will fail.
15. Review the information on the Install Confirm screen and then click Install.
30. Discovery.py Command Options:
ipmi_ip_start: Sets the starting IP address that will be assigned to the IPMI interfaces. Addresses increment by one for each additional node. This option is required.
ipmi_netmask: Sets the IPMI netmask value. This is a global setting across all nodes. This option is required.
ipmi_gateway: Specifies the IP address of the router for the IPMI range.
ipmi_user: Specifies the user name for the IPMI console. The default user name is ADMIN.
31. Node status: phoenix complete 1, imaging 3
Node status: phoenix complete 3, firstboot complete 1
Installation was successful on all nodes. Run time: 38.2 minutes
nutanix@localhost orchestrator$
7. When the installation completes and Orchestrator exits, wait for one final reboot of ESXi before using the nodes. This final reboot should occur within a few minutes after Orchestrator exits.
8. After a successful installation, configure the cluster as described in the Nutanix Setup Guide.
9. If an error occurs during installation, do the following:
a. Check the /home/nutanix/orchestrator/log directory for HTTP and node-level error logs and make adjustments as indicated in the logs.
b. If you are unable to correct the problem(s), re-run Orchestrator for just the failed nodes by editing the Orchestrator configuration file and removing the node information for the nodes that were successfully installed. You can comment out a line by starting it with a # sign.
Imaging a Node
This procedure describes how to install the NOS Controller VM and selected hypervisor on a new or replacement node from an ISO image on a workstation (laptop or desktop machine).
Before you begin:
• If you are adding a new node, physically install that node at your site. See the Physical Installation Guide for your model type for installation instructions.
Imaging a new or replacement node can be done either through the IPMI interface (network connection required) or through a direct-attached USB (no network connection required).
32. Orchestrator is the name of the multi-node installation tool. Each Orchestrator bundle file includes the following:
• Nutanix Orchestrator installation folder: VM descriptions in vbox and vmx format and two vmdk files (a small one and a large flat one). The Nutanix Orchestrator VM is used to perform cluster imaging.
• Oracle VM VirtualBox installer (.exe and .dmg): VirtualBox installers for Windows and Mac OS, respectively. Oracle VM VirtualBox is a free, open-source tool used to create a virtualized environment on the workstation.
4. Go to the copy location on the workstation and extract the contents of the tar file:
tar -C output_directory -xzvf orchestrator_bundle_version.tar.gz
If you have a Windows machine that does not support the tar command, use the 7-Zip utility: open orchestrator_bundle_version.tar.gz from the 7-Zip GUI and extract the contents to any convenient location.
5. Using the Oracle VM VirtualBox installer provided in the bundle, install Oracle VM VirtualBox using the default options. See the Oracle VM VirtualBox User Manual for installation and start-up instructions (https://www.virtualbox.org/wiki/Documentation).
6. Create a new folder called VirtualBox VMs in your home directory. On a Windows system, this is typically C:\Users\user_name\VirtualBox VMs.
7. From the location of the extracted Orchestrator bundle, copy the Orchestrator_VM folder to the VirtualBox VMs folder that you created in step 6.
33. Figure: VirtualBox Network Settings Screen (Adapter 1 tab with Enable Network Adapter checked, Attached to set to Bridged Adapter, and Name set to the workstation's wired adapter)
13. In the left column of the main screen, select Nutanix_Installer and click Start. The Orchestrator VM console launches and the VM operating system boots.
14. At the login screen, log in as the nutanix user with the password nutanix/4u. The Orchestrator VM desktop appears after it loads.
15. Open a terminal session and run the ifconfig command to determine if the Orchestrator VM was able to get an IP address from the DHCP server. If the Orchestrator VM has a valid IP address, skip to the next step. Otherwise, configure a static IP as follows.
Note: Normally the Orchestrator VM needs to be on a public network in order to copy selected ISO files to the Orchestrator VM in the next two steps. This might require setting a static IP address now and setting it again when the workstation is on a different (typically private) network for the installation (see Imaging a Cluster (standard method) on page 12).
a. Double-click the set_orchestrator_ip_address icon on the Orchestrator VM desktop.
34. 5. In the Virtual Media window, click Disconnect next to CD Media.
6. At the restart prompt in the console, type Y to restart the node.
Figure: Console output at the end of imaging (INFO messages for mounting bootbanks, unpacking the ESX state file, customizing the ESX instance, setting up SSH keys, copying firstboot scripts, customizing esx.conf, applying PCIe passthrough settings, creating Nutanix configuration files, copying VIBs, and cleaning up, ending with "Imaging process completed successfully" and a prompt to unplug any virtual or physical media and reboot)
The node restarts with the new image. After the node starts, additional configuration tasks run and then the host restarts again. During this time the host name is "installing"; please be patient. Wait approximately 20 minutes until this stage completes before accessing the node.
Warning: Do not restart the host until the configuration is complete.
35. 10. Select Gateway IP Address, press Enter, and then enter the IP address for the node's network gateway in the pop-up window.
11. When all the field entries are correct, press the F4 key to save the settings and exit the BIOS setup mode.
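Where a node is already running a Linux hypervisor with in-band access to the BMC, the same LAN settings can often be applied with ipmitool (listed in the Default Cluster Credentials table) instead of entering the BIOS. This alternative is not part of the documented procedure; the sketch below assumes ipmitool is installed on the host, uses LAN channel 1 (which can vary by model), and reuses the example address from the BIOS screen above.
# Hypothetical alternative to the BIOS procedure: set a static IPMI address in-band (run as root on the host)
ipmitool lan set 1 ipsrc static
ipmitool lan set 1 ipaddr 10.1.59.29
ipmitool lan set 1 netmask 255.255.255.0
ipmitool lan set 1 defgw ipaddr 10.1.59.1
ipmitool lan print 1    # confirm the new settings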
36. 5. Enter the following command to configure the IPMI port addresses on the discovered nodes:
discovery.py ipmi_ip_start=ip_start_number ipmi_netmask=netmask_value configure
This command configures the IPMI interface IP addresses for all the nodes discovered in the previous step. It starts at the ip_start_number address and increments by one for each node. The netmask_value sets the netmask for all the IPMI interfaces. IP addresses are assigned based on the node order listed in discovered_nodes.txt, which means the IP addresses will be assigned in order of block ID and node position. You can specify additional options as needed. The following table describes the options available for this command.
Note: This command assumes the input file is named discovered_nodes.txt. If the -f option was used in the previous step to change the output file name, you must specify that file by adding an -f file_name option to this command.
Figure: Terminal window showing a discovery.py configure run with ipmi_ip_start, ipmi_netmask, ipmi_gateway, hypervisor_ip_start, hypervisor_netmask, and hypervisor_gateway options, followed by "IP configuration succeeded" messages for each node, "Configured IPs", and "Wrote config template to generated_orchestrator_cfg.txt"
37. Figure: IPMI Console Menu (Remote Control menu with Console Redirection, Power Control, and Launch SOL entries)
7. Select Virtual Storage from the Virtual Media drop-down list of the remote console main menu.
Figure: IPMI Remote Console Menu, Virtual Media (Virtual Storage and Virtual Keyboard entries)
8. Click the CDROM & ISO tab in the Virtual Storage display and then select ISO File from the Logical Drive Type drop-down list.
Figure: IPMI Virtual Storage Screen (CDROM & ISO tab with Logical Drive Type, Image File Name and Full Path, and connection status fields)
9. In the browse window, go to where the hypervisor ISO image was downloaded, select that file, and then click the Open button.
10. In the remote console main menu, select Set Power Reset in the Power Control drop-down list. This causes the system to reboot using the selected hypervisor image.
Figure: IPMI Remote Console Menu, Power Control (Set Power On, Set Power Off, Software Shutdown, and Set Power Reset options)
11. Click Continue (Enter) at the installation screen and then accept the end user license agreement on the next screen.
38. Orchestrator Portal
The Orchestrator portal site provides access to many of the files required to do a field installation.
Portal Access
Nutanix maintains a site where you can download Nutanix product releases. To access the Orchestrator portal on this site, do the following:
1. Open a web browser and enter http://releases.nutanix.com.
2. This displays a login page. Enter your Nutanix account or partner portal credentials to access the site.
3. The Current NOS Releases page appears. In the pull-down list next to your name (upper right), select Orchestrator to download Orchestrator-related files or Phoenix to download Phoenix-related files.
Figure: NOS Releases Screen (Current NOS Releases list with a pull-down offering phoenix 1.0 and congo 3.x releases)
4. The Orchestrator or Phoenix releases screen appears. Click on the target release link.
Figure: Orchestrator Releases Screen (orchestrator 1.0 release link with a restricted-download notice stating that Nutanix Installation Software may be downloaded and used only by parties authorized by Nutanix, solely in connection with the installation of a Nutanix product as directed by Nutanix, and must not be copied or redistributed)
39. Nutanix installs the Nutanix Operating System (NOS) Controller VM and the KVM hypervisor at the factory before shipping each node to a customer. To use a different hypervisor (ESXi), nodes must be re-imaged in the field. This guide provides step-by-step instructions on how to re-image nodes (install a hypervisor and then the NOS Controller VM) after they have been physically installed at a site.
Note: Only Nutanix sales engineers, support engineers, and partners are authorized to perform a field installation.
A field installation includes the following steps:
• Imaging a Cluster (standard method) on page 12:
1. Set up the installation environment as follows:
a. Connect the Ethernet ports on the nodes to a switch.
b. Download Orchestrator (the multi-node installation tool) and Phoenix (Nutanix Installer) ISO image files to a workstation. Also acquire an ESXi installer from the customer and download it to the workstation.
c. Install Oracle VM VirtualBox on the workstation.
2. Open the Orchestrator GUI on the workstation and configure the following:
a. Enter hypervisor and IPMI address and credential information.
b. Select the Phoenix and hypervisor ISO image files to use.
c. Start the imaging process and monitor progress.
• Imaging a Cluster (manual method) on page 19:
1. Set up the installation environment (same as above).
2. Invoke the Orchestrator discovery.py utility from the command line to do the following:
a. Run the node discovery command
40. The default per-node installation timeout is 30 minutes, so you can expect all the nodes in each run of up to eight nodes to finish successfully or encounter a problem in that amount of time. To correct this problem, see Fixing Imaging Problems on page 16.
Fixing IPMI Configuration Problems
When the IPMI port configuration fails for one or more nodes in the cluster, the installation process stops before imaging any of the nodes. Orchestrator will not go to the imaging step after an IPMI port configuration failure, but it will try to configure the IPMI port address on all nodes before stopping. The installation screen reappears with a red check next to the IPMI port address field for any node that was not configured successfully. To correct this problem, do the following:
1. Review the displayed addresses for the failed nodes, determine if that address is valid, and change the IP address in that field if necessary. Hovering the cursor over the address displays a pop-up message (see figure) with troubleshooting information. This can help you diagnose and correct the problem. In addition, see the service log file in /home/nutanix/orchestrator/log for more detailed information.
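The service log mentioned in step 1 can be read directly from a terminal on the Orchestrator VM, for example:
tail -n 100 /home/nutanix/orchestrator/log/service.log    # or use tail -f to follow it live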
41. Interface / Target / Username / Password:
• Nutanix web console / Nutanix Controller VM / admin / admin
• vSphere client / ESXi host / root / nutanix/4u
• SSH client or console / ESXi host / root / nutanix/4u
• SSH client or console / KVM host / root / nutanix/4u
• SSH client / Nutanix Controller VM / nutanix / nutanix/4u
• IPMI web interface or ipmitool / Nutanix node / ADMIN / ADMIN
• IPMI web interface or ipmitool / Nutanix node (NX-3000) / admin / admin
Version
Last modified: February 12, 2014 (2014-02-12 19:22 GMT-08:00)
Contents
Overview ... 5
Preparing Installation Environment ... 7
Imaging a Cluster (standard method) ... 12
Fixing IPMI Configuration Problems ... 15
Fixing Imaging Problems ... 16
Cleaning Up After Installation ... 18
Imaging a Cluster (manual method) ... 19
Preparing Nutanix Nodes ... 19
Completing Installation ... 22
Imaging a Node ... 25
Installing a Hypervisor ... 25
Installing the Controller VM ... 28
Orchestrator Portal ... 31
Orchestrator Configuration File ... 33
Setting IPMI Static IP Address ... 34
Overview
42. The output file includes a line for each node containing node information: block ID, model type, node position, IPv6 address, and IPv6 interface. The lines are sorted first by block ID and then by node position. The content of this file is used as input in the next step.
• You can add or edit entries in this file if necessary. However, make sure the changes conform to the block ID and node position sort ordering; otherwise, the IPMI IP address assignments will not be consecutive. In addition, do not leave blank lines in the file.
• If you suspect nodes are being missed, you can extend the number of retries by adding an -n number_of_retries option to the command. This parameter sets the number of consecutive retries discovery.py must run without finding a new node. The retries number is set to 10 by default.
• You can extend the browsing timeout to account for a congested network by adding a -t time_in_seconds option to the command. This sets the maximum wait time for a browsing call to return before advancing to the next retry. The default value is 40 seconds.
• You can change the name of the output file by adding an -f file_name option to the command.
Note: The discovery.py command syntax is: discovery.py [options] discover|configure
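A sketch of a discovery run that uses these options. The single-dash form is assumed from the option letters above and may differ in your Orchestrator release; the values are examples only.
# Example: allow more retries and a longer browse timeout on a congested VLAN, and use a custom output file
discovery.py -n 20 -t 60 -f my_nodes.txt discover
# remember to pass the same -f my_nodes.txt to the configure command, as noted above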
43. When all nodes have green check marks in the IPMI address column, click the Image Nodes button at the bottom of the screen to begin the imaging step.
If you cannot fix the IPMI configuration problem for one or more of the nodes, you can bypass those nodes and continue to the imaging step for the other nodes by clicking the Proceed button. In this case, you must configure the IPMI port address manually for each bypassed node (see Setting IPMI Static IP Address on page 34).
Figure: Nutanix Installer IPMI Configuration Error (Configure IPMI and Image Nodes buttons)
Fixing Imaging Problems
When imaging fails for one or more nodes in the cluster, the installation screen reappears with a red check next to the hypervisor address field for any node that was not imaged successfully. To correct this problem, do the following:
1. Review the displayed addresses for the failed nodes, determine if that address is valid, and change the IP address in that field if necessary. Hovering the cursor over the address displays a pop-up message with troubleshooting information. This can help you diagnose and correct the problem.
2. When you have corrected the problems and are ready to try again, click the Proceed button at the bottom of the screen. The GUI displays dynamic status messages and a progress bar for each node during imaging (see Imaging a Cluster (standard method) on page 12). Repeat the preceding steps as necessary to fix all the imaging errors.
44. 2. Open orchestrator.config for editing (using an editor of choice), review the entries, and update them if necessary. Syntax is one parameter per line, in the form parameter_name=value (no spaces).
The top section is for global variables:
• ipmi_user: This is the IPMI user name.
• ipmi_password: This is the IPMI user password.
• hypervisor_netmask: This is the hypervisor netmask value.
• hypervisor_gateway: This is the gateway IP address for the hypervisor.
• hypervisor_nameserver: This is the name server IP address for the hypervisor.
• hypervisor_password: This is the hypervisor password, which must be nutanix/4u.
The following section is for node-specific parameters. The following lines should be present for each node to be imaged:
• The IPMI IP address is on the first line.
• The hypervisor_ip parameter is indented on the next line.
• The generated_orchestrator_cfg.txt contents may include an optional indented line for the node_position parameter.
3. When all the values are correct, save the file.
Note: See Orchestrator Configuration File on page 33 for a sample orchestrator.config file.
4. Verify the desired Phoenix and hypervisor ISO image files are in the /home/nutanix/orchestrator directory (see Preparing Installation Environment on page 7).
5. Enter the following command to stop the Orchestrator service:
sudo service orchestrator_service stop
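As an illustration of the layout just described, a minimal two-node file might look like the following. The parameter_name=value form and the indentation are inferred from the description above (see Orchestrator Configuration File on page 33 for the guide's full four-node sample); all addresses are placeholders.
ipmi_user=ADMIN
ipmi_password=ADMIN
hypervisor_netmask=255.255.255.0
hypervisor_gateway=192.168.20.1
hypervisor_nameserver=8.8.8.8
hypervisor_password=nutanix/4u
192.168.20.30
    node_position=A
    hypervisor_ip=192.168.20.40
192.168.20.31
    node_position=B
    hypervisor_ip=192.168.20.41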
