
OpenStack Deployment Manual - Support


Contents

1. A value of 0 for the Journal size is invalid, and does not cause a Ceph default size to be used. The default value of Journal on partition is no. The Shared journal device path must be set if a shared device is used for all the OSD journals in the category or node for which this screen applies. The path is unset by default, which means it is not used by default. The Shared journal size, in MiB, can be set. For n OSDs, each with a journal of size x MiB, the value of Shared journal size is n x x. That is, its value is the sum of the sizes of all the individual OSD journals that are kept on the shared journal device. If it is used, then the value of Shared journal size is used to automatically generate the disk layout setup of the individual OSD journals. The Shared journal size value is unset by default. The Back option can be used, after accessing the editing screen, to return to the Ceph OSDs configuration screen (figure 3.6). Successful Completion Of The Ceph Installation: on success, the installer reports: "Ceph has been configured successfully. For details see log file /var/log/cm-ceph-setup.log". After booting the nodes configured with Ceph roles, it will take some time to create and start OSD and Monitor services on those nodes. The progress can be monitored.
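A hedged illustration of such monitoring, assuming the standard Ceph client tools are available on the head node (the specific command is not named in this excerpt):

Example

# one-shot cluster status; health should eventually report HEALTH_OK
ceph -s
# or watch the cluster status continuously while the services come up
ceph -w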
2. [Edit: edit monitors. Remove: remove monitors. Next: proceed to OSDs. <Back>] Figure 3.4: Ceph Installation: Monitors Configuration. In this screen: Ceph Monitors can be added to nodes or categories; existing Ceph Monitors can be edited (figure 3.5) or removed, from nodes or categories; the OSD configuration screen can be reached after making changes, if any, to the Ceph Monitor configuration. Typically, in a first run, the head node has a Ceph Monitor added to it. Editing Ceph Monitors: "Edit Monitor role for node bright70. Bootstrap is used to specify whether nodes running this Monitor service must be up during the setup process. Possible values are auto, true, and false. It is recommended to use the default value, auto. Data path is used to specify the data path for the Monitor. By default its value is /var/lib/ceph/mon/$cluster-$hostname, where $cluster is the name of the Ceph instance (usually ceph) and $hostname is the hostname of the node. It is recommended to use the default value. Bootstrap. Data path. <Back>" Figure 3.5: Ceph Installation Monitors: Editing Bootstrap And Data Path. The Edit option in figure 3.4 opens up a screen (figure 3.5) that allows the editing of existing or newly added Ceph Monitors for a node or category. The bootstrap option can be set. The option configures initialization of the maps on the Ceph Monitor services prior to the actual setup.
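Once a Monitor role has been assigned, its settings can also be viewed from cmsh, following the same roles-submode pattern that this chapter shows for the cephosd role. A hedged sketch (the role name cephmonitor is an assumption, not taken from this excerpt):

Example

[bright70]% category use default
[bright70->category[default]]% roles
[bright70->category[default]->roles]% show cephmonitor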
3. Server Script: s3gw.fcgi. Server Socket: /tmp/radosgw.sock. Enable Keystone Authentication: yes. Keystone Accepted Roles. Keystone Revocation Interval: 600. Keystone Tokens Cache Size: 500. NSS DB Path: /var/lib/ceph/nss. For example, setting enablekeystoneauthentication to yes, and committing it, makes RADOS GW services available to OpenStack instances, if they have already been initialized (section 3.4.1) to work with Bright Cluster Manager. RADOS GW Properties In cmgui: RADOS GW properties can be accessed in cmgui by selecting the device from the resource tree, then selecting the Ceph subtab for that device, then ticking the Rados Gateway panel, then clicking on the Advanced button (figure 3.9). Figure 3.9: Accessing The RADOS GW Properties Button In cmgui. This brings up the RADOS GW Advanced Role Settings window (figure 3.10), which allows RADOS GW properties to be managed.
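The same properties can be inspected from cmsh. A hedged sketch, reusing the show-a-role pattern used elsewhere in this chapter (the node name is an example; the role name radosgateway is taken from the table in section 3.4.3):

Example

[bright70]% device use node001
[bright70->device[node001]]% roles
[bright70->device[node001]->roles]% show radosgateway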
4. "Specify a name for the VXLAN host network: vxlanhostnet". Figure 2.40: Setting The Name Of The Network For User Networks. If a network to host the user networks is chosen in section 2.2.11, then a screen similar to figure 2.40 is displayed. This lets the administrator set the name of the hosting network for user networks. 2.2.13 Setting The Base Address Of The Hosting Network For User Networks: "What's the base address for the new VXLAN host network? 10.164.0.0". Figure 2.41: Setting The Base Address Of The Network For User Networks. If the network name for the network that hosts the user networks is chosen in section 2.2.12, then a screen similar to figure 2.41 is displayed. This lets the administrator set the base address of the hosting network for user networks. 2.2.14 Setting The Number Of Netmask Bits Of The Hosting Network For User Networks: "What's the number of netmask bits for the new VXLAN host network?". Figure 2.42: Setting The Number Of Netmask Bits Of The Network For User Networks. If the base address for the network that hosts the user networks is set in section 2.2.13, then a screen similar to figure 2.42 is displayed. This lets the administrator set the number of netmask bits of the hosting network for user networks. As a quick worked example of the relationship involved: 16 netmask bits leave 32 - 16 = 16 host bits, so a /16 network provides 2^16 - 2 = 65534 usable addresses, which matches the "Usable addresses: 65534" figure quoted by other screens in this chapter. 2.2.15 Enabling Support For Bright-managed Instances
5. The MySQL root password screen (figure 2.28) prompts for the existing root password to MySQL to be entered, while the OpenStack admin password screen (figure 2.29) prompts for a password to be entered, and then re-entered, for the soon-to-be-created admin user in OpenStack. 2.2.5 Reboot After Configuration: Once the configuration and deployment of OpenStack is completed, all compute nodes participating in the OpenStack deployment should be rebooted, to update their software images and network interfaces. "Do you want the cm-openstack-setup to reboot the nodes as part of the deployment process, or do you want to reboot the nodes yourself manually? Do you want the setup to reboot compute nodes after completion? no". Figure 2.30: Reboot After Configuration Screen. A screen is shown asking if the compute host nodes, that is, the nodes used to host the virtual nodes, should be re-installed after configuration (figure 2.30). A re-install is usually best, for reasons discussed on page 6 for the cmgui installation wizard equivalent of this screen option. 2.2.6 Ceph Options: Background notes on the Ceph options can be read on page 10, in the section on the cmgui wizard configuration of Ceph options. This section (2.2.6) covers the Ncurses cm-openstack-setup wizard configuration of Ceph options. Glance Image Storage With Ceph: OpenStack Glance needs a place to store the images.
6. If the Create a new category radio button is selected (recommended), then: The Virtual node category is given a default value of virtual-nodes. This is a sensible setting for a new deployment. The Base category can be selected. This is the category from which the new virtual node category is derived. Category settings are copied over from the base category to the virtual node category. The only category choice for the Base category in a newly installed cluster is default. Some changes are then made to the category settings, in order to make virtual nodes in that category run as virtual instances. One of the changes that needs to be made to the category settings for a virtual node is that a software image must be set. The following options are offered: Create a new software image. This option is recommended for a new installation. Choosing this option presents the following suboptions: The Software image is given a default value of virtual-node-image. This is a sensible setting for a new deployment. The Base software image can be selected. This is the software image from which the new virtual node software image is derived. In a newly installed cluster, the only base software image choice is default-image. Use existing software image. An existing software image can be set. The only value for the existing image in a newly configured cluster is default-image. Use software image from category.
7. …yes. Network isolation: VXLAN. User instances access to internalnet: yes. Support for Bright-managed instances: yes. Floating IPs: yes. Compute nodes: node001. Network node: node001. "Deploying the configuration will take several minutes, during which a log window will be shown displaying the progress of the deployment. Press Deploy to start deployment. Press Cancel to close the wizard; no changes will be introduced. Press Save Configuration to get the OpenStack deployment configuration as XML." [Show Configuration] [Save Configuration] [Previous] [Deploy] Figure 2.22: Summary Screen. Viewing And Saving The Configuration: The summary screen (figure 2.22) gives a summary of the configuration. The configuration can be changed in cmgui if the administrator goes back through the screens to adjust settings. The full configuration is kept in an XML file, which can be viewed by clicking on the Show Configuration button. The resulting read-only view is shown in figure 2.23, and begins with elements along the following lines:

<properties>
  <rebootNodesUponCompletion>true</rebootNodesUponCompletion>
  <mysqlRootPassword>system</mysqlRootPassword>
  <mainAdminPassword>system</mainAdminPassword>
  <baseCategoryForNewlyCreatedOpenStackCategories>default</baseCategoryForNewlyCreatedOpenStackCategories>
  ...
8. VXLAN Configuration: "Please specify the VXLAN network ID (VNID) range which will be used for individual isolated user networks. Each such new OpenStack user network, created as part of a tenant, will be automatically assigned a VNID from within this range. Therefore, the wider the range, the more networks will be able to exist at the same time. The range should be composed of consecutive numbers, e.g. 10-3000. The default VXLAN ranges should be sufficient in most cases. VXLAN Range start: 1. VXLAN Range end: 50000. VXLAN networking makes use of multicast for certain functionality. Therefore, a specific multicast address has to be dedicated to VXLAN networking. The default multicast IP address which will be used by the wizard is 224.0.0.1. If there are any other applications in the cluster which already use this IP, please refer to the OpenStack Administrator Manual on how to change it to a different IP. When using VXLANs to isolate user networks, an IP network is needed to host the VXLANs. Please specify below a network that can be used as the VXLAN host network. Create new / Use existing: no available internal networks; the internalnet cannot be used for VXLAN host. Base address: 10.161.0.0. Netmask bits: 16. Name: vxlanhostnet." Figure 2.14: VXLAN Configuration Screen. The VXLAN screen (figure 2.14) shows configuration options for the VXLAN network, if VXLAN has been chosen as the network isolation type.
9. If Ceph has been configured with Bright Cluster Manager before the wizard is run, then the Ceph configuration screen (figure 2.7) is displayed. Choosing any of the Ceph options requires that Ceph be pre-installed. This is normally done with the cm-ceph-setup script (section 3.2). Ceph is an object-based, distributed, parallel filesystem, with self-managing and self-healing features. Object-based means it handles each item natively as an object, along with metadata for that item. Ceph is a drop-in replacement for Swift storage, which is the reference OpenStack object storage software project. The administrator can decide on: Using Ceph for volume storage, instead of on NFS shares. This is then instead of using the OpenStack Cinder reference project for the implementation of volume storage. (Footnote: An advantage of Ceph, and one of the reasons for its popularity in OpenStack implementations, is that it supports volume snapshots in OpenStack. Snapshotting is the ability to take a copy of a chosen storage, and is normally taken to mean using copy-on-write (COW) technology. More generally, assuming enough storage is available, non-COW technology can also be used to make a snapshot, despite its relative wastefulness. In contrast to Ceph, the reference Cinder implementation displays an error if attempting to use the snapshot feature, due to its NFS driver. The administrator should understand that root or ephemeral storage are concepts that…)
10. …and ensure that all the non-head nodes participating in the OpenStack deployment are attached and associated accordingly. The VXLAN network runs over an IP network. It should therefore have its own IP range, and each node on that network should have an IP address. By default, a network range of 10.161.0.0/16 is suggested in the VXLAN configuration screen (section 2.1.13, figure 2.14). The VXLAN network can run over a dedicated physical network, but it can also run over an alias interface on top of an existing internal network interface. The choice is up to the administrator. It is possible to deploy OpenStack without VXLAN overlay networks, if user instances are given access to the internal network. Care must then be taken to avoid IP addressing conflicts. Changing the hostname of a node after OpenStack has been deployed is possible, but involves some manual steps. So it is recommended to change the hostnames before running the wizard. For example, to set up a network node: a single regular node, which is to be the network node of the deployment, should be chosen and renamed to, say, networknode; it should then be restarted. The Setup Wizard is then run from the head node. When the wizard reaches the network node selection screen (section 2.1.5, figure 2.6), networknode can be selected as the network node. When all…
11. …by the instances from each other. This means that the instances attached to one private network will always avoid network conflicts with the instances of another network, even if using the same IP address ranges. Bright Cluster Manager supports two virtual LAN technologies: VLAN: VLAN technology tags Ethernet frames as belonging to a particular VLAN. However, it requires manual configuration of the VLAN IDs in the switches, and also the number of IDs available is limited to 4094. VXLAN: VXLAN technology has more overhead per packet than VLANs, because it adds a larger ID tag, and also because it encapsulates layer 2 frames within layer 3 IP packets. However, unlike with VLANs, configuration of the VXLAN IDs happens automatically, and the number of IDs available is about 16 million (2^24 = 16777216). By default, VXLAN technology is chosen. This is because, for VXLAN, the number of network IDs available, along with the automatic configuration of these IDs, means that the cluster can scale further and more easily than for VLAN. Selecting a network isolation type is mandatory, unless user instances are configured by the administrator to allow access to the internal network of the cluster (figure 2.11). Presently, only one type of network isolation is supported at a time. 2.1.13 VXLAN Configuration
12. "…enable support for user instances, you will be asked several additional questions related to configuring the underlying networking for those instances. Those questions will include selecting the user network isolation method (VLAN or VXLAN), and the possibility to give the user instances access to the cluster's internal network (internalnet). Do you want to enable User Instances on your OpenStack deployment? Yes". Figure 2.11: User Instances Screen. The User Instances screen (figure 2.11) allows the administrator to allow the Bright end user to create user instances. The following overview may help get a perspective on this part of the wizard configuration procedure. The main function of OpenStack is to manage virtual machines. From the administrator's point of view, there are two classes of virtual machines, or instances: user instances, configured in this section; and Bright-managed instances, configured with the help of figure 2.16. The wizard allows OpenStack to be configured to support both types of instances, or only one of them. Deploying OpenStack without configuring it for either type of instance is also possible, but such an OpenStack cluster is very limited in its functionality, and typically has to be customized further by the administrator. Both types of instances are virtual machines hosted within a hypervisor, managed by the OpenStack compute project, Nova.
13. Whether or not Bright-managed instances are to be allowed is set later, in the Bright-managed instances screen (figure 2.16). 2.1.11 User Instance Isolation From Internal Cluster Network: "This OpenStack deployment can optionally be configured to give the virtual instances created by end users access to the cluster's internal network. Doing so might have serious security implications, and is advised only if the users creating the VMs can be considered trusted. Essentially, enabling this gives users the ability to start a virtual machine, to which they have root access, that is connected directly to the cluster's internal network. Do you want to allow user instances to be able to connect to the cluster's internal network (internalnet)? Yes / No". Figure 2.12: User Instance Isolation From Internal Cluster Network Screen. If the creation of user instances has been enabled (figure 2.11), the user instance internal cluster network isolation screen (figure 2.12) allows the administrator to allow OpenStack end users to create user instances which have direct network connectivity to the cluster's internal network.
14. …set either to be an existing category, or a new category can be created. If an existing category is used, then default can be chosen. If Ceph has been integrated with Bright Cluster Manager, then the ceph category is another available option. Creating a new category is recommended, and is the default option. The suggested default category name is openstack-compute-hosts. 2.1.5 OpenStack Network Node: "OpenStack deployments require a single dedicated network node to run several networking-related services. To do so, you can either: select an existing category containing a single node; or create a new category and put a single node into it (a node which has not been specified as a compute host in the previous screen); or do not create a new category, but instead select a single node to have the network node role assigned to it directly (in this case the node can also be a compute host specified in the previous screen). Use existing category / Create new category / Don't create category, specify a node: node001". Figure 2.6: OpenStack Network Node Screen. The network node screen (figure 2.6) makes a node the network node, or makes a category of nodes the network nodes. A network node is a dedicated node…
15. The connection options table:

Floating IP addresses    Outbound allowed    Inbound allowed
Not enabled              no                  no
One, set disabled        yes                 no
Two or more, enabled     yes                 yes

2.1.18 Allow Outbound Traffic: "Floating IP configuration was disabled in the previous step, and the networking configuration will be altered to support user instances. Due to both of those facts, in order for user instances to have outbound network connectivity, a single IP address from the external network should be allocated. Enable outbound connectivity for user instances: Yes / No. Outbound IP: 10.2.0.0. Without the dedicated IP for outbound connections, user instances will only be able to reach the outside if they are also connected to the cluster's internal network. Note that, earlier in the wizard, access to the cluster's internal network for user instances has been disabled. Should it be enabled? Yes / No". Figure 2.19: Allow Outbound Traffic Screen. The Allow Outbound Traffic screen appears if, in the previous screen (figure 2.18), Floating IPs have not been configured, that is, if the No option has been selected, and if there are user instances being configured. The Allow Outbound Traffic screen does not appear if the cluster has only Bright-managed instances, and no user-managed instances. This is because Bright-managed instances can simply route their traffic through the head node, without needing the network node to be adjusted…
16. …the root and ephemeral device data belonging to their hosted virtual machines stored somewhere. The default directory location for these images and related data is under /var/lib/nova. If Ceph is not enabled, then a local mount point of every compute host is used for data storage by default. If Ceph is enabled, then Ceph storage is used instead, according to its mount configuration. In either case, compute hosts that provide storage for virtual machines by default store the virtual machine images under the /var partition. However, the default partition size for /var is usually only large enough for basic testing purposes. For production use, with or without Ceph, the administrator is advised either to adjust the size of the existing partition, using the disksetup command (section 3.9.3 of the Administrator Manual), or to use sufficient storage mounted from elsewhere. To change the default paths used by OpenStack images, the following two path variables in the Nova configuration file, /etc/nova/nova.conf, should be changed (an illustrative relocation sketch follows at the end of this item):

state_path = /var/lib/nova
lock_path = /var/lib/nova/tmp

If $state_path has been hardcoded by the administrator elsewhere in the file, the location defined there should also be changed accordingly. 2.1.7 OpenStack Internal Network Selection
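As an illustration of the nova.conf change described above, a hedged fragment follows; the relocated directory /mnt/nova-storage is a hypothetical example, not a path from this manual:

Example

# /etc/nova/nova.conf (fragment)
# relocate instance disks and lock files to a larger, hypothetical mount
state_path = /mnt/nova-storage
lock_path = /mnt/nova-storage/tmp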
17. If the network isolation restriction is removed, so that there is network promiscuity between user instances on the internal network, then this allows user instances (figure 2.11) to connect to other user instances on the internal network of the cluster. End users can then manage other user instances. Allowing this should only be acceptable if all users that can create instances are trusted. 2.1.12 Network Isolation: "OpenStack can allow users to create their own private networks, and connect their user instances to them. The user-defined networks must be isolated in the backend, using either VLAN or VXLAN technology. Using VLAN isolation in general results in better performance. However, the downside is that the administrator needs to configure the usable VLAN IDs in the network switches. Therefore, the number of user-defined networks is limited by the number of available VLAN IDs. Using VXLANs, on the other hand, generates some overhead, but does not require specific switch configuration, and allows for creating a greater number of virtual networks. Do you want to use VLANs or VXLANs? VLAN / VXLAN". Figure 2.13: Network Isolation Screen. The network isolation screen (figure 2.13) allows the administrator to set the virtual LAN technology that user instances can use for their user-defined private networks. Using virtual LANs isolates the IP networks used…
18. "…Broadcast IP address: 10.2.255.255. Usable addresses: 65534. You have to specify the IP address range for Floating IPs. What is the last IP address of the Floating IP address range? 10.2.0.128". Figure 2.50: External Network Ending Floating IP. A screen similar to figure 2.50 allows the administrator to specify the ending floating IP address on the external network. 2.2.23 VNC Proxy Hostname: "The VNC Proxy Hostname must be set, to allow remote access to the virtual consoles of the virtual nodes from anywhere on the external networks. The VNC Proxy Hostname is OpenStack terminology for the Fully Qualified Domain Name (FQDN) which resolves and routes to the head node. You can always change the value later, and an IP address can also be used instead. What's the externally visible FQDN, or an external IP, of the head node (max 32 characters)? bright70.brightcomputing.com". Figure 2.51: VNC Proxy Hostname. The VNC Proxy Hostname screen (figure 2.51) lets the administrator set the FQDN as seen from the external network. An IP address can be used instead of the FQDN. 2.2.24 Dedicated Default Tenant: "Do you want CMDaemon to automatically create a dedicated default tenant for each user granted access to the OpenStack cloud? Doing so will allow administrators to set individual resource limits for each user of the cloud. Automatically create a new tenant for each user? no". Figure 2.52: Setting A Dedicated Default Tenant. The dedicated default…
19. Figure 2.26: Informative Text Prior To Deployment. If deployment is selected in the preceding screen, an informative text screen (figure 2.26) gives a summary of what the script does. 2.2.3 Pre-Setup Suggestions: Before starting deployment, take note of the following: "Notice: A node named networknode was not found. It is advised, but not required, to rename one of the compute nodes to networknode, and reboot it, before deploying OpenStack. The node can then later be selected by the wizard as the network node of the OpenStack deployment. Renaming the node before deploying OpenStack is recommended, because changing the hostname of a network node after deployment can take more effort." Figure 2.27: Pre-Setup Suggestions. The pre-setup suggestions screen (figure 2.27) suggests changes to be done before going on. 2.2.4 MySQL root And OpenStack admin Passwords: "What is the MySQL password for the root user? The script requires it to configure OpenStack MySQL databases for OpenStack services. (Password text is hidden.)" Figure 2.28: MySQL root Password Screen. "Specify the password for the OpenStack main admin user. This username and password can be used to interact with OpenStack directly, using the OpenStack dashboard. You can easily change this password at a later time. Admin password: (hidden)". Figure 2.29: OpenStack admin Password Screen.
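Since the script asks for the existing MySQL root password, it can save a failed run to verify the password beforehand. A hedged sketch using the standard MySQL client (ordinary MySQL usage, not a command from this manual):

Example

# on the head node: confirm the root password before running the setup
mysql -u root -p -e "SELECT VERSION();"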
20. Bright Cluster Manager 7.0 OpenStack Deployment Manual. Revision: 6039. Date: Tue, 10 Mar 2015. © 2015 Bright Computing, Inc. All Rights Reserved. This manual or parts thereof may not be reproduced in any form unless permitted by contract or by written permission of Bright Computing, Inc. Trademarks: Linux is a registered trademark of Linus Torvalds. PathScale is a registered trademark of Cray, Inc. Red Hat and all Red Hat-based trademarks are trademarks or registered trademarks of Red Hat, Inc. SUSE is a registered trademark of Novell, Inc. PGI is a registered trademark of The Portland Group Compiler Technology, STMicroelectronics, Inc. SGE is a trademark of Sun Microsystems, Inc. FLEXlm is a registered trademark of Globetrotter Software, Inc. Maui Cluster Scheduler is a trademark of Adaptive Computing, Inc. ScaleMP is a registered trademark of ScaleMP, Inc. All other trademarks are the property of their respective owners. Rights and Restrictions: All statements, specifications, recommendations, and technical information contained herein are current or planned as of the date of publication of this document. They are reliable as of the time of this writing, and are presented without warranty of any kind, expressed or implied. Bright Computing, Inc. shall not be liable for technical or editorial errors or omissions which may occur in this document. Bright Computing, Inc. shall not be liable for any damages resulting…
21. 3.1 Ceph Introduction
        3.1.1 Ceph Object And Block Storage
        3.1.2 Recommended Filesystem For Ceph Use
        3.1.3 Hardware For Ceph Use
    3.2 Ceph Installation With cm-ceph-setup
        3.2.1 Starting With Ceph Installation, Removing Previous Ceph Installation
        3.2.2 Ceph Monitors Configuration
        3.2.3 Ceph OSDs Configuration
    3.3 Checking And Getting Familiar With Ceph Items After cm-ceph-setup
        3.3.1 Checking On Ceph And Ceph-related Files From The Shell
        3.3.2 Ceph Management With cmgui And cmsh
    3.4 RADOS GW Installation, Initialization, And Properties
        3.4.1 RADOS GW Installation And Initialization With cm-radosgw-setup
        3.4.2 Setting RADOS GW Properties
        3.4.3 Turning Keystone Authentication On And Off For RADOS GW

Preface: Welcome to the OpenStack Deployment Manual for Bright Cluster Manager 7.0. 0.1 About This Manual: This manual is aimed at helping cluster administrators install, understand, configure, and manage basic OpenStack capabilities easily, using Bright Cluster Manager. The administrator is expected to be reasonably familiar with the Administrator Manual. 0.2 About The Manuals In General: Regularly updated versions of…
22. …The manuals constantly evolve, to keep up with the development of the Bright Cluster Manager environment and the addition of new hardware and/or applications. The manuals also regularly incorporate customer feedback. Administrator and user input is greatly valued at Bright Computing, so any comments, suggestions, or corrections will be very gratefully accepted at manuals@brightcomputing.com. 0.3 Getting Administrator-Level Support: Unless the Bright Cluster Manager reseller offers support, support is provided by Bright Computing over e-mail, via support@brightcomputing.com. Section 10.2 of the Administrator Manual has more details on working with support. Introduction: OpenStack is an open source implementation of cloud services. It is currently (2015) undergoing rapid development, and its roadmap is promising. A relatively stable implementation of OpenStack, based on the OpenStack Icehouse release (https://www.openstack.org/software/icehouse), is integrated into the Bright Cluster Manager 7.0 for OpenStack edition. By relatively stable it is meant that OpenStack itself is usable and stable for regular use in common configurations, but not quite production-ready when carrying out some less common configuration changes. In a complex and rapidly evolving product such as OpenStack, the number of possible unusual configuration changes is vast. As a result, the experience of Bright Computing is that Bright Cluster Manager…
23. Example

[bright70->ceph]% osdinfo ceph

OSD id  Node     OSD name
------  -------  --------
0       node001  osd0
1       node002  osd0

Within a device or category mode, the roles submode allows parameters of an assigned cephosd role to be configured and managed.

Example

[bright70->category[default]->roles]% show cephosd
Parameter                  Value
-------------------------  --------------------------
Name                       cephosd
OSD associations           <1 in submode>
Provisioning associations  <0 internally used>
Revision
Type                       CephOSDRole

Within the cephosd role, the templates for OSD filesystem associations, osdassociations, can be set or modified.

Example

[bright70->category[default]->roles]% use cephosd
[bright70->category[default]->roles[cephosd]]% osdassociations
[bright70->...->osdassociations]% list -f name:10,osddata:30
name (key) osddata
---------- ------------------------------
osd0       /var/lib/ceph/osd/$cluster-$id
[bright70->...->osdassociations]% list -f journaldata:38,journalsize:11
name (key) journaldata                            journalsize
---------- -------------------------------------- -----------
osd0       /var/lib/ceph/osd/$cluster-$id/journal 0

The -f option is used here with the list command merely in order to format the output, so that it stays within the margins of this manual. The cmgui equivalent of the preceding cmsh settings is accessed from within a particular Nodes or Categories item in the resource tree, then accessing the Ceph tab, and then choosing the OSD checkbox. The Advanced button allows cephosd role parameters to be set for…
24. …Load XML button of the first screen (figure 2.2). After loading the configuration, a Deploy button appears. Clicking the Deploy button that appears in figure 2.2 after loading the XML file, or clicking the Deploy button of figure 2.22, sets up OpenStack in the background. The direct background progress is hidden from the administrator, and relies on the text-based cm-openstack-setup script (section 2.2). Some log excerpts from the script are displayed within a Deployment Progress window (figure 2.24): "Deployment progress is shown in the window below. Deploying OpenStack: Performing Environment Precheck; Ensuring all properties are set; License Check; Cluster Configuration Precheck; Final Configuration Precheck; Initializing Setup; Cloning Base Software Image into Image openstack-image; … (Detailed deployment log / Hide)". Figure 2.24: OpenStack Deployment Progress Screen. At the end of its run, the cluster has OpenStack set up and running, in an integrated manner, with Bright Cluster Manager. The administrator can now configure the cluster to suit the particular site requirements. 2.2 Installation Of OpenStack From The Shell: The cmgui OpenStack installation (section 2.1) uses the cm-openstack-setup utility during deployment, hidden from normal view. The installation can also be done directly with cm-openstack-setup.
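A hedged sketch of such a direct run (the -c replay option, and the default file name cm-openstack-setup.conf, are the ones quoted in section 2.2.27):

Example

# interactive Ncurses run on the head node
cm-openstack-setup
# non-interactive replay of a configuration saved by an earlier run
cm-openstack-setup -c cm-openstack-setup.conf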
25. Using Ceph for image storage, instead of on image storage nodes. This is instead of using the OpenStack Glance reference project for the implementation of virtual machine image storage. Using Ceph for root and ephemeral disks storage, instead of on the filesystem of the compute hosts. This is instead of using the OpenStack Nova reference project implementation for disk filesystem storage. Object Storage (Swift Project): In addition, the screen also asks if the RADOS Object Gateway should be deployed. RADOS Object Gateway is a high-level interface that allows Ceph to be accessed in the same way as Amazon S3 or OpenStack Swift, using their HTTP APIs. The higher-level RADOS Object Gateway should not be confused with RADOS, a lower-level interface that the gateway relies on. If RADOS Object Gateway is enabled, it means that users have access to object storage using Ceph, instead of using the OpenStack Swift reference project implementation for object storage. The RADOS Object Gateway can be added separately from this OpenStack installation wizard, after it has completed, by using the cm-radosgw-setup utility (section 3.4). (Footnotes: …are valid only inside the Nova project, and are completely separate storages from Cinder. These have nothing to do with the Cinder reference implementation, so that carrying out a snapshot for these storages does not display such an error. Compute hosts need to have…)
26. "…Usable addresses: 65534. You have to specify the IP address range for Bright-managed instances. What is the last IP address of the Bright-managed instance address range? 10.141.159.255". Figure 2.45: Ending IP Address For Bright-managed Instances. An ending IP address must be set for the Bright-managed instances (figure 2.45). 2.2.18 Number Of Virtual Nodes For Bright-managed Instances: "How many Bright-managed virtual nodes do you want this setup script to create? A Bright-managed virtual node is a regular node that is instantiated as a Virtual Machine, upon power-on, inside your OpenStack cluster. It is then provisioned with a Bright software image. It will run a CMDaemon, and will be accessible via an IP address from your internal network. You can always easily add additional Bright-managed virtual nodes later, for example by cloning the existing ones. How many virtual nodes do you want to create?" Figure 2.46: Number Of Virtual Nodes For Bright-managed Instances. The number of Bright-managed virtual machines must be set (figure 2.46). The suggested number of instances in the wizard conforms to the defaults that OpenStack sets. These defaults are based on an overcommit ratio of virtual CPU to real CPU of 16:1, and of virtual RAM to real RAM of 1.5:1. The instance flavor chosen then determines the suggested number of instances. For example, under these ratios, a host with 4 physical cores and 8GiB RAM corresponds to 64 virtual CPUs and 12GiB of virtual RAM, so a flavor with 1 vCPU and 2GiB RAM would suggest up to 6 instances, with RAM as the limiting resource (the host and flavor sizes here are illustrative assumptions). 2.2.19 DHCP And Static IP Addresses
27. …assign Floating IP addresses to Bright-managed instances. Currently, however, this has to be done via the OpenStack API or Dashboard. The inbound external traffic screen (figure 2.18) reserves a range of IP addresses within the used external network. In this example it happens to fall within the 10.2.0.0/16 network range. If specifying a Floating IP range, a single IP address from this range is always reserved by OpenStack for outbound traffic. This is implemented via source NAT (SNAT). The address is reserved for instances which have not been assigned a Floating IP. Therefore, the IP address range specified in this screen normally expects a minimum of two IP addresses: one reserved for outbound traffic, and one Floating IP. If the administrator would like to allow OpenStack instances to have outbound connectivity, but at the same time not have floating IPs, then this can be done by: not configuring Floating IPs, by selecting the No option; and, in the next screen (figure 2.19), specifying a single IP address. Alternatively, a user instance can be configured to access the external network via the head node; however, this is a bit more complicated to set up. Since outgoing connections use one IP address for all instances, the remaining number of IP addresses is what is then available for Floating IPs. The possible connection options are therefore as indicated by the following…
28. …but allows only for 4096 networks, and typically requires additional manual changes in the switch configuration. Network isolation based on VXLANs does not require changes in the switch configuration, and allows for over 16,000,000 networks, but is typically slower than VLANs. "User network isolation type: VLAN (Virtual LAN) / VXLAN (Virtual Extensible LAN) / access via internal network". Figure 2.38: Network Isolation Type. The network isolation type screen (figure 2.38) allows the administrator to choose what kind of network isolation type, if any, should be set for the user networks. 2.2.11 Choosing The Network That Hosts The User Networks: "There is one existing network which can be used as the VXLAN host network. Create a new VXLAN host network, or use an existing network? Create a new VXLAN host network / internalnet (10.141.0.0/16, nodes on network: 0)". Figure 2.39: User Networks Hosting. If the user networks have their type (VXLAN, VLAN, or no virtual LAN) chosen in section 2.2.10, then a screen similar to figure 2.39 is displayed. This allows one network to be set as the host for the user networks. If there are one or more possible networks already available for hosting the user networks, then one of them can be selected. Alternatively, a completely new network can be created to host them. 2.2.12 Setting The Name Of The Hosting Network For User Networks
29. The table:

Via                    RADOS GW Access On                             RADOS GW Access Off
command line           cm-radosgw-setup -o                            cm-radosgw-setup (installation)
cmsh (device,          set enablekeystoneauthentication yes; commit   set enablekeystoneauthentication no; commit
radosgateway role)
cmgui (device,         ticked checkbox for Enable                     unticked checkbox for Enable
Ceph subtab)           Keystone Authentication                        Keystone Authentication

Table 3.4.3: Allowing OpenStack instances to access Ceph RADOS GW.
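A hedged cmsh walk-through of the toggle summarized in the cmsh row of the table (the node name is an example; the role and parameter names are those given in the table):

Example

[bright70]% device use node001
[bright70->device[node001]]% roles
[bright70->device[node001]->roles]% use radosgateway
[bright70->device[node001]->roles[radosgateway]]% set enablekeystoneauthentication yes
[bright70->device[node001]->roles[radosgateway]]% commit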
30. "…ready to be deployed. SUMMARY: nodes with OpenStack roles: 2; internal network: internalnet; noVNC hostname: bright70.brightcomputing.com; access: …; storage backend (cinder): nfs; storage backend (glance): ceph; storage backend (nova): ceph; Rados Gateway: yes. USER INSTANCES: yes; network isolation: VXLAN; internal net access: yes. EXTERNAL NETWORK: external network: externalnet; external net IP range: …; floating IPs: no; outbound access for VMs: no. BRIGHT-MANAGED INSTANCES: yes; vnode count: 3; vnode IP range: …. g: Generate XML config (dump the current configuration to a file); c: continue (deploy OpenStack); a: abort (abort deployment)." Figure 2.55: Pre-deployment Summary. The pre-deployment summary screen (figure 2.55) displays a summary of the settings that have been entered using the wizard, and prompts the administrator to deploy or abort the installation with the chosen settings. The options can also be saved as an XML configuration, by default cm-openstack-setup.conf, in the directory under which the wizard is running. This can then be used as the input configuration file for the cm-openstack-setup utility, using the -c option. 2.2.28 The State After Running cm-openstack-setup: At this point, the head node has OpenStack installed on it. A regular node that has been configured…
31. …/dev/sda, /dev/sdb, /dev/sdc and sda, sdb, sdc are equivalent. For the OSD Data path, the recommended and default value is: /var/lib/ceph/osd/$cluster-$id. Here, $cluster is the name of the head node of the cluster, and $id is a number starting from 0. For the Journal path, the recommended and default value is: /var/lib/ceph/osd/$cluster-$id/journal. The Journal size, in MiB, can be set for the category or node. A value set here overrides the default global journal size setting (figure 3.3). This is just the usual convention, where a node setting can override a category setting, and a node or category setting can both override a global setting. Also, just as in the case of the global journal size setting, a journal size for categories or nodes must always be greater than zero: defining a value of 0 MiB is invalid, and does not cause the default size that the Ceph software itself provides to be used. At the time of writing (March 2015), the Ceph software provides a default of 5GiB. The Journal size for a category or node is unset by default, which means that the value set for Journal size in this screen is determined by whatever the global journal size setting is, by default. Setting Journal on partition to yes means that the OSD uses a dedicated partition. In this case, the disk setup used is modified so that the first partition, with a size of Journal size, is used.
32. The dedicated physical networks screen (figure 2.15) allows the following to be configured: the compute nodes that host the OpenStack instances; and the network node that provides networking services for OpenStack. For each of these types of node, compute node or network node, the interface can either be shared with internalnet, using an alias interface, or created separately, on a physical network interface. The interface must then be given a name. The name can be arbitrary. 2.1.15 Bright-Managed Instances: "Bright-managed instances are created by the cluster administrators using the Bright CMDaemon API (e.g. using cmsh or cmgui). They are provisioned using a Bright software image, and thus have CMDaemon running, and are managed just like regular Bright compute nodes. Regular OpenStack end users are typically not able to create Bright-managed instances, unless given access to the Bright API. Do you want to enable Bright-managed instances on your OpenStack deployment? Yes / No. Bright-managed instances will be connected to the cluster's internal network (internalnet). OpenStack needs to know which range of IP addresses it can use for Bright-managed instances. The range should not include any IP addresses which are already in use by other devices on the internal network. Please specify the allocation pool for the Bright-managed instances."
33. "…Do you want to enable support for Bright-managed instances? no". Figure 2.43: Enabling Support For OpenStack Instances Under Bright Cluster Manager. There are two kinds of OpenStack instances, also known as virtual nodes, that can run on the cluster. These are called user instances and Bright-managed instances. The screen in figure 2.43 decides if Bright-managed instances are to run. Bright-managed instances are actually a special case of user instances, just managed much more closely by Bright Cluster Manager. Only if permission is set in the screen of section 2.2.9 can an end user access Bright-managed instances. The screens from figure 2.44 to figure 2.46 are only shown if support for Bright-managed instances is enabled.
34. The text-based cm-openstack-setup utility (section 2.2) is part of the standard cluster-tools package. The priorities that the package manager uses are expected to be at their default settings, in order for the installation to work. By default, deploying OpenStack installs the following projects: Keystone, Nova, Cinder, Glance, Neutron, Heat, and Horizon (the dashboard). If Ceph is used, then Bright can also optionally deploy RADOS Gateway, to be used as a Swift-API-compatible object storage system. Using RADOS Gateway instead of the reference Swift object storage is regarded in the OpenStack community as good practice, and is indeed the only object storage system that Bright Cluster Manager manages for OpenStack. Alternative backend storage is possible at the same time as object storage, which means, for example, that block and image storage are options that can be used in a cluster at the same time as object storage. 2.1 Installation Of OpenStack From cmgui: The cmgui OpenStack Setup Wizard is the preferred way to install OpenStack. A prerequisite for running it is that the head node should be connected to the distribution repositories. Some suggestions and background notes are given here, to help the administrator understand what the setup configuration does, and to help simplify deployment. Looking at these notes after a dry run with the wizard will probably be helpful.
35. "…deployment, as a backend data storage system for volumes (Cinder), images (Glance), and root/ephemeral disk images (Nova Compute). Using Ceph as the backend introduces noticeable performance improvements in the areas of instance migration, volume snapshotting, booting new instances, and overall reliability. Do you want to use Ceph as the backend for the following services? Volume storage (Cinder): Yes, store volumes in Ceph / No, store volumes on an NFS share. Image storage (Glance): Yes, store images in Ceph / No, store images on the image storage nodes. Root and ephemeral disks storage (Nova): Yes, store root and ephemeral disks in Ceph / No, store root and ephemeral disks on compute hosts. RADOS Object Gateway, also known as Ceph Object Gateway, is an HTTP gateway for the Ceph object store. It exposes Ceph's storage capabilities using the OpenStack Swift- or AWS S3-compatible APIs. It effectively allows the end users to store their own data as objects inside Ceph, using a REST (HTTP) interface. RADOS gateway is an optional component of an OpenStack deployment. Do you want to deploy the RADOS Object Gateway? Yes, give users access to object storage / No, do not configure object storage. Note that Ceph object storage capabilities can be easily added later on, by manually running cm-radosgw-setup. See the Bright Cluster Manager Administrator Manual for details." Figure 2.7: Ceph Configuration Screen.
36. Bright Cluster Manager can sometimes run into OpenStack issues while implementing the less common OpenStack configurations. As one of the supporting organizations of OpenStack, Bright Computing is committed towards working together with OpenStack developers to help Bright customers resolve any such issue. The end result, after resolving the issue, means that there is a selection pressure that helps evolve that aspect of OpenStack to become convenient and stable for regular use. This process benefits all participants in the OpenStack software ecosystem. OpenStack consists of subsystems, developed as software projects. A software project provides capabilities to OpenStack via the implementation of a backend service, and thereby provides an OpenStack service. The OpenStack service can thus be implemented by interchangeable backends, which projects can provide. For example, the OpenStack Cinder project provides block storage capabilities to OpenStack via the implementation of, for example, NFS or Ceph block storage. OpenStack's block storage service can therefore be implemented by interchangeable backends. As far as the user is concerned, the result is the same. An analogy to OpenStack is operating system packaging, as provided by distributions. The term projects must not be confused with the term used in OpenStack elsewhere, where projects, or sometimes tenants, are used to refer to a group of users.
37. "…displaying a summary of what will be changed. When run in dry mode, no changes will be introduced on the cluster. Dry mode: Yes / No. Before deploying OpenStack, it is advised to go through the OpenStack Deployment Manual. It contains multiple tips on how to prepare your cluster for deploying OpenStack. (Load XML)". Figure 2.2: OpenStack Setup Wizard Overview Screen. The overview screen (figure 2.2) explains what information is gathered, and what the wizard intends to do with the information. The screen also asks for input on the following:
- Should the regular nodes that become part of the OpenStack cluster be rebooted? A reboot installs a new image onto the node, and is recommended if interface objects need to be created in CMDaemon for OpenStack use. Creating the objects is typically the case during the first run ever for a particular configuration. Subsequent runs of the wizard do not normally create new interfaces, and, for small changes, do not normally require a node reboot. If in doubt, the reboot option can be set to enabled. Reboot is enabled as the default.
- Should a dry run be done? In a dry run, the wizard pretends to carry out the installation, but the changes are not really implemented. This is useful for getting familiar with options and their possible consequences. A dry run is enabled as the default.
38. The RADOS GW API is compatible with the APIs of Swift and Amazon S3. 3. Ceph Filesystem access: CephFS provides a filesystem access layer. A component called MDS (Metadata Server) is used to manage the filesystem with RADOS. MDS is used, in addition to the OSD and MON components used by the block and object storage forms, when CephFS talks to RADOS. The Ceph filesystem is not regarded as production-ready by the Ceph project at the time of writing (July 2014), and is therefore not yet supported by Bright Cluster Manager. 3.1.2 Recommended Filesystem For Ceph Use: The storage forms of Ceph (object, block, or filesystem) can use a filesystem for storage. For production use of Ceph, XFS is currently the recommended filesystem option, due to its stability, its ability to handle extreme storage sizes, and its intrinsic ability to deal with the significant sizes of the extended attributes required by Ceph. The nodes that run OSDs are typically regular nodes. Within the nodes, the storage devices used by the OSDs automatically have their filesystems configured to be of the XFS type during the installation of Ceph with Bright Cluster Manager. Use Of datanode For Protection Of OSD Data: Typically, a filesystem used for an OSD is not on the same device as that of the regular node filesystem. Instead, typically, OSD storage consists of several devices that contain an XFS filesystem, with the devices attached to the node. These devices need protection from…
39. A VXLAN (Virtual Extensible LAN) network is similar to a VLAN network in function, but has features that make it more suited to cloud computing. If VXLANs are to be used, then the wizard is able to help create a VXLAN overlay network for OpenStack tenant networks. An OpenStack tenant network is a network used by a group of users allocated to a particular virtual cluster. A VXLAN overlay network is a Layer 2 network overlaid on top of a Layer 3 network. The VXLAN overlay network is a virtual LAN that runs its frames encapsulated within UDP packets, over the regular TCP/IP network infrastructure. It is very similar to VLAN technology, but with some design features that make it more useful for cloud computing needs. One major improvement is the around 16 million VXLANs that can be made to run over the underlying Layer 3 network. This is in contrast to the 4,000 or so VLANs that can be made to run over their underlying Layer 2 network, if the switch port supports that level of simultaneous capability. By default, if the VXLAN network and VXLAN network object do not exist, then the wizard helps the administrator create a vxlanhostnet network and network object (section 2.1.13). The network is attached to, and the object is associated with, all non-head nodes taking part in the OpenStack deployment. If a vxlanhostnet network is pre-created beforehand, then the wizard can guide the administrator to associate a network object with it.
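A hedged cmsh sketch of pre-creating such a network by hand (network mode and the baseaddress/netmaskbits parameters are standard cmsh network settings; the values are the defaults quoted for this network elsewhere in this chapter):

Example

[bright70]% network
[bright70->network]% add vxlanhostnet
[bright70->network[vxlanhostnet]]% set baseaddress 10.161.0.0
[bright70->network[vxlanhostnet]]% set netmaskbits 16
[bright70->network[vxlanhostnet]]% commit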
40. …from the use of this document. Limitation of Liability and Damages Pertaining to Bright Computing, Inc.: The Bright Cluster Manager product principally consists of free software, that is licensed by the Linux authors free of charge. Bright Computing, Inc. shall have no liability, nor will Bright Computing, Inc. provide any warranty for the Bright Cluster Manager, to the extent that is permitted by law. Unless confirmed in writing, the Linux authors and/or third parties provide the program as is, without any warranty, either expressed or implied, including, but not limited to, marketability or suitability for a specific purpose. The user of the Bright Cluster Manager product shall accept the full risk for the quality or performance of the product. Should the product malfunction, the costs for repair, service, or correction will be borne by the user of the Bright Cluster Manager product. No copyright owner or third party who has modified or distributed the program as permitted in this license shall be held liable for damages, including general or specific damages, damages caused by side effects, or consequential damages, resulting from the use of the program or the unusability of the program (including, but not limited to, loss of data, incorrect processing of data, losses that must be borne by you or others, or the inability of the program to work together with any other program), even if a copyright owner or third party had been advised about the possibility of such damages.
41. Ticking the Enable Keystone Authentication checkbox, and saving the setting, makes RADOS GW services available to OpenStack instances, if they have already been initialized (section 3.4.1) to work with Bright Cluster Manager. [Figure 3.10 settings: Server Port: 18888; Server Root; Server Script: s3gw.fcgi; Server Socket: /tmp/radosgw.sock; Module: mod_fastcgi.so; Enable Keystone Authentication (ticked); NSS database path: /var/lib/ceph/nss; Keystone Revocation Interval: 600; Keystone Tokens Cache Size: 500; Keystone Accepted Roles.] Figure 3.10: RADOS GW Advanced Role Settings In cmgui. 3.4.3 Turning Keystone Authentication On And Off For RADOS GW: If command-line installation is carried out for Bright Cluster Manager for the first time with cm-radosgw-setup -o (section 3.4.1), then RADOS GW is initialized for OpenStack instances. This initialization of RADOS GW for OpenStack is not done by the cmsh commits or cmgui checkbox changes (section 3.4.2). So, toggling Ceph availability to OpenStack instances with cmgui and cmsh (section 3.4.3) is only possible if cm-radosgw-setup -o has been run once before, to initialize the configuration (section 3.4.1). If RADOS GW has been installed, then OpenStack instances can access it if Keystone authentication for RADOS GW is on. The following table summarizes how authentication is turned on and off.
42. "…IP range start: 10.141.96.0. IP range end: 10.141.159.255." Figure 2.16: Bright Managed Instances Screen. The Bright-managed instances screen (figure 2.16) allows administrators to enable Bright-managed instances. These instances are also known as virtual nodes. Administrators can then run OpenStack instances using Bright Cluster Manager. End users are allowed to run OpenStack instances managed by OpenStack only if explicit permission has been given. This permission is the default that is set earlier on, in the user instances screen (figure 2.11). If Bright-managed instances are enabled, then an IP allocation scheme must be set. The values used to define the pool are: IP range start, by default 10.141.96.0; and IP range end, by default 10.141.159.255. The screens shown in figures 2.17 to 2.21 are displayed next, if Bright-managed instances are enabled. 2.1.16 Virtual Node Configuration: "How many Bright-managed instances should be created? Additional Bright-managed instances can easily be added later on. Virtual node count. Virtual node prefix: vnode (base name for the virtual nodes). Do you want to use an existing category for your…"
irtual nodes, or do you want to create a new one? Choosing to create a new category for virtual nodes will allow you to also specify a different software image for the virtual nodes.
• Create a new category / Use existing category
Virtual node category: virtual-nodes
Base category: default

Do you want to use an existing software image for your virtual nodes, or do you want to create a new one? The selected software image will have to be modified by the wizard.
• Create a new software image / Use existing software image / Use software image from category
Software image: virtual-node-image
Base software image: default-image

You can assign IP addresses to your virtual nodes using DHCP-assigned IP addresses, or static IP addresses. These will be in sequence, starting from an address that you must specify.
• DHCP / Static

Figure 2.17: Virtual Node Configuration Screen

The virtual node configuration screen (figure 2.17) allows the administrator to set the number, category, and image for the virtual nodes. The suggestions presented in this screen can be deployed in a test cluster:
• Virtual node count: 5
• Virtual node prefix: vnode

A Virtual node category can be set for virtual nodes further down in the screen. During a new deployment, virtual nodes can be placed in categories, either by creating a new category, or by using an existing category:
• If the Create a new category radio button is selected
ject Nova. The main differences between these two types of instances include the following:

• User instances are typically created and managed by the end users of the deployment, either directly via the OpenStack API, or via OpenStack Dashboard, outside of direct influence from Bright Cluster Manager. User instances are provisioned using any OpenStack-compatible software image provided by the user, and thus have no CMDaemon running on them. User instances are attached to user-created virtual networks. Optionally, they can be allowed to connect directly to the cluster's internal network (section 2.1.11). The number of user instances that can be run is not restricted in any way by the Bright Cluster Manager license.

• Bright-managed instances, sometimes also called virtual nodes, are typically created and managed by the cluster/cloud administrators using CMDaemon, via cmsh, cmgui, or pythoncm. They are provisioned using a Bright software image, and therefore have CMDaemon running on them. Because of CMDaemon, the administrator can manage Bright-managed instances just like regular nodes under Bright Cluster Manager. Bright-managed instances are always connected to the cluster's internal network, but can also be attached to user-created networks.

To allow user instances to be created, the Yes radio button should be ticked in this screen. This will lead to the wizard asking about user instance network isolation (VLAN/VXLAN). Whether or
ll guide you through the process of deploying OpenStack. The wizard will first ask you several questions. It will then display a summary outlining the OpenStack deployment. Once the deployment summary has been accepted, it will deploy OpenStack.

Information gathered from the user by the wizard includes:
- basic cluster credentials, with the MySQL root password among those
- category and software image selection for the OpenStack compute nodes
- selecting OpenStack compute host nodes (run nova-compute, host VMs)
- selecting the network node (hosts neutron agents and Floating IPs)
- whether to configure OpenStack to support Bright-managed and/or user instances
- tenant network isolation type (VLAN or VXLAN)
- generic network configuration
- configuring access to the external network, if any, for the network node
- gathering other network interface information

Major changes introduced in the cluster's configuration by the wizard may include:
- creation of network objects to represent the VLAN host or VXLAN host network
- creation of an OpenStack software image, used for booting OpenStack nodes
- creation of a virtual node software image, used for booting Bright-managed instances
- assignment of OpenStack roles to selected OpenStack nodes
- creation of node categories dedicated for OpenStack
- ensuring all selected OpenStack nodes are in an OpenStack-enabled category
- introducing network interface changes to OpenStack nodes

Continue
Figure
mary & Deployment

VNC Proxy Hostname
In order to be able to remotely access the virtual consoles of the virtual nodes from anywhere on the external networks, a Fully Qualified Domain Name (FQDN), domain name, or an IP address resolving and routing to the head node has to be specified. It will be used as the so-called VNC Proxy Hostname.
What's the externally visible FQDN, or an external IP, of the headnode?
VNC Proxy Hostname: bright70.brightcomputing.com

Figure 2.21: VNC Proxy Hostname Screen

The VNC proxy hostname screen (figure 2.21) sets the FQDN or external IP address of the head node of the cluster, as seen by a user that would like to access the consoles of the virtual nodes from the external network. For a cluster with a hostname bright70, and resolving within the brightcomputing.com network domain, the value of VNC Proxy Hostname would be bright70.brightcomputing.com.

2.1.21 Summary
Summary
OpenStack setup wizard has been completed; however, the specified OpenStack deployment configuration has not been deployed to the cluster yet. This can be done automatically by clicking the Deploy button below. Alternatively, clicking the Show Configuration button will produce an XML configuration file, which can be further customized if needed, and then used as the input configuration file for the cm-openstack-setup command line utility.
Overview
Support for User instances
n Of OpenStack From The Shell 27
2.2.1 Start Screen 28
2.2.2 Informative Text Prior To Deployment 28
2.2.3 Pre-Setup Suggestions 29
2.2.4 MySQL root And OpenStack admin Passwords 29
2.2.5 Reboot After Configuration 30
2.2.6 Ceph Options 30
2.2.7 Internal Network To Be Used For OpenStack 32
2.2.8 User Instances 32
2.2.9 Virtual Instance Access To Internal Network 33
2.2.10 Network Isolation Type
2.2.11 Choosing The Network That Hosts The User Networks
2.2.12 Setting The Name Of The Hosting Network For User Networks
2.2.13 Setting The Base Address Of The Hosting Network For User Networks
2.2.14 Setting The Number Of Netmask Bits Of The Hosting Network For User Networks
2.2.15 Enabling Support For Bright-managed Instances
2.2.16 Starting IP Address For Bright-managed Instances
2.2.17 Ending IP Address For Bright-managed Instances
2.2.18 Number Of Virtual Nodes For Bright-managed Instances
2.2.19 DHCP And Static IP Addresses
2.2.20 Floating IPs
2.2.21 External Network: Starting Floating IP
2.2.22 External Network: Ending Floating IP
2.2.23 VNC Proxy Hostname
2.2.24 Dedicated Default Tenant
2.2.25 Nova Compute Hosts
2.2.26 Neutron Network Node
2.2.27 Pre-deployment Summary
2.2.28 The State After Running cm-openstack-setup
3 Ceph Installation
nd Properties

3.4.1 RADOS GW Installation And Initialization With cm-radosgw-setup
The cm-radosgw-setup utility can be run on the head node after installing Ceph with cm-ceph-setup. The cm-radosgw-setup utility configures RADOS so that it provides a REST API, called the RADOS GW service.

If cm-radosgw-setup is run with the -o option, then RADOS GW is integrated with OpenStack by enabling Keystone authentication for it.

Example

[root@bright70 ~]# cm-radosgw-setup -o
[root@bright70 ~]#

If cm-radosgw-setup is run without the -o option, then RADOS GW is installed, but Keystone authentication is disabled, and the gateway is therefore then not available to OpenStack instances.

Command line installation with the -o option initializes RADOS GW for OpenStack instances the first time it is run in Bright Cluster Manager.

3.4.2 Setting RADOS GW Properties
RADOS GW Properties In cmsh
RADOS GW properties can be managed in cmsh by selecting the device, then dropping into the radosgateway role:

[bright70]% device use bright70
[bright70->device[bright70]]% roles
[bright70->device[bright70]->roles]% use radosgateway
[bright70->device[bright70]->roles[radosgateway]]% show
Parameter                      Value
Name                           radosgateway
Provisioning associations      <0 internally used>
Revision
Type                           RadosGatewayRole
Module                         mod_fastcgi.so
Server Port                    18888
Server Root                    /var/www
ndCinder>ceph</backendCinder>
<backendGlance>ceph</backendGlance>
<backendNova>ceph</backendNova>
<setupRadosgw>yes</setupRadosgw>
<virtualNodeCount>3</virtualNodeCount>
<virtualNodePrefix>vnode</virtualNodePrefix>
<virtualNodeCategory>virtual-nodes</virtualNodeCategory>
<virtualNodeSoftwareImage>default-image</virtualNodeSoftwareImage>
<softwareImage>openstack-image</softwareImage>
<VNCProxyHostname>openstack6.brightcomputing.com</VNCProxyHostname>
<roles>
  <categories>
    <computeRole>
      <item>openstack-compute-hosts</item>
    </computeRole>
  </categories>
  <nodes>
    <networkRole>
      <item>node001</item>

Figure 2.23: OpenStack Configuration Screen

The configuration can be saved with the Save Configuration option of figure 2.22. After exiting the wizard, the XML file can be directly modified, if needed, in a separate text-based editor.

Using A Saved Configuration And Deploying The Configuration
Using a saved XML file is possible:

• The XML file can be used as the configuration starting point for the text-based cm-openstack-setup utility (section 2.2), if run as:

[root@bright70 ~]# cm-openstack-setup -c <XML file>

• Alternatively, the XML file can be deployed as the configuration by launching the cmgui wizard, and then clicking on the
nistrator decide if end users are to be allowed to create user instances.

2.2.9 Virtual Instance Access To Internal Network
This OpenStack deployment can optionally be configured to give the virtual instances created by end users access to the cluster's internal network. Doing so might have serious security implications, and is advised only if the users creating the VMs can be considered trusted. Essentially, enabling this gives users the ability to start a virtual machine, to which they have root access, that is connected directly to the cluster's internal network.
Do you want to give user instances access to the internal network? no

Figure 2.37: Virtual Instance Access To Internal Network

The screen in figure 2.37 lets the administrator allow virtual instances access to the internal network. This should only be allowed if the users creating the instances are trusted. This is because the creator of the instance has root access to the instance, which is in turn connected directly to the internal network of the cluster, which means all the packets in that network can be read by the user.

2.2.10 Network Isolation Type
Which network isolation type is to be used for isolating the traffic on user networks? User network isolation is optional, as user instances can be created on the cluster internal network. Network isolation based on VLANs is faster than VXLANs
ode that handles OpenStack networking services.

If a category is used to set up network nodes, then either an existing category can be used, or a new category can be created. If a new category is to be created, then openstack-network-hosts is its suggested name.

The option to specify a node as a network node is most convenient in the typical case, when all of the non-head nodes have been set as belonging to the compute node category in the preceding screen (figure 2.5). Indeed, in the case that all non-head nodes have been set to be in the compute node category, the category options displayed in figure 2.6 are then not displayed, leaving only the option to specify a particular node.

A network node inherits many of the OpenStack-related compute node settings, but will have some exceptions to the properties of a compute node. Many of the exceptions are taken care of by assigning the openstacknetwork role to any network nodes, or network node categories, as is done in this screen.

2.1.6 Ceph Configuration
Ceph Configuration
Ceph is a distributed object store and file system designed to provide excellent performance, reliability, and scalability. It can be used also as a drop-in replacement for OpenStack's Swift. This wizard can configure the existing Bright-managed Ceph d
odes not being a part of the OpenStack cluster. The automatic customization of the software image involves, among other things, installing OpenStack RPMs to the software image.
Create/reuse image: Create new image (Use existing image cannot be used if only one software image is defined)
OpenStack software image: openstack-image
Base software image: default-image

Figure 2.9: OpenStack Software Image Selection Screen

The OpenStack software image selection screen (figure 2.9) decides the software image name to be used on the nodes that run OpenStack. An existing image name can be used if there is more than one available image name. Creating a new image, with the default name of openstack-image, is recommended. This image, openstack-image, is to be the base OpenStack image, and it is cloned and modified from an original pre-OpenStack-deployment image.

By default, the name openstack-image is chosen. This is recommended, because the image that is to be used by the OpenStack nodes has many modifications from the default image, and it is useful to keep the default image around for comparison purposes.

2.1.9 OpenStack User Settings: Default Tenant Creation Choice
OpenStack User Settings
Do you want CMDaemon
om being wiped during the reprovisioning that takes place during a reboot of regular nodes. The recommended way to protect storage devices from being wiped is to set the datanode property of their node to yes (page 184 of the Administrator Manual).

3.1.3 Hardware For Ceph Use
An absolute minimum installation can be carried out on two nodes, where:
• 1 node, the head node, runs one Ceph Monitor and the first OSD
• 1 node, the regular node, runs the second OSD

This is, however, not currently recommended, because the first OSD on the head node requires its own Ceph-compatible filesystem. If that filesystem is not provided, then Ceph on the cluster will run, but in a degraded state. Using such a system to try to get familiar with how Ceph behaves in a production environment with Bright Cluster Manager is unlikely to be worthwhile.

A more useful minimum, if there is a node to spare, is installing Ceph over 3 nodes, where:
• 1 node, the head node, runs one Ceph Monitor
• 1 node, the regular node, runs the first OSD
• 1 more node, also a regular node, runs the second OSD

For production use, a redundant number of Ceph Monitor servers is recommended. Since the number of Ceph Monitoring servers must be odd, then at least 3 Ceph Monitor servers, with each on a separate node, are recommended for production purposes. The recommended minimum of nodes for p
ompute nodes
- selecting OpenStack compute host nodes (run nova-compute, host VMs)
- selecting the network node (hosts neutron agents and Floating IPs)
- whether to configure OpenStack to support Bright-managed and/or user instances
- tenant network isolation type (VLAN or VXLAN)
- generic network configuration

The major changes introduced in the cluster's configuration by the wizard may include:
- creation of network objects to represent the VLAN host or VXLAN host network
- creation of an OpenStack software image, used for booting OpenStack nodes
- creation of a virtual node software image, used for booting Bright-managed instances
- assignment of OpenStack roles to selected OpenStack nodes
- creation of node categories dedicated for OpenStack
- ensuring all selected OpenStack nodes are in an OpenStack-enabled category
- introducing network interface changes to OpenStack nodes (note that the headnode's network configuration will not be changed)

After completing the deployment, it is advised to reboot the non-headnode nodes participating in the OpenStack deployment, or at a minimum perform a SYNC install. If the selected nodes are UP, then the reboot can also be done by the wizard.
Should the wizard reboot the OpenStack nodes as part of the deployment process? Yes / No

It's possible to run this wizard in a normal mode and in a dry-run mode. When run in the normal mode, changes will be introduced at the very end of the wizard, aft
owing for Floating IPs, and/or enabling outbound connectivity from the virtual machines (VMs) to the external network via the network node, the network node can be pre-configured manually, according to how it is connected to the internal and external networks. Otherwise, if the node is not pre-configured manually, the wizard then carries out a basic configuration on the network node that:
• configures one physical interface of the network node to be connected to the internal network, so that the network node can route packets for nodes on the internal network
• configures the other physical interface of the network node to be connected to the external network, so that the network node can route packets from external nodes

The wizard asks the user several questions on the details of how OpenStack is to be deployed. From the answers, it generates an XML document with the intended configuration. Then, in the back end, largely hidden from the user, it runs the text-based cm-openstack-setup script with this configuration on the active head node. In other words, the wizard can be regarded as a GUI front end to the cm-openstack-setup utility.

The practicalities of executing the wizard: the explanations given by the wizard during its execution steps are intended to be verbose enough so that the administrator can follow what is happening.

The wizard is accessed via the OpenStack resource in the left pane of cmgui (figure 2.1). Launching the wiza
perating system distribution consists of subsystems, maintained as packages and their dependencies. Some subsystems provide capabilities to the operating system via the implementation of a backend service. The service can often be implemented by interchangeable backends for the subsystem.

A specific example for an operating system distribution would be the mailserver subsystem, which provides mail delivery capabilities to the operating system via the implementation of, for example, Postfix or Sendmail. The mailserver package and dependencies can therefore be implemented by interchangeable backends. As far as the e-mail user is concerned, the end result is the same.

The project that implements the backend can also change, if the external functionality of the project remains the same.

Some of the more common OpenStack projects are listed in the following table:

Service            OpenStack Project   Managed By Bright
Compute            Nova                Y
Object Storage     Swift               depends*
Block Storage      Cinder              Y
Networking         Neutron             Y
Dashboard          Horizon             Y
Identity Service   Keystone            Y
Orchestration      Heat                Y
Telemetry          Ceilometer          N
Database Service   Trove               N
Image Service      Glance              Y

*Bright Cluster Manager does not manage the OpenStack reference implementation for Swift object storage, but does manage a replacement, the API-compatible Ceph RADOS Gateway implementation.

Not all of these projects are integrated, or needed, by Bright Cluster Manager for a working OpenStack system. Fo
ph Object Store. It exposes Ceph's storage capabilities using the OpenStack Swift or AWS S3 compatible APIs. It effectively allows the end users to store their own data as objects inside Ceph, using a REST HTTP interface. RADOS gateway is an optional component of an OpenStack deployment.
Do you want to enable RADOS Object Gateway?

Figure 2.34: Ceph Object Gateway

The Ceph RADOS gateway screen (figure 2.34) lets the administrator set the Ceph RADOS gateway service to run when deployment completes.

2.2.7 Internal Network To Be Used For OpenStack
There are multiple internal networks on the cluster. One of those networks has to be picked as the main OpenStack network. All nodes which will become OpenStack nodes must be connected to this network. Which internal network to use for OpenStack?
internalnet: 10.141.0.0/16 (nodes on network: …)
internalnet1: 10.141.0.0/16 (nodes on network: 0)

Figure 2.35: Internal Network To Be Used For OpenStack

If there are multiple internal networks, then the internal network selection screen (figure 2.35) lets the administrator choose which of them is to be used as the internal network to which the OpenStack nodes are to be connected.

2.2.8 User Instances
Do you want to allow end users to create user instances? no

Figure 2.36: User Instances

The user instances screen (figure 2.36) lets the admi
possibility of such damages, unless such copyright owner or third party has signed a writing to the contrary.

Table of Contents
0.1 About This Manual
0.2 About The Manuals In General
0.3 Getting Administrator-Level Support iv
1 Introduction 1
2 OpenStack Installation 3
2.1 Installation Of OpenStack From cmgui 3
2.1.1 OpenStack Setup Wizard Overview 6
2.1.2 MySQL Credentials & OpenStack admin User 7
2.1.3 OpenStack Category Configuration 7
2.1.4 OpenStack Compute Hosts 8
2.1.5 OpenStack Network Node 9
2.1.6 Ceph Configuration 10
2.1.7 OpenStack Internal Network Selection 12
2.1.8 OpenStack Software Image Selection 12
2.1.9 OpenStack User Settings: Default Tenant Creation Choice 13
2.1.10 User Instances 13
2.1.11 User Instance Isolation from Internal Cluster Network 15
2.1.12 Network Isolation 15
2.1.13 VXLAN Configuration 16
2.1.14 Dedicated Physical Networks 18
2.1.15 Bright Managed Instances 19
2.1.16 Virtual Node Configuration 20
2.1.17 Inbound External Traffic 22
2.1.18 Allow Outbound Traffic 23
2.1.19 External Network Interface for Network Node 24
2.1.20 VNC Proxy Hostname 25
2.1.21 Summary 25
2.2 Installatio
process. The bootstrap option can take the following values:

• auto: This is the default and recommended option. If the majority of nodes are tagged with auto during the current configuration stage, and configured to run Ceph Monitors, then:
  - If they are up, according to Bright Cluster Manager, at the time of deployment of the setup process, then the Monitor Map is initialized for those Ceph Monitors on those nodes.
  - If they are down at the time of deployment of the setup process, then the maps are not initialized.

• true: If nodes are tagged true, and configured to run Ceph Monitors, then they will be initialized at the time of deployment of the setup process, even if they are detected as being down during the current configuration stage.

• false: If nodes are tagged false, and configured to run Ceph Monitors, then they will not be initialized at the time of deployment of the setup process, even if they are detected as being up during the current configuration stage.

• The data path is set by default to /var/lib/ceph/mon/$cluster-$hostname, where:
  - $cluster is the name of the Ceph instance. This is ceph by default.
  - $hostname is the name of the node being mapped.

• The Back option can be used, after accessing the editing screen, to return to the Ceph Monitors configuration screen (figure 3.4).

3.2.4 Ceph OSDs Configuration
This section allows you
r example, Bright Cluster Manager already has an extensive monitoring system, and therefore does not, for now, implement Ceilometer, while Trove is ignored for now because it is not yet production-ready.

Projects that are not yet integrated can, in principle, be added by administrators on top of what is deployed by Bright Cluster Manager, even though this is not currently supported or tested by Bright Computing. Integration of the more popular of such projects, and greater integration in general, is planned in future versions of Bright Cluster Manager.

This manual explains the installation, configuration, and some basic use examples of the OpenStack projects that have so far been integrated with Bright Cluster Manager.

OpenStack Installation

To Use Ceph, It Must Be Installed Before Deploying OpenStack
If OpenStack is to access Ceph for storage purposes, for any combination of block storage (Cinder), image storage (Glance), ephemeral storage (Nova), or object storage (RADOS Gateway), then the Ceph components must first be installed with cm-ceph-setup (Chapter 3) before starting the OpenStack installation procedure covered here.

Ways Of Installing OpenStack
The version of OpenStack that is integrated with Bright Cluster Manager can be installed in the following two ways:
• Using the GUI-based Setup Wizard button from within cmgui (section 2.1). This is the recommended installation method.
• Using the text-bas
rbd

The cmgui equivalent of the overview command is the Overview tab, accessed from within the Ceph resource.

Some of the major Ceph configuration parameters can be viewed, and their values managed, by CMDaemon from ceph mode. The show command shows parameters and their values for the ceph instance.

Example

[bright70->ceph]% show ceph
Parameter                      Value
Admin keyring path             /etc/ceph/ceph.client.admin.keyring
Bootstrapped                   yes
Client admin key               AQDkUM5T4LhZFxAAJOHvzvbyb9txH0bwvxUSQ==
Cluster networks
Config file path               /etc/ceph/ceph.conf
Creation time                  Thu, 25 Sep 2014 13:54:11 CEST
Extra config parameters
Monitor daemon port            6789
Monitor key                    AQDkUM5TwM21EhAA0CcdH+UFhGJ902n3y+Avng==
Monitor keyring path           /etc/ceph/ceph.mon.keyring
Public networks
Revision
auth client required cephx     yes
auth cluster required cephx    yes
auth service required cephx    yes
filestore xattr use omap       no
fsid                           abf8ebaf-711c0-4d75-badc-3b81bc2b74d8
mon max osd                    10000
mon osd full ratio             0.95
mon osd nearfull ratio         0.85
name                           ceph
osd pool default min size      0
osd pool default pg num        8
osd pool default pgp num       8
osd pool default size          2
version                        0.80.5
[bright70->ceph]%

The cmgui equivalent of these settings is the Settings tab, accessed from within the Ceph resource.

Ceph OSD Properties
From within ceph mode, the osdinfo command for the Ceph instance displays the nodes that are providing OSDs, along with their OSD I
rd is only allowed if the Bright Cluster Manager license (Chapter 4 of the Installation Manual) entitles the license holder to use OpenStack.

[Figure: cmgui window with the OpenStack resource selected in the resource tree, showing the message "To configure OpenStack on your cluster run the Setup Wizard" and a Setup Wizard button]
Figure 2.1: The Setup Wizard Button In cmgui's OpenStack Resource

The wizard runs through the screens in sections 2.1.1–2.1.21, described next.

2.1.1 OpenStack Setup Wizard Overview
OpenStack Setup Wizard Overview
This deployment wizard guides through the process of deploying OpenStack. The deployment process is composed of two parts. In the first part, the wizard gathers all the necessary information from the user. It then summarizes the gathered information before proceeding. During the second part, the wizard uses this information to introduce changes to the cluster's configuration. Once this is done, OpenStack is ready to be used.
Information gathered from the user by the wizard includes:
- basic cluster credentials, with the MySQL root password among those
- category and software image selection for the OpenStack c
rnal Network Selection
There are multiple internal networks configured in this cluster. One of those internal networks must now be selected as the main internal network of the OpenStack deployment. Depending on the choices made later in the wizard, the selected network might become the network used for optionally hosting Bright-managed instances, and/or, optionally, the network to which user-created instances can connect.
Please specify which internal network is to be used for connecting OpenStack compute hosts.
OpenStack internal network: internalnet

Figure 2.8: OpenStack Internal Network Selection Screen

The OpenStack internal network selection screen (figure 2.8) decides the network to be used on the nodes that run OpenStack.

2.1.8 OpenStack Software Image Selection
OpenStack Software Image Selection
All of the nodes participating in an OpenStack deployment must use a dedicated, customized software image. You can either specify an existing software image, which is to be configured for OpenStack nodes, or provide the name for the new software image, which will then be created. The software image used for OpenStack nodes will have to be automatically customized by the wizard. It is therefore highly advisable to create a new software image, dedicated only for OpenStack nodes, instead of sharing the same software image with n
roduction purposes is then 5:
• 2 regular nodes running OSDs
• 2 regular nodes running Ceph Monitors
• 1 head node running a Ceph Monitor

Drives usable by Ceph: Ceph OSDs can use any type of disk that presents itself as a block device in Linux. This means that a variety of drives can be used.

3.2 Ceph Installation With cm-ceph-setup
3.2.1 cm-ceph-setup
Ceph installation for Bright Cluster Manager can be carried out with the ncurses-based cm-ceph-setup utility. It is part of the cluster-tools package that comes with Bright Cluster Manager. If the Ceph packages are not already installed, then the utility is able to install them for the head and regular nodes, assuming the repositories are accessible, and that the package manager priorities are at their defaults.

3.2.2 Starting With Ceph Installation, Removing Previous Ceph Installation
The cm-ceph-setup utility can be run as root from the head node.
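For instance, an interactive session can be started as follows (a minimal sketch; the utility is interactive, so the exact screens that follow depend on the cluster):

Example

[root@bright70 ~]# cm-ceph-setup

The ncurses dialogs described in the following sections then guide the administrator through the setup.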
s the network isolation technology in the preceding screen. If the network isolation technology chosen was VLAN, then a closely similar screen is shown instead.

For the VXLAN screen, the following options are suggested, with overrideable defaults as listed:
• VXLAN Range start: default 1
• VXLAN Range end: default 50000

The VXLAN range defines the number of user IP networks that can exist at the same time. While the range can be set to be 16 million, it is best to keep it to a more reasonable size, such as 50,000, since a larger range slows down Neutron significantly.

An IP network is needed to host the VXLANs, and allow the tunneling of traffic between VXLAN endpoints. This requires:
• either choosing an existing network that has already been configured in Bright Cluster Manager (but not internalnet),
• or specifying the following, in order to create the network:
  - Base address: default 10.161.0.0
  - Netmask bits: default 16
  - A new network Name: default vxlanhostnet

VXLAN networking uses a multicast address to handle broadcast traffic in a virtual network. The default multicast IP address that is set, 224.0.0.1, is unlikely to be used by another application. However, if there is a conflict, then the address can be changed using the CMDaemon OpenStackVXLANGroup directive (Appendix C, page 525 of the Administrator Manual).
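Purely as an illustrative sketch of such a change: the directive name comes from the text above, but the file location, the exact syntax, and the multicast value shown are assumptions that should be checked against Appendix C of the Administrator Manual. CMDaemon directives are set in the CMDaemon configuration file on the head node, after which CMDaemon is restarted:

Example

# Hypothetical excerpt from /cm/local/apps/cmd/etc/cmd.conf:
# override the default 224.0.0.1 VXLAN multicast group on a conflict
OpenStackVXLANGroup = "239.1.1.1"

[root@bright70 ~]# service cmd restart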
ser

MySQL Credentials & OpenStack Admin User
Please specify the password for the root user of the MySQL server on the head node. This password will be used by the wizard, and will then be forgotten (it will not be stored anywhere). The password is required to create individual MySQL databases for various OpenStack services.
The main administrative user in an OpenStack cluster is the admin user. Please specify the desired password for the admin user account.

Figure 2.3: MySQL Credentials & OpenStack admin User Screen

The MySQL and OpenStack credentials screen (figure 2.3) allows the administrator to set passwords for the MySQL root user and the OpenStack admin user.

2.1.3 OpenStack Category Configuration
OpenStack Category Configuration
OpenStack nodes will be distributed across a set of categories, which will specialize them according to the roles they play in the OpenStack deployment (compute, network, storage, etc.). These categories can be specified at later stages. At this stage, please specify which existing category is to be used as the base for any newly created OpenStack categories later in the wizard.
OpenStack category: defaul
software. It is based on an object store layer called RADOS (Reliable Autonomic Distributed Object Store), which consists of Ceph components called OSDs (Object Storage Devices) and MONs (Monitoring Servers). These components feature heavily in Ceph. OSDs deal with storing the objects to a device, while MONs deal with mapping the cluster. OSDs and MONs, together, carry out object storage and block storage within the object store layer. The stack diagram of figure 3.1 illustrates these concepts.

[Figure 3.1 stack diagram: CephFS, RBD, and RADOS GW, with MDS alongside, sit on top of the OSD and MON components, which in turn run on the OS and hardware layers]
Figure 3.1: Ceph Concepts

On top of the object store layer are 3 kinds of access layers:

1. Block device access: RADOS Block Device (RBD) access can be carried out in two slightly different ways:
(i) via a Linux kernel module-based interface to RADOS. The module presents itself as a block device to the machine running that kernel. The machine can then use the RADOS storage, that is typically provided elsewhere.
(ii) via the librbd library, used by virtual machines based on qemu or KVM. A block device that uses the library on the virtual machine then accesses the RADOS storage, which is typically located elsewhere.

2. Gateway API access: RADOS Gateway (RADOS GW) access provides an HTTP REST gateway to RADOS. Applications can talk to RADOS GW to access object storage in a high-level manner, instead of talking to RADOS directly at a low
specified data path. Please note that you must specify a whole block device, not a partition. The Data path field can be used to specify the data path for OSDs. By default its value is /var/lib/ceph/osd/$cluster-$id, where $cluster is the name of the Ceph instance (usually ceph), and $id is the unique OSD id. It is recommended to use the default value of the data path field.

Number of OSDs:
Block devices:
Data path: /var/lib/ceph/osd/$cluster-$id
Journal path:
Journal size:
Journal on partition:
Shared journal device:
Shared journal size:

Figure 3.7: Ceph Installation OSDs: Editing Block Device Path, OSD Path, Journals, For Categories Or Nodes

The Edit option in figure 3.6 opens up a screen (figure 3.7) that allows the editing of the properties of existing or newly added Ceph OSDs, for a node or category. In this screen:

• When considering the Number of OSDs and the Block devices, then it is best to set either the Number of OSDs, or the Block devices. Setting both the number of OSDs and block devices is also possible, but then the number of OSDs must match the number of block devices.

• If only a number of OSDs is set, and the block devices field is left blank, then each OSD is given its own filesystem under the data path specified.

• Block devices can be set as a comma- or a space-separated list, with no difference in meaning.

Exampl
t

Figure 2.4: OpenStack Category Configuration Screen

The category configuration screen (figure 2.4) sets the node category that will be used as the template for several OpenStack categories that are going to be created by the wizard.

2.1.4 OpenStack Compute Hosts
OpenStack Compute Hosts
In OpenStack deployments, the majority of machines will likely be used to host the virtual machines managed by OpenStack. These nodes will be referred to as the compute hosts. In this screen, a Bright category will be set up to hold all the compute hosts.
Do you want to use an existing category for the compute hosts, or create a new category? Using the existing category will effectively assign the OpenStack compute role to all the nodes already in that category.
Use existing category / Create new category
OpenStack compute hosts category: openstack-compute-hosts
Please select any additional nodes which are to be moved to the category specified above.
All Nodes: node001

Figure 2.5: OpenStack Compute Hosts Screen

The compute hosts configuration screen (figure 2.5) allows the administrator to take nodes which are still available, and put them into a category that will have a compute role. The category can be
2.1.14 Dedicated Physical Networks
Dedicated Physical Networks
The VXLAN host network is the network used for transmitting the traffic of the user-created virtual networks. The VXLAN host network can be either set up on top of a dedicated network fabric, or, alternatively, it can share the fabric with the main internal network (internalnet).

Compute hosts: All compute nodes need to have a network interface on the VXLAN host network (vxlanhostnet). For the compute nodes which do not yet have such an interface, do you want the wizard to create a dedicated physical interface on that network, or do you want it to configure the node to share the interface connected to the internal network (internalnet) via an alias interface?
Share the interface with internalnet via an alias interface / Create a dedicated physical interface

Network node: The network node needs to have an interface on the VXLAN host network (vxlanhostnet). Do you want the setup to create a dedicated physical interface on that network, or do you want to share the interface connected to the internal network (internalnet)?
Share the interface with internalnet via an alias interface / Create a dedicated physical interface

Figure 2.15: Dedicated Physical Networks Screen

Th
71. tallation General Cluster Settings If the setup option is chosen then a screen for the general Ceph cluster settings figure 3 3 is displayed The general settings can be adjusted via subscreens that open up when selected The possible general settings are e Public network This is the network used by Ceph Monitoring to communicate with OSDs For a standard default Type 1 network this is internalnet e Private network This is the network used by OSDs to commu nicate with each other For a standard default Type 1 network this is internalnet e Journal size The default OSD journal size in MiBs used by an OSD The actual size must always be greater than zero This is a general setting and can be overridden by a category or node setting later on Defining a value of 0 MiB here means that the default that the Ceph software itself provides is set At the time of writing March 2015 Ceph software provides a default of 5GiB Bright Computing Inc 3 2 Ceph Installation With cm ceph setup Network Types are discussed in section 3 3 6 of the Installation Manual Selecting the Next option in figure 3 3 continues on with the next ma jor screen of the setup procedure and displays a screen for Ceph Monitors configuration figure 3 4 3 2 3 Ceph Monitors Configuration This section allows you to assign Ceph Monitor roles to categories or nodes After assigning a role its configuration can be edited or the role can be removed
72. ted with the configuration option of this screen If the Allow Outbound Traffic screen appears then specifying a single IP address for the Outbound IP value in the current screen fig ure 2 19 sets up the configuration to allow outbound connections only A decision was made earlier about allowing user instances to access the internal network of the cluster section 2 1 11 In the dialog of fig ure 2 19 if user instances are not enabled then the administrator is of fered the option once more to allow access to the internal network of the cluster by user instances Bright Computing Inc 23 OpenStack Installation 2 1 19 External Network Interface for Network Node External Network Interface for Network Node In order for the network node to provide routing functionality it needs a connection to the external network That connection could be set up using a dedicated interface or if the network node does not have an extra network interface available a tagged VLAN interface can be used The network node node001 does not have a interface configured on the external network externalnet lt needs to have one for the Floating IPs to work Do you want the wizard to configure node001 s network connectivity to the external network by creating a dedicated physical interface or by creating a tagged VLAN interface Create dedicated physical interface J Create tagged VLAN interface Select which base interface is to be used for
73. tegory The software im age that the virtual node category inherits from its base cate gory is set as the software image Setting the software image means that the wizard will copy over the properties of the associated base image to the new software image and configure the new software image with the required virtual ization modules The instance that uses this category then uses a modified image with virtualization modules that enable it to run on a virtual node e Ifthe Use existing category button is selected then The Virtual node category can be selected from the ex isting categories In a newly installed cluster the only possible value is default Setting the category means that the wizard will copy over the prop erties of the existing category to the new virtual category and con figure the new software image with the required virtualization mod ules The instance that uses the configured image is then able to run it on a virtual node The virtual nodes can be configured to be assigned one of the follow ing types of addresses e DHCP e Static The addresses in either case go in a sequence and begin with an ad dress that the administrator sets in the wizard Bright Computing Inc 21 22 OpenStack Installation 2 1 17 Inbound External Traffic Inbound External Traffic Enabling floating IPs makes both user and Bright managed instances accessible to inbound connections coming from the external ne
the ceph -s command, which can also be used for querying the status of the Ceph cluster.

Figure 3.8: Ceph Installation Completion

After selecting the Finish option of figure 3.6, the Ceph setup proceeds. On successful completion, a screen as in figure 3.8 is displayed.

3.3 Checking And Getting Familiar With Ceph Items After cm-ceph-setup
3.3.1 Checking On Ceph And Ceph-related Files From The Shell
The status of Ceph can be seen from the command line by running:

Example

[root@bright70 ~]# ceph -s
    cluster d9422c23-321e-4fa0-b510-ca8e09a0a1fc
     health HEALTH_OK
     monmap e1: 1 mons at {bright70=10.141.255.254:6789/0}, election epoch 2, quorum 0 bright70
     osdmap e6: 2 osds: 2 up, 2 in
      pgmap v9: 192 pgs, 3 pools, 0 bytes data, 0 objects
            2115 MB used, 18340 MB / 20456 MB avail
                 192 active+clean

The -h option to ceph lists many options. Users of Bright Cluster Manager should usually not need to use these, and should find it more convenient to use the cmgui or cmsh front ends instead.

Generated XML Configuration File
By default, an XML configuration file is generated by the cm-ceph-setup utility, and stored after a run in the current directory as cm-ceph-setup-config.xml.
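For scripted checks, the standard Ceph client also offers a terser form (a plain Ceph command, not Bright-specific; shown here as a sketch):

Example

[root@bright70 ~]# ceph health
HEALTH_OK

This returns a single status line, which makes it convenient in cron jobs or monitoring scripts.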
75. the Bright Cluster Manager 7 0 manuals are available on updated clusters by default at cm shared docs cm The latest updates are always online at http support brightcomputing com manuals e The Installation Manual describes installation procedures for a basic cluster e The Administrator Manual describes the general management of the cluster e The User Manual describes the user environment and how to submit jobs for the end user e The Cloudbursting Manual describes how to deploy the cloud capa bilities of the cluster e The Developer Manual has useful information for developers who would like to program with Bright Cluster Manager e The OpenStack Deployment Manual describes how to deploy Open Stack with Bright Cluster Manager e The Hadoop Deployment Manual describes how to deploy Hadoop with Bright Cluster Manager e The UCS Deployment Manual describes how to deploy the Cisco UCS server with Bright Cluster Manager If the manuals are downloaded and kept in one local directory then in most pdf viewers clicking on a cross reference in one manual that refers to a section in another manual opens and displays that section in the sec ond manual Navigating back and forth between documents is usually possible with keystrokes or mouse clicks For example lt A1t gt lt Backarrow gt in Acrobat Reader or clicking on the bottom leftmost navigation button of xpdf both navigate back to the previous document 1v Table of
76. the node or category Ceph Monitoring Properties Similarly the parameters of the cephmonitor role can be configured and managed from within the node object that runs Ceph Monitoring Example bright70 device use bright70 bright 70 gt device bright70 roles use cephmonitor ceph gt device bright70 gt roles cephmonitor show Parameter Value Extra config parameters Monitor data var lib ceph mon Scluster Shostname Name cephmonitor Bright Computing Inc 3 4 RADOS GW Installation Initialization And Properties Provisioning associations lt 0 internally used gt Revision Type CephMonitorRole Monitors in cmgui are similarly accessible for nodes and categories with an Advanced button in their Ceph tab allowing the parameters for the Monitor checkbox to be set Ceph bootstrap For completeness the bootstrap command within ceph mode can be used by the administrator to initialize Ceph Monitors on specified nodes if they are not already initialized Administrators are however not ex pected to use it because they are expected to use the cm ceph setup installer utility when installing Ceph in the first place The installer utility carries out the bootstrap initialization as part of its tasks The boot strap command is therefore only intended for use in the unusual case where the administrator would like to set up Ceph storage without using the cm ceph setup utility 3 4 RADOS GW Installation Initialization A
77. the tagged VLAN interface Cancel Previous Figure 2 20 External Network Interface for Network Node Screen The external network interface for network node screen figure 2 20 al lows the administrator to configure either a dedicated physical interface or a tagged VLAN interface for the network node The interface is to the external network and is used to provide routing functions for OpenStack The network node must have a connectivity with the external network when Floating IPs or and outbound traffic for instances is being config ured If the node already has a connection to the external network config ured in Bright the wizard will skip this step The options are e Create dedicated physical interface If this option is cho sen then a dedicated physical interface is used for connection from the network node to the external network Interface name The name of the physical node should be set For example et ho e Create tagged VLAN interface If this option is chosen then a tagged VLAN interface is used for the connection from the net work node to the external network Base interface The base interface is selected Typically the interface selected is BOOTIF Tagged VLAN ID The VLAN ID for the interface is set Bright Computing Inc 2 1 Installation Of OpenStack From cmgui 2 1 20 VNC Proxy Hostname 1 Introduction 4 Bright managed instances 2 General questions 3 User Instances 6 Sum
to assign Ceph OSD roles to categories or nodes. Please note that nodes that are to run OSDs must be down during the setup process.
Add: Add OSD
Edit: Edit OSDs
Remove: Remove OSDs
Finish: Run setup procedure

Figure 3.6: Ceph OSDs Configuration

If Proceed to OSDs is chosen from the Ceph Monitors configuration screen in figure 3.4, then a screen for Ceph OSDs configuration (figure 3.6) is displayed, where:
• OSDs can be added to nodes or categories. On adding, the OSDs must be edited with the edit menu.
to automatically create a dedicated default tenant for each user granted access to the OpenStack cloud? Doing so will allow administrators to set individual resource limits for each user of the cloud.
Yes / No

Figure 2.10: User Settings Screen

The User Settings screen (figure 2.10) sets whether OpenStack is to be configured so that, with Bright Cluster Manager, each user that is added to OpenStack gets a new tenant, with default quotas (resource limits) set for the tenant. If set, this allows administrators to manage tenant quotas for each user. If not set, then each user will have to be assigned to a tenant before the user can log into OpenStack.

2.1.10 User Instances
User Instances
There are two classes of virtual instances: user instances and Bright-managed instances. User instances are created by cluster end users using the OpenStack API and/or the OpenStack Dashboard. They can run any OpenStack image. Bright-managed instances are created by the cluster administrators using the Bright CMDaemon API, e.g. using cmsh or cmgui. They are provisioned using a Bright software image, and thus have CMDaemon running, and are managed just like regular Bright compute nodes. Regular OpenStack end users are typically not able to create Bright-managed instances, unless they are given access to the Bright API. If you decide to
twork. Each instance can be accessed via a dedicated floating IP address. Floating IPs are assigned to the instances from a preconfigured IP pool of available IP addresses. The IP pool must be specified, and cannot include the IP address of the external network's default gateway. Enabling floating IPs also automatically enables outbound connectivity from instances, even if they don't have a floating IP assigned: OpenStack will reserve a single IP from the floating IP pool for those outbound connections. Therefore, if the OpenStack deployment is to have n floating IPs available to instances, the floating IP allocation pool should span n+1 IP addresses.
Do you want to enable Floating IPs? Yes / No
IP range start: 10.2.0.0
IP range end: 10.2.0.0

Figure 2.18: Inbound External Traffic Screen

All OpenStack-hosted virtual machines are typically attached to one or more virtual networks. However, unless they are also connected to the internal network of the cluster, there is no simple way to connect to them from outside their virtual network. To solve this, Floating IPs have been introduced by OpenStack. Floating IPs are a range of IP addresses that the administrator specifies on the external network, and they are made available to the users (tenants) for assignment to their user instances. The number of Floating IPs available to users is limited by the Floating IP quotas set for the tenants. Administrators can also
u can assign IP addresses to your virtual nodes:
- using DHCP-assigned IP addresses
- using static IP addresses. These will be in sequence, starting from an address that you must specify.
Do you want to use DHCP or assign static IP addresses to your virtual nodes? Statically

Figure 2.47: DHCP Or Static IP Address Selection

The instances can be configured to obtain their IP addresses either via DHCP, or via static address assignment (figure 2.47).

2.2.20 Floating IPs
Do you want to enable Floating IPs? Enable floating IPs: no

Figure 2.48: Floating IPs

The Floating IPs screen (figure 2.48) lets the administrator enable Floating IPs on the external network, so that instances can be accessed using these.

2.2.21 External Network: Starting Floating IP
The network externalnet has:
Base IP address: 10.2.0.0
Broadcast IP address: 10.2.255.255
Usable addresses: 65534
You have to specify the IP address range for Floating IPs. What is the first IP address of the Floating IP address range? 10.2.0.64

Figure 2.49: External Network Starting Floating IP

A screen similar to figure 2.49 allows the administrator to specify the starting floating IP address on the external network.
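As a worked illustration of the pool sizing rule mentioned earlier (n floating IPs need a pool spanning n+1 addresses, since one address is reserved for outbound-only traffic): with the example values above, a range starting at 10.2.0.64 and ending at 10.2.0.126 spans 63 addresses, which yields 62 assignable floating IPs plus the 1 reserved address. The specific range here is illustrative, not a recommendation.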
ult tenant setting screen (figure 2.52) allows a new tenant to be set automatically, with default quotas, for each user that is granted access to the OpenStack cloud. This allows resource quotas to be enforced.

2.2.25 Nova Compute Hosts
The Nova compute hosts will be the guest hosts of the virtual machines, and are the core of your OpenStack deployment (nova-compute). These nodes can also optionally run compute API endpoints (nova-api).
Which compute nodes to use for Nova compute hosts? default: node001

Figure 2.53: Nova Compute Hosts

The Nova compute hosts screen (figure 2.53) prompts the administrator to set the nodes to use as the hosts for the virtual machines.

2.2.26 Neutron Network Node
The Neutron network node is responsible for DHCP, floating IPs, and routing within the virtual network infrastructure (neutron-dhcp-agent, neutron-l3-agent, neutron-metadata-agent). The network node can be shared also with other OpenStack nodes; e.g., on small deployments it can be the same as one of the nova-compute hosts. The neutron-server process is always run on the headnode(s).
Which compute node to use for the network node? default: node001

Figure 2.54: Neutron Network Node

The Neutron network node screen (figure 2.54) prompts the administrator to set the node to use for the Neutron network node.

2.2.27 Pre-deployment Summary
OpenStack is rea
up. The cm-openstack-setup utility is a less-preferred alternative to the installation of OpenStack from cmgui.

The cm-openstack-setup utility is a part of the standard cluster-tools package. Details on its use are given in its manual page (man 8 cm-openstack-setup). When run, the regular nodes that are to run OpenStack instances are rebooted by default at the end of the dialogs, in order to deploy them.

A prerequisite for running cm-openstack-setup is that the head node should be connected to the distribution repositories.

A sample cm-openstack-setup wizard session is described next, starting from section 2.2.1. The session runs on a cluster consisting of one head node and one regular node. The wizard can be interrupted gracefully with <ctrl-c>.

2.2.1 Start Screen
This utility can be used to create a new OpenStack cloud deployment:
- deploy: deploy OpenStack
- remove: remove OpenStack deployment
- exit: exit the utility

Figure 2.25: Start Screen

The start screen (figure 2.25) lets the administrator:
• deploy Bright Cluster Manager OpenStack
• remove Bright Cluster Manager's OpenStack, if it is already on the cluster
• exit the installation

Removal removes OpenStack-related database entries, roles, networks, virtual nodes, and interfaces. Images and categories related to OpenStack are, however, not removed.

2.2.2 Informative Text Prior To Deployment
This wizard wi
Using An XML Configuration File

The -c option to cm-ceph-setup allows an existing XML configuration file to be used:

Example

[root@bright70 ~]# cm-ceph-setup -c /root/myconfig.xml

A Sample XML Configuration File

A sample schema for the configuration can be seen at /cm/local/apps/cluster-tools/other/samplecephconf.xml.

Installation Logs

Installation logs for Ceph are kept at /var/log/cm-ceph-setup.log.

3.3.2 Ceph Management With cmgui And cmsh

Only one instance of Ceph is supported at a time. Its name is ceph.

Ceph Overview And General Properties

From within cmsh, ceph mode can be accessed:

Example

[root@bright70 ~]# cmsh
[bright70]% ceph
[bright70->ceph]%

From within ceph mode, the overview command lists an overview of Ceph OSDs, MONs, and placement groups for the ceph instance. Parts of the displayed output are elided in the example that follows, for viewing convenience:

Example

[bright70->ceph]% overview ceph
Parameter                        Value
-------------------------------- ----------
Status                           HEALTH_OK
Number of OSDs                   2
Number of OSDs up                2
Number of OSDs in                2
Number of mons                   1
Number of placement groups       192
Placement groups data size       0B
Placement groups used size       10.0GB
Placement groups available size  9.91GB
Placement groups total size      19.98GB
...
Name                             Used       Objects
-------------------------------- ---------- --------
bright70_rgw                     ...        ...
bright70_data                    ...        ...
bright70_metadata                ...        ...
...
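The health value reported by overview can be cross-checked with the native Ceph client, assuming the ceph command is available on the head node after the setup:

Example

[root@bright70 ~]# ceph -s

The HEALTH_OK status shown by cmsh should correspond to the health line in the ceph -s output.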
A node configured with the OpenStack compute host role ends up with OpenStack installed on it after the operating system running on the node is updated, and the newly-configured interfaces are set up. The update can be carried out using imageupdate (section 5.6 of the Administrator Manual), or by rebooting the regular node.

A reboot is, however, required if the interfaces have been changed, which is normally the case, unless the script is being run after a first run has already set up the changes. A reboot of the regular node is therefore normally a recommended action, because it ensures that the updates and the interface changes have been carried out on the node. The reboot action is carried out by default, as shown in the preceding output, with the option set by the cm-openstack-setup script (page 30).

Next, the administrator can further configure the cluster to suit requirements.

3 Ceph Installation

3.1 Ceph Introduction

Ceph, at the time of writing, is the recommended storage software for OpenStack for serious use. The Ceph RADOS Gateway is a drop-in replacement for Swift, with a compatible API. Ceph is the recommended backend driver for Cinder, Glance, and Nova.

The current chapter discusses:
• the concepts and required hardware for Ceph (section 3.1)
• Ceph installation and management (section 3.2)
• RADOS GW installation and management (section 3.4)

3.1.1 Ceph Object And Block Storage

Ceph is a distributed storage system.
  They can be stored either on an NFS share (/cm/shared by default),
  or in Ceph.
  Do you want to use Ceph to store OpenStack images?

Figure 2.31: Image Storage With Ceph

Ceph can be set for storing virtual machine images, instead of the reference storage used by OpenStack Glance, using the Ceph image storage screen (figure 2.31).

Block Storage With Ceph

  OpenStack Cinder needs a place to store the block volumes. Volumes
  can be stored either on an NFS share (/cm/shared by default), or in
  Ceph.
  Do you want to use Ceph to store OpenStack volumes?

Figure 2.32: Block Storage With Ceph

Ceph can be set for handling block volume storage reads and writes, instead of the reference backend of OpenStack Cinder, by using the Ceph for OpenStack volumes screen (figure 2.32).

Root And Ephemeral Device Storage With Ceph

  OpenStack Nova compute needs a place to store the root and ephemeral
  devices of VMs. These can be stored either in Ceph, or on a local
  mountpoint of every compute host.
  Do you want to use Ceph to store root and ephemeral disks of VMs?

Figure 2.33: Root And Ephemeral Device Storage With Ceph

Data storage with Ceph can be enabled by the administrator by using the Ceph for OpenStack root and ephemeral device storage screen (figure 2.33).

Ceph Object Gateway (Ceph RADOS Gateway)

The RADOS gateway, also known as the Ceph Object Gateway, is an HTTP gateway for the Ceph object store.
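Once any of these Ceph backends are enabled, a rough check on how the corresponding pools fill up can be made with the standard Ceph client. This is a sketch only: it assumes the ceph command is available on the head node, and the pool names listed depend on the names chosen during the setup.

Example

[root@bright70 ~]# ceph df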
