Mellanox OpenStack Solution Reference Architecture
Contents

1. Solution Overview
2. Accelerating Storage
3. Network Virtualization on Mellanox Adapters
   3.1 Performance Measurements
   3.2 Seamless Integration
4. Setup and Installation
   4.1 Hardware Requirements
   4.2 Software Requirements
   4.3 Prerequisites
   4.4 OpenStack Software Installation
   4.5 Troubleshooting
5. Setting Up the Network
   5.1 Configuration Examples
      5.1.1 Creating a Network
      5.1.2 Creating a Para-Virtualized vNIC Instance
      5.1.3 Creating an SR-IOV Instance
      5.1.4 Creating a Volume
      5.1.5 Attaching a Volume
   5.2 Verification Examples
      5.2.1 Instances Overview
      5.2.2 Connectivity Check
      5.2.3 Volume Check
The main eSwitch capabilities and characteristics are:
• Virtual switching: creating multiple logical, virtualized networks. The eSwitch offload engines handle all networking operations up to the VM, thereby dramatically reducing software overheads and costs.
• Performance: the switching is handled in hardware, as opposed to solutions that use a software-based switch, which enhances performance by reducing CPU overhead.
• Security: the eSwitch enables network isolation using VLANs and anti-MAC-spoofing.
• Monitoring: port counters are supported.

3.1 Performance Measurements

Many data center applications benefit from low-latency network communication, while others require deterministic latency. Using regular TCP connectivity between VMs can create high latency and unpredictable delay behavior. Figure 5 shows the dramatic difference (a 20X improvement) delivered by SR-IOV connectivity running RDMA compared to a para-virtualized vNIC running a TCP stream. The direct connection between SR-IOV and the ConnectX-3 hardware eliminates the software processing that adds an unpredictable delay to packet data movement.
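The comparison behind Figure 5 can be reproduced with standard tools. A minimal sketch, assuming the perftest and netperf packages are installed in both guests; the peer address 192.168.199.11 and the device name mlx4_0 are illustrative assumptions, not values from this document:

# RDMA send latency between two VMs over their SR-IOV VFs (perftest package)
[vm-a]$ ib_send_lat -d mlx4_0                   # server side: waits for the client
[vm-b]$ ib_send_lat -d mlx4_0 192.168.199.11    # client side: prints latency per message size

# TCP request/response latency over the para-virtualized vNIC (netperf package)
[vm-a]$ netserver
[vm-b]$ netperf -H 192.168.199.11 -t TCP_RR     # reports round-trip transactions per second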
List of Figures

Figure 1: Mellanox OpenStack Architecture
Figure 2: OpenStack-Based IaaS Cloud POD Deployment Example
Figure 3: RDMA Acceleration
Figure 4: eSwitch Architecture
Figure 5: Latency Comparison
Figure 6: Network Virtualization
Figure 7: Mellanox MCX314A-BCBT ConnectX-3 40GbE Adapter
Figure 8: Mellanox SX1036 36x40GbE Switch
Figure 9: Mellanox 40GbE QSFP Copper Cable
Figure 10: OpenStack Dashboard Instances
Figure 11: OpenStack Dashboard Launch Instance
Figure 12: OpenStack Dashboard Launch Instance, Select Network
Figure 13: OpenStack Dashboard Volumes
Figure 14: OpenStack Dashboard Create Volume
Figure 15: OpenStack Dashboard Volumes
Figure 16: OpenStack Dashboard Manage Volume Attachments
Figure 17: VM Overview
Figure 18: Remote Console Connectivity
Figure 19: OpenStack Dashboard Volumes
Figure 20: OpenStack Dashboard Console
Figure 16: OpenStack Dashboard Manage Volume Attachments (the Attach To Instance dialog, attaching the volume to instance vm1 with device name /dev/vdc)

5.2 Verification Examples

5.2.1 Instances Overview

Use the OpenStack Dashboard to view all configured instances.

Figure 17: VM Overview (the Instances table listing each instance's project, host, name, IP address, size, status, task, power state, and actions)

5.2.2 Connectivity Check

There are many options for checking connectivity between instances, one of which is to simply open a remote console and ping the required host. To launch a remote console for a specific instance, select the Console tab and launch the console.

Figure 18: Remote Console Connectivity (the console of vm1, running Red Hat Enterprise Linux Server 6.3; after logging in as root, pings to 192.168.203.4 and 192.168.203.5 both complete with 2 packets transmitted, 2 received, 0% packet loss)
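The same check can also be run from the controller CLI rather than the dashboard. A minimal sketch, assuming the OpenStack credentials are already sourced and using the instance names from this example:

$ nova list                         # lists the instances with their status and fixed IP addresses
$ nova console-log vm1 | tail -20   # boot log of vm1, useful to confirm the guest came up
# then, from vm1's console as in Figure 18:
$ ping -c 2 192.168.203.4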
Mellanox Technologies
350 Oakmead Parkway, Suite 100
Sunnyvale, CA 94085, U.S.A.
www.mellanox.com
Tel: (408) 970-3400

Mellanox Technologies, Ltd.
Beit Mellanox, PO Box 586
Yokneam 20692, Israel
www.mellanox.com
Tel: +972 (0)74 723 7200
Fax: +972 (0)4 959 3245

© Copyright 2014. Mellanox Technologies. All Rights Reserved.
Mellanox, the Mellanox logo, BridgeX, ConnectX, CORE-Direct, InfiniBridge, InfiniHost, InfiniScale, MLNX-OS, PhyX, SwitchX, UFM, Virtual Protocol Interconnect and Voltaire are registered trademarks of Mellanox Technologies, Ltd. Connect-IB, FabricIT, Mellanox Open Ethernet, Mellanox Virtual Modular Switch, MetroX, MetroDX, ScalableHPC and Unbreakable-Link are trademarks of Mellanox Technologies, Ltd. All other trademarks are property of their respective owners.

Document Number: 15-3440
4.3 Prerequisites

1. Hardware is set up. To reduce the number of ports in the network, two different subnets can be mapped to the same physical interface on two different VLANs.
2. Mellanox OFED 2.0-3 (SR-IOV enabled) is installed on each of the network adapters. For Mellanox OFED installation, refer to the Installation chapter of the Mellanox OFED User Manual: http://www.mellanox.com/page/products_dyn?product_family=26. Visit the Mellanox Community for verification options and adaptation: http://community.mellanox.com/docs/DOC-1317.
3. The OpenStack packages are installed on all network elements.
4. The EPEL repository is enabled: http://fedoraproject.org/wiki/EPEL.

A command-line sanity check for these prerequisites is sketched below, after the configuration steps.

4.4 OpenStack Software Installation

For Mellanox OpenStack installation, visit the Mellanox OpenStack wiki pages at https://wiki.openstack.org/wiki/Mellanox-OpenStack.

4.5 Troubleshooting

Troubleshooting actions for OpenStack installation with Mellanox plugins can be found at http://community.mellanox.com/docs/DOC-1127.

5 Setting Up the Network

5.1 Configuration Examples

Once installation is completed, the network must be set up. Setting up a network consists of the following steps:
1. Creating a network.
2. Creating a VM instance. Two types of instances can be created: (i) a para-virtualized vNIC instance, and (ii) an SR-IOV direct-path instance.
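Returning to the prerequisites in section 4.3, a minimal command-line sanity check; the interface name eth2 is an illustrative assumption:

$ ofed_info -s                         # prints the installed Mellanox OFED version (expect 2.0-3 or later)
$ lspci | grep -i mellanox             # shows the ConnectX-3 Physical Function (and Virtual Functions once SR-IOV is active)
$ ethtool -i eth2                      # confirms the interface is driven by the mlx4_en driver
$ yum repolist enabled | grep -i epel  # confirms the EPEL repository is enabled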
$ neutron port-create net-example --binding:profile type=dict vnic_type=hostdev
Created a new port (output abridged):
  binding:capabilities  {"port_filter": false}
  binding:profile       {"physical_network": "default"}
  binding:vif_type      hostdev
  fixed_ips             {"subnet_id": "3c9ff1ae-218d-4020-b065-a2991d823bb72", ...}
  id                    a43d35f3-3870-4ae1-9a9d-d2d341b693d6
  mac_address           fa:16:3e:67:ad:ef
  network_id            16b790d6-c45a-4739-a190-7599f591b696
  status                DOWN
  tenant_id             679545ff6c1e4401adcafa0857aefe2e

2. Use the nova boot command to launch an instance with the created port attached:

$ nova boot --flavor m1.small --image rh6.4p --nic port-id=a43d35f3-3870-4ae1-9a9d-d2d341b693d6 vm3

The command output lists the properties of the new instance (name vm3, flavor m1.small, image rh6.4p); its status is BUILD while the instance is being created.
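After these two commands, the port binding and the instance state can be confirmed from the CLI; a minimal sketch using the IDs from this example:

$ neutron port-show a43d35f3-3870-4ae1-9a9d-d2d341b693d6   # binding:vif_type should read hostdev; status turns ACTIVE once the VM is up
$ nova show vm3                                            # status moves from BUILD to ACTIVE; the port's fixed IP is listed for the attached network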
The result is a consistently low latency that allows application software to rely on deterministic packet transfer times.

Figure 5: Latency Comparison (latency versus message size, from 16 bytes to 16384 bytes, for VM-to-VM traffic on the same machine and across two machines, over a para-virtualized vNIC running TCP and over RDMA with SR-IOV, plus a physical-to-physical RDMA baseline; the para-virtual latency is non-predictable)

3.2 Seamless Integration

The eSwitch configuration is transparent to the OpenStack controller administrator. The eSwitch daemon installed on the server is responsible for hiding the low-level configuration. The administrator uses the standard OpenStack dashboard and APIs (REST interface) for fabric management, and the Neutron agent configures the eSwitch in the adapter card.

Figure 6: Network Virtualization (the Neutron plug-in instructs the servers to set a vNIC/VF per VM on the embedded switch of the 10/40GbE adapter, under the OpenStack controller)

4 Setup and Installation

The OpenStack environment should be installed according to the OpenStack documentation package.
Preface

About this Document

This reference design presents the value of using Mellanox interconnect products and describes how to integrate the OpenStack solution (Havana release or later) with end-to-end Mellanox interconnect products.

Audience

This reference design is intended for server and network administrators. The reader must have experience with the basic OpenStack framework and installation.

References

For additional information, see the following documents:

Table 1: Related Documentation
• Mellanox OFED User Manual: www.mellanox.com > Products > Adapter IB/VPI SW > Linux SW/Drivers, http://www.mellanox.com/content/pages.php?pg=products_dyn&product_family=26&menu_section=34
• Mellanox software source packages: https://github.com/mellanox-openstack
• Mellanox OpenStack wiki page: https://wiki.openstack.org/wiki/Mellanox-OpenStack
• Mellanox Ethernet switch systems user manual: http://www.mellanox.com/related-docs/user_manuals/SX10XX_User_Manual.pdf
2 Accelerating Storage

Data centers rely on communication between compute and storage nodes, as compute servers read and write data from the storage servers constantly. In order to maximize the server's application performance, communication between the compute and storage nodes must have the lowest possible latency, the highest possible bandwidth, and the lowest CPU utilization.

Figure 2: OpenStack-Based IaaS Cloud POD Deployment Example (compute servers running VMs on a KVM hypervisor and storage servers with local disks, all connected to the switching fabric through Mellanox adapters)

Storage applications that rely on the iSCSI-over-TCP communications protocol stack continuously interrupt the processor in order to perform basic data movement tasks (packet sequencing and reliability tests, re-ordering, acknowledgements, block-level translations, memory buffer copying, and so on). This causes data center applications that rely heavily on storage communication to suffer from reduced CPU efficiency, as the processor is busy sending data to and from the storage servers rather than performing application processing. The data path for applications and system processes must wait in line with protocols such as TCP, UDP, NFS, and iSCSI for their turn using the CPU.
5.2.3 Volume Check

To verify that the created volume is attached to a specific instance, click the Volumes tab.

Figure 19: OpenStack Dashboard Volumes (vol1, 5 GB, status In-Use, attached to vm1 on /dev/vdc)

In addition, run the fdisk command from the instance console to see the volume details.

Figure 20: OpenStack Dashboard Console (the fdisk -l output in the instance console lists the 2147 MB root disk /dev/vda with its swap and Linux partitions, and the attached 5368 MB volume /dev/vdb)
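The same verification can be done from the CLI; a minimal sketch using the names from this example:

$ cinder list                              # vol1 should be shown as in-use, attached to the vm1 instance
$ nova show vm1 | grep volumes_attached    # lists the IDs of the volumes attached to the instance
# and inside the instance console, as in Figure 20:
$ fdisk -l                                 # the attached volume appears as an additional virtio disk (/dev/vdb here)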
3 Network Virtualization on Mellanox Adapters

Single Root IO Virtualization (SR-IOV) allows a single physical PCIe device to present itself as multiple devices on the PCIe bus. Mellanox ConnectX-3 adapters are capable of exposing up to 127 virtual instances, called Virtual Functions (VFs). These virtual functions can then be provisioned separately. Each VF can be viewed as an additional device associated with the Physical Function; it shares the same resources with the Physical Function, and its number of ports equals those of the Physical Function. SR-IOV is commonly used in conjunction with an SR-IOV enabled hypervisor to provide virtual machines with direct hardware access to network resources, thereby improving performance (a sketch of how the VFs are typically enabled appears at the end of this section).

Mellanox ConnectX-3 adapters, equipped with an onboard embedded switch (eSwitch), are capable of performing layer-2 switching for the different VMs running on the server. Using the eSwitch yields even higher performance levels and, in addition, improves security and isolation.

Figure 4: eSwitch Architecture (virtual NICs/vPorts with multi-level QoS, VLAN and priority tagging, and security filters (ACLs) toward the VMs; an embedded switch performing hardware-based Ethernet switching inside the NIC/HCA; physical ports (pPorts) with QoS and DCB)
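How the Virtual Functions are exposed depends on the Mellanox OFED release. A minimal sketch of the commonly documented mlx4_core approach; the VF count of 16 and the file name are illustrative assumptions, so confirm the procedure against the Mellanox OFED User Manual for your version (SR-IOV must also be enabled in the adapter firmware):

# /etc/modprobe.d/mlx4_core.conf: ask the mlx4 driver to create 16 VFs, both ports in Ethernet mode
options mlx4_core num_vfs=16 port_type_array=2,2

$ /etc/init.d/openibd restart           # reload the Mellanox driver stack so the options take effect
$ lspci | grep -i "virtual function"    # the VFs appear as ConnectX-3 Virtual Function PCIe devices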
Mellanox Technologies
Mellanox OpenStack Solution Reference Architecture
Rev 1.3
January 2014
www.mellanox.com

NOTE: THIS HARDWARE, SOFTWARE OR TEST SUITE PRODUCT ("PRODUCT(S)") AND ITS RELATED DOCUMENTATION ARE PROVIDED BY MELLANOX TECHNOLOGIES "AS-IS" WITH ALL FAULTS OF ANY KIND AND SOLELY FOR THE PURPOSE OF AIDING THE CUSTOMER IN TESTING APPLICATIONS THAT USE THE PRODUCTS IN DESIGNATED SOLUTIONS. THE CUSTOMER'S MANUFACTURING TEST ENVIRONMENT HAS NOT MET THE STANDARDS SET BY MELLANOX TECHNOLOGIES TO FULLY QUALIFY THE PRODUCT(S) AND/OR THE SYSTEM USING IT. THEREFORE, MELLANOX TECHNOLOGIES CANNOT AND DOES NOT GUARANTEE OR WARRANT THAT THE PRODUCTS WILL OPERATE WITH THE HIGHEST QUALITY. ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT ARE DISCLAIMED. IN NO EVENT SHALL MELLANOX BE LIABLE TO CUSTOMER OR ANY THIRD PARTIES FOR ANY DIRECT, INDIRECT, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES OF ANY KIND (INCLUDING, BUT NOT LIMITED TO, PAYMENT FOR PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE), ARISING IN ANY WAY FROM THE USE OF THE PRODUCT(S) AND RELATED DOCUMENTATION, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
For the OpenStack Havana release, the following installation changes should be applied:
• A Neutron server should be installed with the Mellanox Neutron plugin.
• The Mellanox Neutron agent, the eSwitch daemon, and the Nova VIF driver should be installed on the compute nodes.

For the OpenStack Grizzly release, the following installation changes should be applied:
• A Neutron server should be installed with the Mellanox Neutron plugin.
• A Cinder patch should be applied to the storage servers for iSER support.
• The Mellanox Neutron agent, the eSwitch daemon, and the Nova VIF driver should be installed on the compute nodes.

A sketch of the compute-node agent configuration appears at the end of this section.

4.1 Hardware Requirements
• Mellanox ConnectX-3 adapter cards
• 10GbE or 40GbE Ethernet switches
• Cables required for the ConnectX-3 card, typically using SFP+ connectors for 10GbE or QSFP connectors for 40GbE
• Server nodes complying with OpenStack requirements
• Compute nodes with SR-IOV capability (BIOS and OS support)

In terms of adapters, cables, and switches, many variations are possible. Visit www.mellanox.com for more information.

Figure 7: Mellanox MCX314A-BCBT ConnectX-3 40GbE Adapter
Figure 8: Mellanox SX1036 36x40GbE Switch
Figure 9: Mellanox 40GbE QSFP Copper Cable

4.2 Software Requirements
• Supported OS: RHEL 6.4 or higher
• Mellanox OFED 2.0-3 (SR-IOV support) or higher
• KVM hypervisor complying with OpenStack requirements
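To make the compute-node items above concrete, the Mellanox agent is typically pointed at the physical network and ConnectX interface it should manage. The fragment below reflects the Havana-era plugin configuration as I recall it; the file path, section name, option names, and eth2 are assumptions to verify against the Mellanox OpenStack wiki:

# /etc/neutron/plugins/mlnx/mlnx_conf.ini (assumed path and option names)
[ESWITCH]
# map the Neutron physical network "default" to the ConnectX-3 interface used for tenant traffic
physical_interface_mappings = default:eth2
# default vNIC type handed to VMs; SR-IOV direct-path ports use hostdev (see section 5.1.3)
vnic_type = hostdev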
These also require multiple switches, cabling, and management infrastructure, which increases both up-front and maintenance costs. Other, more advanced offerings provide a unified adapter and first-level ToR switch, but still run multiple and independent core fabrics. Such offerings tend to suffer from low throughput because they do not provide the aggregate capacity required at the edge or in the core, and they deliver poor application performance due to network congestion and lack of proper traffic isolation.

Several open-source cloud operating system initiatives have been introduced to the market, but none gained sufficient momentum to succeed. Recently, OpenStack has managed to establish itself as the leading open-source cloud operating system, with wide support from major system vendors, OS vendors, and service providers. OpenStack allows central management and provisioning of compute, networking, and storage resources, with integration and adaptation layers that allow vendors and/or users to provide their own plug-ins and enhancements.

Mellanox Technologies offers seamless integration between its products and OpenStack layers, and provides unique functionality that includes application and storage acceleration, network provisioning automation, and hardware-based security and isolation. Furthermore, using Mellanox interconnect products allows cloud providers to save significant capital and operational expenses through network and I/O consolidation and by increasing the number of virtual machines (VMs) per server.
Mellanox integration with OpenStack provides the following benefits:
• Hardware-based acceleration for messaging, network traffic, and storage.
• Easy management via standard APIs, with native integration into the OpenStack Neutron (network) and Cinder (storage) provisioning APIs.
• Tenant and application security and isolation, with end-to-end hardware-based traffic isolation and security filtering.

Mellanox designed its end-to-end OpenStack cloud solution to offer seamless integration between its products and OpenStack services. By using Mellanox 10/40GbE and FDR 56Gb/s adapters and switches with the OpenStack Havana release, customers gain a significant improvement in block storage access performance with Cinder. In addition, customers can deploy an embedded virtual switch to run virtual machine traffic with bare-metal performance, hardened security, and QoS, all with simple integration. Mellanox has partnered with the leading OpenStack distributions to allow customers to confidently deploy an OpenStack cloud with proven interoperability and integrated support.

Figure 1: Mellanox OpenStack Architecture (controller nodes run the OpenStack management services with the Mellanox Neutron plugin; compute servers run the Neutron agent on Mellanox adapters; storage servers run OpenStack Cinder with iSER over the 40GbE/FDR RDMA service and storage networks; network nodes run the L3 agent and connect the management, storage, and service networks to the public network through a firewall)
• Mellanox Ethernet adapter cards: http://www.mellanox.com/page/ethernet_cards_overview
• Solutions space on the Mellanox Community: http://community.mellanox.com/community/support/solutions
• OpenStack RPM package: http://community.mellanox.com/docs/DOC-1187
• Mellanox eSwitchd installation for OpenFlow and OpenStack: http://community.mellanox.com/docs/DOC-1126
• Troubleshooting: http://community.mellanox.com/docs/DOC-1127
• Mellanox OFED driver installation and configuration for SR-IOV: http://community.mellanox.com/docs/DOC-1317

Revision History

Table 2: Document Revision History
• Rev 1.3, January 2014: Removed the OpenFlow sections; added related topics for the OpenStack Havana release.
• June 2013: Added the OpenFlow feature.

1 Solution Overview

Deploying and maintaining a private or public cloud is a complex task, with various vendors developing tools to address the different aspects of cloud infrastructure management, automation, and security. These tools tend to be expensive and create integration challenges for customers when they combine parts from different vendors. Traditional offerings suggest deploying multiple network and storage adapters to run management, storage, services, and tenant networks.
This not only slows down the network but also uses system resources that could otherwise have been used for executing applications faster.

The Mellanox OpenStack solution extends the Cinder project by adding iSCSI running over RDMA (iSER). Leveraging RDMA, Mellanox OpenStack delivers up to 6X better data throughput (for example, increasing from 1 GB/s to 5 GB/s) while simultaneously reducing CPU utilization by up to 80% (see Figure 3). Mellanox ConnectX-3 adapters bypass the operating system and CPU by using RDMA, allowing much more efficient data movement. iSER capabilities are used to accelerate hypervisor traffic, including storage access, VM migration, and data and VM replication. The use of RDMA shifts data movement processing to the Mellanox ConnectX-3 hardware, which provides zero-copy message transfers for SCSI packets to the application, producing significantly faster performance, lower network latency, lower access time, and lower CPU overhead. iSER can provide 6X faster performance than traditional TCP/IP-based iSCSI. The iSER protocol also unifies the software development efforts of the Ethernet and InfiniBand communities and reduces the number of storage protocols a user must learn and maintain. RDMA bypass allows the application data path to effectively skip to the front of the line.
Figure 11: OpenStack Dashboard Launch Instance (the Launch Instance dialog: instance name vm1, flavor m1.tiny with 1 VCPU, 512 MB RAM and no extra disk, instance count 1, shown alongside the project quotas)

3. Select the desired network for the vNIC (net3 in the example).

Figure 12: OpenStack Dashboard Launch Instance, Select Network (networks are moved from Available Networks to Selected Networks with the button or by drag-and-drop; the NIC order can also be changed by drag-and-drop)

5.1.3 Creating an SR-IOV Instance

1. Use the neutron port-create command for the selected network (net3 in the example) to create a port with vnic_type hostdev.
3. Creating a disk volume.
4. Binding the disk volume to the created instance.

5.1.1 Creating a Network

Use the neutron net-create and neutron subnet-create commands to create a new network and a subnet (net-example in the example):

$ neutron net-create net-example
Created a new network:
  admin_state_up             True
  id                         16b790d6-c45a-4739-a190-7599f591b696
  name                       net-example
  provider:network_type      vlan
  provider:physical_network  default
  provider:segmentation_id   4
  shared                     False
  status                     ACTIVE
  subnets
  tenant_id                  679545ff6c1e4401adcafa0857aefe2e

$ neutron subnet-create net-example 192.168.199.0/24
Created a new subnet:
  allocation_pools  {"start": "192.168.199.2", "end": "192.168.199.254"}
  cidr              192.168.199.0/24
  enable_dhcp       True
  ip_version        4
  id                3c9ff1ae-218d-4020-b065-a2991d823bb72
  network_id        16b790d6-c45a-4739-a190-7599f591b696
  tenant_id         679545ff6c1e4401adcafa0857aefe2e

The created network can be verified from the CLI, as sketched below.

5.1.2 Creating a Para-Virtualized vNIC Instance

1. Using the OpenStack Dashboard, launch a VM instance using the Launch Instance button.
2. Insert all the required parameters and click Launch. This operation creates a macvtap interface on top of a Virtual Function (VF).

Figure 10: OpenStack Dashboard Instances (the Instances page with the Launch Instance button)
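Before launching instances on it, the network and subnet created in section 5.1.1 can be checked with two read-only commands; a minimal sketch using the names from this example:

$ neutron net-show net-example                   # should report status ACTIVE, vlan segmentation_id 4 on physical network "default"
$ neutron subnet-list | grep 192.168.199.0/24    # shows the subnet and its allocation pool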
Mellanox provides a variety of network interface cards (NICs) supporting one or two ports of 10GbE, 40GbE, or 56Gb/s InfiniBand. These adapters simultaneously run management, network, storage, messaging, and clustering traffic. Furthermore, these adapters create virtual domains within the network that deliver hardware-based isolation and prevent cross-domain traffic interference. In addition, Mellanox Virtual Protocol Interconnect (VPI) switches deliver the industry's most cost-effective and highest-capacity switches, supporting up to 36 ports of 56Gb/s. When deploying large-scale, high-density infrastructures, leveraging Mellanox converged network VPI solutions translates into fewer switching elements, far fewer optical cables, and a simpler network design.

The Mellanox plugins are inbox for the Havana release, which includes out-of-the-box support for the InfiniBand and Ethernet Mellanox components for Nova, Cinder, and Neutron.
5.1.4 Creating a Volume

Create a volume using the Volumes tab on the OpenStack dashboard. Click the Create Volume button.

Figure 13: OpenStack Dashboard Volumes (the Volumes tab before any volume exists; no items to display)

Figure 14: OpenStack Dashboard Create Volume (the Create Volume dialog with the volume name vol1, an optional description, the size in GB, and the project's volume quotas)

Figure 15: OpenStack Dashboard Volumes (the Volumes tab while vol1 is being created and once it is ready: vol1, 5 GB, status Available)

5.1.5 Attaching a Volume

Attach a volume to the desired instance. The device name should be /dev/vd<letter>, e.g. /dev/vdc.
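The dashboard steps in sections 5.1.4 and 5.1.5 have direct CLI equivalents; a minimal sketch using the names from this example (the volume ID is a placeholder):

$ cinder create --display-name vol1 5           # create a 5 GB volume
$ cinder list                                   # wait until vol1 reports status "available"
$ nova volume-attach vm1 <volume-id> /dev/vdc   # attach it to the vm1 instance as /dev/vdc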
Data is provided directly to the application immediately upon receipt, without being subject to various delays caused by CPU-load-dependent software queues. This has three effects:
• There is no waiting, which means that the latency of transactions is extremely low.
• Because there is no contention for resources, the latency is deterministic, which is essential for offering end users a guaranteed SLA.
• Bypassing the OS using RDMA results in significant savings in CPU cycles. With a more efficient system in place, those saved CPU cycles can be used to accelerate application performance.

Figure 3 shows that by performing hardware offload of the data transfers using the iSER protocol, the full capacity of the link is utilized, up to the maximum of the PCIe limit.

To summarize, network performance is a significant element in the overall delivery of data center services, and it benefits from high-speed interconnects. Unfortunately, the high CPU overhead associated with traditional storage adapters prevents systems from taking full advantage of these high-speed interconnects. The iSER protocol uses RDMA to shift data movement tasks to the network adapter and thus frees up CPU cycles that would otherwise be consumed executing traditional TCP and iSCSI protocols. Hence, using RDMA-based fast interconnects significantly increases data center application performance levels.

Figure 3: RDMA Acceleration (write bandwidth in MB/s versus I/O size from 1 KB to 256 KB for iSER, physical and with 4, 8, and 16 VMs, compared to iSCSI with 8 and 16 VMs; iSER throughput approaches the PCIe limit, roughly 6X higher than iSCSI)
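For reference, on the storage servers the iSER transport is selected in the Cinder volume service configuration. A minimal sketch of a Havana-era cinder.conf fragment; the driver class and option names are to the best of my recollection and the address is illustrative, so verify both against the Cinder documentation and the Mellanox OpenStack wiki:

# /etc/cinder/cinder.conf (fragment)
[DEFAULT]
# export LVM-backed volumes over iSER (iSCSI extensions for RDMA) instead of iSCSI over TCP
volume_driver = cinder.volume.drivers.lvm.LVMISERDriver
# address on the RDMA-capable storage network that the initiators (compute nodes) connect to
iscsi_ip_address = 192.168.20.10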