Contents
3.2 Seamless OpenStack Integration
4 Setup and Installation
4.1 Hardware Requirements
4.2 Software Requirements
4.3 Prerequisites
4.4 OpenStack Software Installation
4.5 Troubleshooting
5 Setting Up the Network
5.1 Configuration Examples
5.1.1 Creating a Network
5.1.2 Creating a Para-Virtualized vNIC
5.1.3 Creating an SR-IOV Instance
5.1.4 Creating a Volume
5.1.5 Attach a Volume
5.2 Verification Examples
5.2.1 Instances Overview
5.2.2 Connectivity Check
5.2.3 Volume Check
There are many options in terms of adapters, cables, and switches. See www.mellanox.com for additional options.

Figure 7: Mellanox MCX314A-BCBT ConnectX-3 40GbE / FDR 56Gb/s InfiniBand Adapter

Figure 8: Mellanox SX1036 36x 40GbE / SX6036 36x FDR 56Gb/s InfiniBand Switch

Figure 9: Mellanox 40GbE / FDR 56Gb/s InfiniBand QSFP Copper Cable

4.2 Software Requirements
• Red Hat Enterprise Linux OpenStack Platform 4.0, which includes Red Hat OpenStack 3 or later
• Red Hat Enterprise Linux 6.4 or later (the specific release is determined by the Red Hat OpenStack version)
• Mellanox OFED 2.0-3 (SR-IOV support)

4.3 Prerequisites
1. The basic setup is physically connected. To reduce the number of ports in the network, two different networks can be mapped to the same physical interface on two different VLANs.
2. Mellanox OFED 2.0-3 (SR-IOV enabled) is installed on each of the network adapters. For Mellanox OFED installation, refer to the OFED User Manual, Installation chapter: http://www.mellanox.com/page/products_dyn?product_family=26
Visit the Mellanox Community for verification options and adaptation: http://community.mellanox.com/docs/DOC-1317

4.4 OpenStack Software Installation
For Mellanox OpenStack installation, follow the Mellanox OpenStack wiki pages:
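Before moving on to the OpenStack software installation, the OFED and SR-IOV prerequisites from section 4.3 can be spot-checked from a shell on each node. The commands below are a minimal sketch; the version string, device names, and example output are illustrative assumptions rather than values taken from this document.

$ ofed_info -s                      # reports the installed Mellanox OFED release
MLNX_OFED_LINUX-2.0-3.0.0:
$ lspci | grep -i mellanox          # the adapter and, once SR-IOV is enabled, its Virtual Functions
03:00.0 Network controller: Mellanox Technologies MT27500 Family [ConnectX-3]
03:00.1 Network controller: Mellanox Technologies MT27500 Family [ConnectX-3 Virtual Function]
$ ibv_devinfo | grep -e hca_id -e port_state    # verifies that the mlx4 device is present and its ports are up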
Figure 13: OpenStack Dashboard Volumes

Figure 14: OpenStack Dashboard Create Volume

Figure 15: OpenStack Dashboard Volumes

5.1.5 Attach a Volume
Attach a volume to the desired instance.

Figure 16: OpenStack Dashboard Manage Volume Attachments

5.2 Verification Examples

5.2.1 Instances Overview
Use the Dashboard to view all configured instances.

Figure 17: VM Overview

5.2.2 Connectivity Check
There are many options for checking connectivity between different instances; one of the simplest is to open a remote console and ping the required host (an example follows below). To launch a remote console for a specific instance, select the Console tab and launch the console.

Figure 18: Remote Console Connectivity

5.2.3 Volume Check
To verify that the created volume is attached to a specific instance, click the Volumes tab (Figure 19).
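For the connectivity check in section 5.2.2, a ping from the remote console of one instance to another is usually sufficient. The target address below is an assumption taken from the 192.168.199.0/24 subnet created in section 5.1.1, not an address listed in this document.

$ ping -c 3 192.168.199.4
PING 192.168.199.4 (192.168.199.4) 56(84) bytes of data.
64 bytes from 192.168.199.4: icmp_seq=1 ttl=64 time=0.31 ms
64 bytes from 192.168.199.4: icmp_seq=2 ttl=64 time=0.25 ms
64 bytes from 192.168.199.4: icmp_seq=3 ttl=64 time=0.24 ms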
Mellanox Technologies
350 Oakmead Parkway, Suite 100
Sunnyvale, CA 94085, U.S.A.
www.mellanox.com
Tel: (408) 970-3400

Mellanox Technologies, Ltd.
Beit Mellanox, PO Box 586
Yokneam 20692, Israel
www.mellanox.com
Tel: +972 (0)74 723 7200 / +972 (0)4 959 3245

Copyright © 2014 Mellanox Technologies. All Rights Reserved.
Mellanox, the Mellanox logo, BridgeX, ConnectX, Connect-IB, CORE-Direct, InfiniBridge, InfiniHost, InfiniScale, MetroX, MLNX-OS, PhyX, ScalableHPC, SwitchX, UFM, Virtual Protocol Interconnect, and Voltaire are registered trademarks of Mellanox Technologies, Ltd. ExtendX, FabricIT, Mellanox Open Ethernet, Mellanox Virtual Modular Switch, MetroDX, and Unbreakable-Link are trademarks of Mellanox Technologies, Ltd. All other trademarks are property of their respective owners.

Document Number: 15-954

Contents
1 Solution Overview
2 Storage Acceleration Using Mellanox Interconnect
3 Network Virtualization on ConnectX-3 Adapters
3.1 Performance Measurements
This approach can provide 6x faster performance than traditional TCP/IP based iSCSI. It also consolidates the efforts of both the Ethernet and InfiniBand communities and reduces the number of storage protocols a user must learn and maintain.

The RDMA bypass allows the data path to effectively skip to the front of the line. Data is provided directly to the application immediately upon receipt, without being subject to various delays due to CPU-load-dependent software queues. This has three effects:
• There is no waiting, which means that the latency of transactions is incredibly low.
• Because there is no contention for resources, the latency is consistent, which is essential for offering end users a guaranteed SLA.
• By bypassing the OS using RDMA, significant savings in CPU cycles result. With a more efficient system in place, those saved CPU cycles can be used to accelerate application performance.

In the following diagram it is clear that by performing hardware offload of the data transfers using the iSER protocol, the full capacity of the link is utilized, up to the maximum of the PCIe limit.

To summarize, network performance is a significant element in the overall delivery of data center services. Producing the maximum performance for data center services requires fast interconnects. Unfortunately, the high CPU overhead associated with traditional storage adapters prevents taking full advantage of high-speed interconnects.
Figure 19: OpenStack Dashboard Volumes

Additionally, run the fdisk command from the instance console to see the volume details (an illustrative listing follows).

Figure 20: OpenStack Dashboard Console
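A typical fdisk listing from inside the instance is sketched below; the device name /dev/vdb and the 1 GB size are illustrative assumptions, not values taken from this document.

$ fdisk -l
Disk /dev/vda: 21.5 GB, 21474836480 bytes        (instance root disk)
...
Disk /dev/vdb: 1073 MB, 1073741824 bytes         (the attached Cinder volume)
Disk /dev/vdb doesn't contain a valid partition table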
2. Use the command nova boot to launch an instance with the created port attached:

$ nova boot --flavor m1.small --image rh6.4p --nic port-id=a43d35f3-3870-4ae1-9a9d-d2d341b693d6 vm3

The command prints the new instance's property table (abbreviated here):

+-------------------------------+--------------------------------------+
| Property                      | Value                                |
+-------------------------------+--------------------------------------+
| OS-EXT-SRV-ATTR:instance_name | instance-00000042                    |
| flavor                        | m1.small                             |
| image                         | rh6.4p (161da6a9-6508-4e23-9f6f-881383461ab4) |
| name                          | vm3                                  |
| adminPass                     | tiTE37tQrNBn                         |
| status                        | BUILD                                |
| tenant_id                     | 679545ff6c1e4401adcafa0857aefe2e     |
| created                       | 2013-12-19T07:32:41Z                 |
+-------------------------------+--------------------------------------+

5.1.4 Creating a Volume
Create a volume using the Volumes tab on the OpenStack dashboard. Click the Create Volume button.
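The walkthrough above uses the dashboard for volume operations; the same steps can also be done from the CLI. The sketch below is an assumption based on the Havana-era cinder and nova clients, and the volume name, size, and device name are illustrative.

$ cinder create --display-name vol1 1            # create a 1 GB volume
$ cinder list                                    # note the volume's ID once its status is "available"
$ nova volume-attach vm3 <volume-id> /dev/vdb    # attach it to the vm3 instance created above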
Mellanox Reference Architecture for Red Hat Enterprise Linux OpenStack Platform 4.0
Rev 1.1
March 2014
www.mellanox.com

NOTE: THIS HARDWARE, SOFTWARE OR TEST SUITE PRODUCT ("PRODUCT(S)") AND ITS RELATED DOCUMENTATION ARE PROVIDED BY MELLANOX TECHNOLOGIES "AS-IS" WITH ALL FAULTS OF ANY KIND AND SOLELY FOR THE PURPOSE OF AIDING THE CUSTOMER IN TESTING APPLICATIONS THAT USE THE PRODUCTS IN DESIGNATED SOLUTIONS. THE CUSTOMER'S MANUFACTURING TEST ENVIRONMENT HAS NOT MET THE STANDARDS SET BY MELLANOX TECHNOLOGIES TO FULLY QUALIFY THE PRODUCT(S) AND/OR THE SYSTEM USING IT. THEREFORE, MELLANOX TECHNOLOGIES CANNOT AND DOES NOT GUARANTEE OR WARRANT THAT THE PRODUCTS WILL OPERATE WITH THE HIGHEST QUALITY. ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT ARE DISCLAIMED. IN NO EVENT SHALL MELLANOX BE LIABLE TO CUSTOMER OR ANY THIRD PARTIES FOR ANY DIRECT, INDIRECT, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES OF ANY KIND (INCLUDING, BUT NOT LIMITED TO, PAYMENT FOR PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY FROM THE USE OF THE PRODUCT(S) AND RELATED DOCUMENTATION EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
• Provides tenant and application security/isolation, with end-to-end hardware-based traffic isolation and security filtering.

Figure 1: Mellanox OpenStack Architecture (controller nodes running the OpenStack management services and the Mellanox Neutron/Quantum plugin, storage servers running OpenStack Cinder with an Open-iSCSI/iSER service, and compute servers with ConnectX-3 10/40GbE or FDR adapters on converged management and storage networks)

2 Storage Acceleration Using Mellanox Interconnect
Data centers rely on communication between compute and storage nodes, as compute servers read and write data from the storage servers constantly. In order to maximize the server's application performance, communication between the compute and storage nodes must have the lowest possible latency, highest possible bandwidth, and lowest CPU utilization.

Figure 2: OpenStack Based IaaS Cloud POD Deployment Example (storage servers running OpenStack Cinder and compute servers with adapters and local disks, connected through a common switching fabric)
Traditional offerings suggest deploying multiple network and storage adapters to run management, storage, services, and tenant networks. These also require multiple switches, cabling, and management infrastructure, which increases both up-front and maintenance costs. Other, more advanced offerings provide a unified adapter and first-level Top-of-Rack (ToR) switch, but still run multiple and independent core fabrics. Such offerings tend to suffer from low throughput because they do not provide the aggregate capacity required at the edge or in the core, and because they deliver poor application performance due to network congestion and lack of proper traffic isolation.

Several open source cloud operating system initiatives have been introduced to the market, but none has gained sufficient momentum to succeed. Recently, OpenStack has managed to establish itself as the leading open source cloud operating system, with wide support from major system vendors, OS vendors, and service providers. OpenStack allows central management and provisioning of compute, networking, and storage resources, with integration and adaptation layers allowing vendors and/or users to provide their own plug-ins and enhancements.

Red Hat built an industry-leading certification program for their OpenStack platform. By achieving this technology certification, vendors can assure customers that their solutions have been validated with Red Hat OpenStack technology.
Figure 6: Network Virtualization (servers with para-virtualized and SR-IOV VMs connected through the embedded switch in the adapter, managed by the controller)

4 Setup and Installation
The following setup is suggested for small-scale applications. The OpenStack environment should be installed according to the Red Hat Enterprise Linux OpenStack Platform installation guide; refer to https://access.redhat.com/site/documentation/Red_Hat_OpenStack. In addition, the following installation changes should be applied:
• A controller node running the Networking (neutron) service should be installed with the Mellanox neutron plugin.
• The Cinder patch should be applied to the storage servers for iSER support.
• The Mellanox Neutron agent, eSwitch daemon, and Nova patches should be installed on the compute nodes.

4.1 Hardware Requirements
• Mellanox ConnectX-3 or ConnectX-3 Pro 10/40GbE and FDR 56Gb/s InfiniBand adapter cards
• 10GbE or 40GbE Ethernet, or FDR 56Gb/s InfiniBand switches
• Cables required for the ConnectX-3 card, typically using SFP connectors for 10GbE or QSFP connectors for 40GbE and FDR 56Gb/s InfiniBand
• Server nodes should comply with Red Hat Enterprise Linux OpenStack Platform 4.0 requirements
• Compute nodes should have SR-IOV capability enabled in the BIOS (SR-IOV is supported by RHEL 6.4 and above; a quick readiness check is sketched below)
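Since the compute nodes must have SR-IOV enabled in the BIOS, it can be worth confirming that the kernel actually sees the IOMMU before installing the OpenStack components. This is a generic check rather than a step from this document, and the kernel parameter shown assumes an Intel platform.

$ cat /proc/cmdline                       # intel_iommu=on should be present for SR-IOV passthrough
... ro root=/dev/mapper/vg0-root intel_iommu=on ...
$ dmesg | grep -i -e DMAR -e IOMMU        # confirms the IOMMU was detected and enabled at boot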
List of Figures
Figure 1: Mellanox OpenStack Architecture
Figure 2: OpenStack Based IaaS Cloud POD Deployment Example
Figure 3: RDMA Acceleration
Figure 4: eSwitch Architecture
Figure 5: Latency Comparison
Figure 6: Network Virtualization
Figure 7: Mellanox MCX314A-BCBT ConnectX-3 40GbE / FDR 56Gb/s InfiniBand Adapter
Figure 8: Mellanox SX1036 36x 40GbE / SX6036 36x FDR 56Gb/s InfiniBand Switch
Figure 9: Mellanox 40GbE / FDR 56Gb/s InfiniBand QSFP Copper Cable
Figure 10: OpenStack Dashboard Instances
Figure 11: OpenStack Dashboard Launch Instance
Figure 12: OpenStack Dashboard Launch Interface Select Network
Figure 13: OpenStack Dashboard Volumes
Figure 14: OpenStack Dashboard Create Volume
Many more CPU cycles are needed to process TCP and iSCSI operations compared to those required by the RDMA/iSER protocol performed by the network adapter. Hence, using RDMA-based fast interconnects significantly increases data center performance levels.

Figure 3: RDMA Acceleration (iSER vs. iSCSI write bandwidth as a function of message size, for a physical host and for 4, 8, and 16 VMs; iSER reaches the PCIe limit, roughly 6X the iSCSI throughput)

3 Network Virtualization on ConnectX-3 Adapters
Single Root IO Virtualization (SR-IOV) allows a physical PCIe device to present itself as multiple devices on the PCIe bus. This technology enables a single adapter to provide multiple virtual instances of the device, with separate resources. Mellanox ConnectX-3 adapters are capable of exposing 127 virtual instances, called Virtual Functions (VFs). These virtual functions can then be provisioned separately. Each VF can be viewed as an additional device associated with the Physical Function. It shares the same resources with the Physical Function, and its number of ports equals that of the Physical Function.
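How many VFs a ConnectX-3 exposes is controlled by the mlx4_core driver. The document does not show this step; the following is a sketch of a typical configuration, where the file name and the value 16 are illustrative (num_vfs and probe_vf are standard mlx4_core module parameters).

# /etc/modprobe.d/mlx4_core.conf -- ask the ConnectX-3 driver for 16 Virtual Functions
options mlx4_core num_vfs=16 probe_vf=0

# reload the driver (or reboot), then confirm the VFs appear on the PCIe bus
$ modprobe -r mlx4_en mlx4_ib mlx4_core && modprobe mlx4_core
$ lspci | grep -c "Virtual Function"
16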
Storage applications that use iSCSI over TCP are processed by the CPU. This causes data center applications that rely heavily on storage communication to suffer from reduced CPU utilization, as the CPU is busy sending data to the storage servers. The data path for protocols such as TCP, UDP, NFS, and iSCSI must wait in line with the other applications and system processes for their turn using the CPU. This not only slows down the network, but also uses system resources that could otherwise have been used for executing applications faster.

The Mellanox OpenStack solution extends the Cinder project by adding iSCSI running over iSER. Leveraging RDMA, Mellanox OpenStack delivers 5X better data throughput (for example, increasing from 1GB/s to 5GB/s) and requires up to 80% less CPU utilization (see Figure 3).

Mellanox ConnectX-3 adapters bypass the operating system and CPU by using RDMA, allowing much more efficient data movement paths. iSER capabilities are used to accelerate hypervisor traffic, including storage access, VM migration, and data and VM replication. The use of RDMA moves data to the Mellanox ConnectX-3 hardware, which provides zero-copy message transfers for SCSI packets to the application, producing significantly faster performance, lower network latency, lower access time, and lower CPU overhead.
$ neutron subnet-create net_example 192.168.199.0/24
Created a new subnet:
+------------------+------------------------------------------------------+
| Field            | Value                                                |
+------------------+------------------------------------------------------+
| allocation_pools | {"start": "192.168.199.2", "end": "192.168.199.254"} |
| cidr             | 192.168.199.0/24                                     |
| dns_nameservers  |                                                      |
| enable_dhcp      | True                                                 |
| gateway_ip       | 192.168.199.1                                        |
| host_routes      |                                                      |
| id               | 3c9ff1ae-218d-4020-b065-a2991d23bb72                 |
| ip_version       | 4                                                    |
| name             |                                                      |
| network_id       | 16079086-4f5a-4739-a190-7598c3310696                 |
| tenant_id        | 679545ff6c1e4401adcafa0857aefe2e                     |
+------------------+------------------------------------------------------+

5.1.2 Creating a Para-Virtualized vNIC Instance
1. Using the OpenStack Dashboard, launch an instance (VM) using the Launch Instance button.
2. Insert all the required parameters and click Launch. This operation will create a macvtap interface on top of a Virtual Function (VF); this can be verified on the compute node as sketched below.

Figure 10: OpenStack Dashboard Instances

Figure 11: OpenStack Dashboard Launch Instance
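Once the instance is active, the para-virtualized vNIC shows up on the compute node as a macvtap device bound to one of the adapter's Virtual Functions. This check is not part of the document's flow, and the interface names and output below are illustrative assumptions.

$ ip -d link show | grep -A2 macvtap      # run on the compute node hosting the instance
42: macvtap0@eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 ...
    link/ether fa:16:3e:12:34:56 brd ff:ff:ff:ff:ff:ff promiscuity 0
    macvtap mode passthru ...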
• Neutron: https://wiki.openstack.org/wiki/Mellanox-Neutron
• Cinder: https://wiki.openstack.org/wiki/Mellanox-Cinder

For the eSwitch daemon installation, follow the OpenStack wiki pages (part of Mellanox-Neutron).

4.5 Troubleshooting
Troubleshooting actions for an OpenStack installation with Mellanox plugins can be found at http://community.mellanox.com/docs/DOC-1127

5 Setting Up the Network

5.1 Configuration Examples
Once the installation is completed, it is time to set up the network. Setting up a network consists of the following steps:
1. Creating a network
2. Creating a VM instance. Two types of instances can be created:
   a. Para-virtualized vNIC
   b. SR-IOV direct path connection
3. Creating a disk volume
4. Binding the disk volume to the instance that was just created

5.1.1 Creating a Network
Use the commands neutron net-create and neutron subnet-create to create a new network and a subnet (net_example in the example):

$ neutron net-create net_example
Created a new network:
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | 16079086-4f5a-4739-a190-7598c3310696 |
| name                      | net_example                          |
| provider:network_type     | vlan                                 |
| provider:physical_network | default                              |
| provider:segmentation_id  | 4                                    |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tenant_id                 | 679545ff6c1e4401adcafa0857aefe2e     |
+---------------------------+--------------------------------------+
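The new network and subnet can be confirmed with the standard list/show commands before launching any instances (output omitted here):

$ neutron net-list
$ neutron net-show net_example
$ neutron subnet-list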
3. Select the desired network for the vNIC (net_example in this example).

Figure 12: OpenStack Dashboard Launch Interface Select Network

5.1.3 Creating an SR-IOV Instance
1. Use the command neutron port-create for the selected network to create a port with vnic_type=hostdev:

$ neutron port-create net_example --binding:profile type=dict vnic_type=hostdev
Created a new port:
+----------------------+---------------------------------------------------------------------------------------+
| Field                | Value                                                                                 |
+----------------------+---------------------------------------------------------------------------------------+
| admin_state_up       | True                                                                                  |
| binding:capabilities | {"port_filter": false}                                                                |
| binding:host_id      |                                                                                       |
| binding:profile      | {"physical_network": "default"}                                                       |
| binding:vif_type     | hostdev                                                                               |
| device_id            |                                                                                       |
| device_owner         |                                                                                       |
| fixed_ips            | {"subnet_id": "3c9ff1ae-218d-4020-b065-a2991d23bb72", "ip_address": "192.168.199.2"}  |
| id                   | a43d35f3-3870-4ae1-9a9d-d2d341b693d6                                                  |
| mac_address          | fa:16:3e:67:ad:ef                                                                     |
| name                 |                                                                                       |
| network_id           | 16079086-4f5a-4739-a190-7598c3310696                                                  |
| status               | DOWN                                                                                  |
| tenant_id            | 679545ff6c1e4401adcafa0857aefe2e                                                      |
+----------------------+---------------------------------------------------------------------------------------+
• QoS: The eSwitch supports traffic class management, priority mapping, rate limiting, and scheduling. In addition, the DCBX control plane can set Priority Flow Control (PFC) and FC parameters on the physical port.
• Monitoring: Port counters are supported.

3.1 Performance Measurements
Many data center applications require lower latency network performance. Some applications require latency stability as well. Using regular TCP connectivity between VMs can create high latency and unpredictable delay behavior. Figure 5 shows the dramatic difference (20X) when using a para-virtualized vNIC running a TCP stream compared to SR-IOV connectivity. Due to the direct connection of SR-IOV and the ConnectX-3 hardware capabilities, there is a significant reduction in the software interference that adds unpredictable delay to packet processing.

Figure 5: Latency Comparison (latency in microseconds vs. message size in bytes)

3.2 Seamless OpenStack Integration
The eSwitch configuration is transparent to the Red Hat Enterprise Linux OpenStack Platform 4.0 administrator. The eSwitch daemon installed on the server is responsible for hiding the low-level configuration. The administrator uses the OpenStack dashboard for fabric management.
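A quick way to reproduce the kind of latency comparison shown in Figure 5 is a VM-to-VM qperf run. This is not part of the document's test description; the peer address and the figures shown are illustrative assumptions.

# inside the first instance (server side)
$ qperf
# inside the second instance (client side), pointing at the first instance's address
$ qperf 192.168.199.4 tcp_lat tcp_bw
tcp_lat:
    latency  =  28.4 us
tcp_bw:
    bw  =  1.1 GB/sec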
SR-IOV is commonly used in conjunction with an SR-IOV enabled hypervisor to provide virtual machines with direct hardware access to network resources, thereby improving performance.

Mellanox ConnectX-3 adapters, equipped with an onboard embedded switch (eSwitch), are capable of performing layer-2 switching for the different VMs running on the server. Using the eSwitch brings higher performance levels in addition to security and QoS.

Figure 4: eSwitch Architecture (virtual NICs and vPorts with multi-level QoS, VLAN and priority tagging, security filters/ACLs, hardware-based Ethernet switching, and physical ports with QoS/DCB)

The eSwitch's main capabilities and characteristics are:
• Virtual switching: creating multiple logical virtualized networks. The eSwitch offload engines handle all networking operations up to the VM, thereby dramatically reducing software overheads and costs.
• Performance: The switching is handled in hardware, as opposed to other applications that use a software-based switch. This enhances performance by reducing CPU overhead.
• Security: The eSwitch enables network isolation using VLANs and anti-MAC-spoofing (the per-VF state can be inspected as shown below).
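The per-VF VLAN and anti-spoofing state that the eSwitch enforces can be inspected on the hypervisor with the standard ip tool. The interface name and values below are illustrative assumptions, not taken from this document.

$ ip link show eth2            # eth2 assumed to be the ConnectX-3 Physical Function
2: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 ...
    vf 0 MAC fa:16:3e:67:ad:ef, vlan 4, spoof checking on
    vf 1 MAC 00:00:00:00:00:00, spoof checking on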
Mellanox is the first InfiniBand and Ethernet adapter vendor to be listed on the Red Hat Certified Solution Marketplace, a directory of technologies and products which have been certified by Red Hat and are designed to optimize all offerings that include Red Hat OpenStack. Mellanox is listed in the Red Hat marketplace as a certified hardware partner for the Networking (Neutron) and Block Storage (Cinder) services. This ensures that the Mellanox ConnectX-3 InfiniBand and Ethernet adapter has been tested and certified, and is now supported with Red Hat OpenStack technology.

Red Hat Enterprise Linux OpenStack Platform 4.0 delivers an integrated foundation to create, deploy, and scale a secure and reliable public or private OpenStack cloud. It combines the world's leading enterprise Linux and the fastest-growing cloud infrastructure to give you the agility to scale and quickly meet customer demands without compromising on availability, security, or performance.

Mellanox Technologies offers seamless integration between InfiniBand and Ethernet interconnects and OpenStack services, and provides unique functionality that includes application and storage acceleration, network provisioning, automation, and hardware-based security and isolation. Furthermore, using Mellanox interconnect products allows cloud providers to save significant capital and operational expenses through network and I/O consolidation and by increasing the number of virtual machines (VMs) per server.
Figure 15: OpenStack Dashboard Volumes
Figure 16: OpenStack Dashboard Manage Volume Attachments
Figure 17: VM Overview
Figure 18: Remote Console
Figure 19: OpenStack Dashboard Volumes
Figure 20: OpenStack Dashboard Console

Preface

About this Document
This reference design presents the value of using Mellanox interconnect products and describes how to integrate Red Hat OpenStack technology with the end-to-end interconnect solution.

Audience
This reference design is intended for server and network administrators. The reader must have experience with the basic OpenStack framework and installation.

References
For additional information, see the following documents.

Table 1: Related Documentation
- Red Hat Enterprise Linux OpenStack Platform 4.0: https://access.redhat.com/products/Cloud/OpenStack
- Mellanox OFED User Manual: www.mellanox.com > Products > Adapter IB/VPI SW > Linux SW/Drivers; http://www.mellanox.com/content/pages.php?pg=products_dyn&product_family=26&menu_section=34
Mellanox provides a variety of network interface cards (NICs) supporting one or two ports of 10GbE, 40GbE, or 56Gb/s InfiniBand. These adapters simultaneously run management, network, storage, messaging, and clustering traffic. Furthermore, these adapters create virtual domains within the network that deliver hardware-based isolation and prevent cross-domain traffic interference. In addition, Mellanox Virtual Protocol Interconnect (VPI) switches deliver the industry's most cost-effective and highest-capacity InfiniBand or Ethernet switches, supporting up to 36 ports of 56Gb/s. When deploying large-scale, high-density infrastructures, leveraging Mellanox converged network VPI solutions translates into fewer switching elements, far fewer optical cables, and a simpler network design.

Mellanox integration with OpenStack provides the following benefits:
• Cost-effective and scalable infrastructure that consolidates the network and storage into a highly efficient flat fabric, increases VM density, normalizes the storage infrastructure, and linearly scales to thousands of nodes.
• Delivers the best application performance with hardware-based acceleration for messaging, network traffic, and storage.
• Easy to manage via standard APIs; native integration with OpenStack Networking (Neutron) and Block Storage (Cinder) provisioning APIs.
- Mellanox software source packages: https://github.com/mellanox-openstack
- Mellanox OpenStack wiki page: https://wiki.openstack.org/wiki/Mellanox-OpenStack
- Mellanox Ethernet Switch Systems User Manual: http://www.mellanox.com/related-docs/user_manuals/SX10XX_User_Manual.pdf
- Mellanox Ethernet adapter cards: http://www.mellanox.com/page/ethernet_cards_overview
- Solutions space on the Mellanox community: http://community.mellanox.com/community/support/solutions
- OpenStack RPM package: http://community.mellanox.com/docs/DOC-1187
- Mellanox eSwitchd installation for OpenFlow and OpenStack: http://community.mellanox.com/docs/DOC-1126
- Troubleshooting: http://community.mellanox.com/docs/DOC-1127
- Mellanox OFED driver installation and configuration for SR-IOV: http://community.mellanox.com/docs/DOC-1317

Revision History

Table 2: Document Revision History
- Rev 1.1, March 2014: Aligned document to the Havana release.

1 Solution Overview
Deploying and maintaining a private or public cloud is a complex task, with various vendors developing tools to address the different aspects of the cloud infrastructure: management, automation, and security. These tools tend to be expensive and create integration challenges for customers when they combine parts from different vendors.