
Mellanox OpenStack and SDN/OpenFlow Solution Reference Architecture (Rev 1.2)



1. Figure 21: OpenStack Dashboard, Volumes list (name, description, size, status, type, attached to, actions).

5.1.5 Attach a Volume
Attach the volume to the desired instance. The device name should be /dev/vd<letter>, e.g. /dev/vdc.
Figure 22: OpenStack Dashboard, Manage Volume Attachments (the "Attach to Instance" dialog takes the instance and the device name, e.g. /dev/vdc).

5.2 Verification Examples

5.2.1 Instances Overview
Use the OpenStack Dashboard to view all configured instances.
Figure 23: VM Overview (All Instances view).

5.2.2 Connectivity Check
There are many options for checking connectivity between different instances; one of them is simply to open a remote console and ping the required host. To launch a remote console for a specific instance, select the Console tab and launch the console.
Figure 24: Remote Console Connectivity (Instance Detail for vm1, Console tab).

5.2.3 Volume Check
To verify
2. • Performance: Switching is handled in hardware, as opposed to other applications that use a software-based switch. This enhances performance by reducing CPU overhead.
• Security: The eSwitch enables network isolation using VLANs and anti-MAC-spoofing. In addition, by using OpenFlow ACLs, the eSwitch can be configured to filter undesired network flows.
• QoS: The eSwitch supports traffic-class management, priority mapping, rate limiting, scheduling and shaping, configured via OpenFlow. In addition, the DCBX control plane can set Priority Flow Control (PFC) and FC parameters on the physical port.
• Monitoring: Port counters are supported.

3.1 Performance Measurements
Many data center applications benefit from low-latency network communication, while others require deterministic latency. Using regular TCP connectivity between VMs can create high latency and unpredictable delay behavior. Figure 6 shows the dramatic difference (a 20X improvement) delivered by SR-IOV connectivity running RDMA, compared to a para-virtualized vNIC running a TCP stream. Using the direct connection of SR-IOV and the ConnectX-3 hardware eliminates the software processing that adds an unpredictable delay to packet data movement. The result is a consistently low latency that allows application software to rely on deterministic packet t
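The document does not list the exact test commands behind this comparison; the following is only a sketch of how such a measurement is commonly run, assuming the netperf and perftest packages are installed in both VMs and that the peer address 192.168.203.5 is a placeholder:

    # Para-virtualized vNIC: TCP request/response latency between two VMs
    # (start "netserver" in the peer VM first)
    netperf -H 192.168.203.5 -t TCP_RR

    # SR-IOV with RDMA: RDMA write latency between the two VFs
    # (start "ib_write_lat" with no arguments in the peer VM first)
    ib_write_lat 192.168.203.5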
3. allows a single physical PCIe device to present itself as multiple devices on the PCIe bus. Mellanox ConnectX-3 adapters are capable of exposing up to 127 virtual instances, called Virtual Functions (VFs). These virtual functions can then be provisioned separately. Each VF can be viewed as an additional device associated with the Physical Function; it shares the same resources with the Physical Function, and its number of ports equals that of the Physical Function.
SR-IOV is commonly used in conjunction with an SR-IOV-enabled hypervisor to provide virtual machines with direct hardware access to network resources, thereby improving performance.
Mellanox ConnectX-3 adapters, equipped with an onboard embedded switch (eSwitch), are capable of performing layer-2 switching for the different VMs running on the server. Using the eSwitch gains even higher performance levels and, in addition, improves security, isolation and QoS.
Figure 5: eSwitch Architecture (virtual NICs/vPorts with multi-level QoS, VLAN and priority tagging, and security filters/ACLs; the embedded switch performs hardware-based Ethernet switching inside the NIC/HCA; the physical ports provide pPort QoS and DCB).
eSwitch main capabilities and characteristics:
• Virtual switching: creating multiple logical, virtualized networks. The eSwitch offload engines handle all networking operations up to the VM, thereby dramatically reducing software overheads and costs.
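The procedure for exposing the VFs is not detailed here; as a hedged sketch for a Mellanox OFED 2.x host with a ConnectX-3 adapter (the numbers below are illustrative and deployment-specific), the mlx4_core module parameters are typically set and the driver reloaded:

    # /etc/modprobe.d/mlx4_core.conf (illustrative values)
    # num_vfs          - number of Virtual Functions to expose
    # probe_vf         - number of VFs bound on the hypervisor itself
    # port_type_array  - 2,2 selects Ethernet on both ports
    options mlx4_core num_vfs=16 probe_vf=0 port_type_array=2,2

    # Reload the driver stack and confirm the VFs appear
    /etc/init.d/openibd restart
    lspci | grep -i mellanox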
4. (tail of the Set QoS Egress Queue command)
00:00:00:de:ad:be:ef", "name": "set-egress-queue-5", "cookie": "0", "priority": "32768", "src-mac": "00:11:22:33:44:55", "active": "true", "actions": "enqueue=0:5"}' http://172.30.49.68:8080/wm/staticflowentrypusher/json
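As with the SSH-blocking example, the resulting hardware rule can be inspected on the hypervisor. This is a hedged sketch only; the interface name eth4 is a placeholder taken from the surrounding examples:

    # List the receive classification rules programmed into the adapter;
    # a queue-steering rule typically appears with "Action: Direct to queue 5"
    ethtool -u eth4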
5. 5.3.1 Drop SSH Traffic from a Given Source IP Address
5.3.2 Set QoS Egress Queue

List of Figures
Figure 1: Mellanox OpenStack Architecture
Figure 2: OpenFlow Architecture
Figure 3: OpenStack-Based IaaS Cloud POD Deployment Example
Figure 4: RDMA Acceleration
Figure 5: eSwitch Architecture
Figure 6: Latency Comparison
Figure 7: QoS Setup Example
Figure 8: QoS Test Results
Figure 9: Network Virtualization
Figure 10: Mellanox MCX314A-BCBT ConnectX-3 40GbE Adapter
Figure 11: Mellanox SX1036 36x40GbE Switch
Figure 12: Mellanox 40GbE QSFP Copper Cable
Figure 13: Quantum net-create/subnet-create Commands
Figure 14: OpenStack Dashboard, Instances
Figure 15: OpenStack Dashboard, Launch Instance
Figure 16: OpenStack Dashboard, Launch
6. Interface, Select Network
Figure 17: Quantum port-create Command
Figure 18: Using the nova boot Command
Figure 19: OpenStack Dashboard, Volumes
Figure 20: OpenStack Dashboard, Create Volume
Figure 21: OpenStack Dashboard, Volumes
Figure 22: OpenStack Dashboard, Manage Volume Attachments
Figure 23: VM Overview
Figure 24: Remote Console Connectivity
Figure 25: OpenStack Dashboard, Volumes
Figure 26: OpenStack Dashboard, Console

Preface

About this Document
This reference design presents the value of using Mellanox interconnect products and describes how to integrate the OpenStack/OpenFlow solution with end-to-end Mellanox interconnect products.

Audience
This reference design is intended for server and network administrators. The reader must have experience with the basic OpenStack framework and installation.

References
For additional information, see the following documents.
Table 1: Related Doc
7. data center networks. The SDN architecture separates the control plane from the data plane in data center switches and hosts. With SDN, network control is implemented in software and can be executed from a server, which reduces network complexity and provides a common interface as an alternative to the proprietary and expensive options from traditional vendors.
At the basis of the SDN approach is the decoupling of the system that makes decisions about where traffic is sent (the control plane) from the underlying system that forwards traffic to the selected destination (the data plane). This enables network architects to programmatically decide how traffic flows throughout the network and to centralize this logic in a programmable interface that can be extended and tailored to customer needs.
SDN approach benefits:
• Efficient and flexible networks (tailored optimization)
• Quick time-to-market for new services
• Cost savings on hardware (simpler forwarding devices required)
• Ability to test and implement new routing protocols quickly
Mellanox Technologies has been implementing these concepts for over 10 years in its InfiniBand products, providing existing data centers with a mature infrastructure for flexible, scalable and dynamic networks. Today, Mellanox takes the extensive knowledge gained from building hundreds of high-performance and scalable InfiniBand networks and provides SDN networks on Ethernet as well, utilizing state-of-the-art technologie
8. openstack.org/wiki/Mellanox-Quantum
• Cinder: https://wiki.openstack.org/wiki/Mellanox-Cinder
For the eSwitch daemon installation, follow the OpenStack wiki pages (part of Mellanox-Quantum):
• https://wiki.openstack.org/wiki/Mellanox-Quantum

4.6 OpenFlow Agent Installation
The OpenFlow agent installation procedure is described in the Mellanox Community: http://community.mellanox.com/docs/DOC-1126

5 Setting Up the Network

5.1 Configuration Examples
Once the installation is completed, it is time to set up the network. Setting up a network consists of the following steps:
1. Creating a network.
2. Creating a VM instance. Two types of instances can be created:
   a. Para-virtualized vNIC
   b. SR-IOV direct path connection
3. Creating a disk volume.
4. Binding the disk volume to the instance that was just created.

5.1.1 Creating a Network
Use the quantum net-create and quantum subnet-create commands to create a new network and a subnet (net3 in the example); a CLI sketch follows the figure reference below.
Figure 13: Quantum net-create/subnet-create commands. The console output of "quantum net-create net3" shows the new network's admin_state_up, id, provider:network_type, provider:physical_network, provider:segmentation_id, router:external, shared, subnets and tenant_id fields.
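The exact commands appear only as screenshots in the source; the following is a hedged reconstruction in which the network name and CIDR follow the figures, while the subnet name is illustrative:

    # Create the network
    quantum net-create net3

    # Create a subnet on it (the example uses 192.168.203.0/24)
    quantum subnet-create --name subnet3 net3 192.168.203.0/24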
9. performance, communication between the compute and storage nodes must have the lowest possible latency, the highest possible bandwidth and the lowest CPU utilization.
Figure 3: OpenStack-Based IaaS Cloud POD Deployment Example (compute servers running VMs over a KVM hypervisor with an RDMA-capable adapter and cache, and storage servers running OpenStack Cinder with local disks, connected through the switching fabric).
Storage applications that rely on the iSCSI-over-TCP protocol stack continuously interrupt the processor in order to perform basic data movement tasks (packet sequencing and reliability tests, re-ordering, acknowledgements, block-level translations, memory buffer copying, etc.). This causes data center applications that rely heavily on storage communication to suffer from reduced CPU efficiency, as the processor is busy sending data to and from the storage servers rather than performing application processing. The data path for applications and system processes must wait in line with protocols such as TCP, UDP, NFS and iSCSI for their turn using the CPU. This not only slows down the network, but also consumes system resources that could otherwise have been used for executing applications faster.
The Mellanox OpenStack solution extends the Cinder project by adding iSCSI running over RDMA (iSER). Leveraging RDMA, Mellanox OpenStack delivers 6X better data throughput (for example, increasing from 1GB/s to 5GB/s) while simultaneously reducing CPU utilization by
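The host-side steps are not spelled out in this section; purely as a hedged illustration (the target IQN and portal address are placeholders), an open-iscsi initiator can be switched from TCP to the iSER transport before logging in:

    # Point an existing iSCSI node record at the iSER transport instead of TCP
    iscsiadm -m node -T iqn.2013-09.example:volume-1 -p 192.168.10.10:3260 \
        --op update -n iface.transport_name -v iser

    # Log in over RDMA
    iscsiadm -m node -T iqn.2013-09.example:volume-1 -p 192.168.10.10:3260 --login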
10. that the created volume is attached to a specific instance, click the Volumes tab.
Figure 25: OpenStack Dashboard, Volumes (name, description, size, status, type, attached to, actions).
Additionally, run the fdisk command from the instance console to see the volume details.
Figure 26: OpenStack Dashboard, Console (output of "fdisk -l" on the instance, listing the root disk /dev/vda and the newly attached volume).

5.3 OpenFlow Configuration Examples
The following examples use the Floodlight 0.9 network controller REST API.
Note: The flow must include a match on the destination MAC of the relevant VM's vNIC (the MAC that is assigned to the VF used by the VM).

5.3.1 Drop SSH Traffic from a Given Source IP Address
The following example shows how to apply a block rule for SSH traffic (TCP port 22) with source IP 192.168.100.1, destined to a Virtu
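Before pushing rules, it can be useful to see what the static flow pusher already has installed. This is a hedged sketch only: the controller address reuses the placeholder from the examples in this chapter, and the exact REST path may differ between Floodlight builds, so check the controller's own documentation:

    # List the static flow entries known to the controller (all switches)
    curl http://172.30.40.171:8080/wm/staticflowentrypusher/list/all/json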
11. up to 80% (see Figure 4). Mellanox ConnectX-3 adapters bypass the operating system and CPU by using RDMA, allowing much more efficient data movement. iSER capabilities are used to accelerate hypervisor traffic, including storage access, VM migration, and data and VM replication. The use of RDMA shifts data-movement processing to the Mellanox ConnectX-3 hardware, which provides zero-copy message transfers for SCSI packets to the application, producing significantly faster performance, lower network latency, lower access time and lower CPU overhead. iSER can provide 6X faster performance than traditional TCP/IP-based iSCSI. The iSER protocol unifies the software development efforts of both the Ethernet and InfiniBand communities, and reduces the number of storage protocols a user must learn and maintain.
RDMA bypass allows the application data path to effectively skip to the front of the line. Data is provided directly to the application immediately upon receipt, without being subject to various delays due to CPU-load-dependent software queues. This has three effects:
• There is no waiting, which means that the latency of transactions is extremely low.
• Because there is no contention for resources, the latency is deterministic, which is essential for offering end users a guaranteed SLA.
• Bypassing the OS using RDMA re
12. The quantum subnet-create command (the second part of Figure 13) creates a subnet on net3: allocation pool starting at 192.168.203.2, cidr 192.168.203.0/24, enable_dhcp True, gateway_ip 192.168.203.1, subnet id e0896b22-3668-4fce-bf8c-a843001858d5.

5.1.2 Creating a Para-Virtualized vNIC Instance
1. Using the OpenStack Dashboard, launch an instance (VM) using the Launch Instance button.
2. Insert all the required parameters and click Launch. This operation will create a macvtap interface on top of a Virtual Function (VF). A CLI equivalent is sketched after the figure references below.
Figure 14: OpenStack Dashboard, Instances (Launch Instance button; columns for instance name, IP address, size, keypair, status, task, power state).
Figure 15: OpenStack Dashboard, Launch Instance (details for the m1.tiny flavor: 1 VCPU, 0 GB root disk, 0 GB ephemeral disk, 512 MB RAM; instance name vm1; instance count 1; project quotas shown alongside).
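A hedged CLI equivalent of the dashboard steps above; the image, flavor and network names follow the figures, and the net-id value is a placeholder to be looked up first:

    # Look up the network UUID, then boot a para-virtualized instance on it
    quantum net-list
    nova boot --flavor m1.tiny --image rh6.3 --nic net-id=<net3-uuid> vm1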
13. 3.1 Performance Measurements
3.2 Quality of Service Considerations
3.3 Seamless Integration
4 Setup and Installation
4.1 Basic Setup
4.2 Hardware Requirements
4.3 Software Requirements
4.4 Prerequisites
4.5 OpenStack Software Installation
4.6 OpenFlow Agent Installation
5 Setting Up the Network
5.1 Configuration Examples
5.1.1 Creating a Network
5.1.2 Creating a Para-Virtualized vNIC Instance
5.1.3 Creating an SR-IOV Instance
5.1.4 Creating a Volume
5.1.5 Attaching a Volume
5.2 Verification Examples
5.2.1 Instances Overview
5.2.2 Connectivity Check
5.2.3 Volume Check
5.3 OpenFlow Configuration Examples
14. 3 adapter.
• Overlay networks support: Mellanox customers are now able to benefit from VXLAN and NVGRE scalability without compromising on network performance. ConnectX-3 supports high-performance 10GbE and 40GbE VXLAN and NVGRE. ConnectX-3 Pro eliminates the VXLAN and NVGRE performance overheads: it dramatically reduces CPU overhead (by up to 80%) and enables 10GbE and 40GbE configurations with no throughput penalty. For additional information, see the ConnectX-3 Pro page on the Mellanox website.
• Partnerships: Mellanox SDN is already enhancing partner solutions by eliminating traditional performance and scalability limitations.

1.2.1 OpenFlow
The OpenFlow protocol provides a standard API between the control plane and the forwarding plane. ConnectX-3 incorporates an embedded switch (eSwitch), enabling VM communication to enjoy bare-metal performance. The ConnectX-3 driver includes OpenFlow agent software based on the Indigo2 open-source project, which enables controlling the eSwitch using the standard OpenFlow protocol (the OpenFlow version currently supported is 1.0).
Installing fabric flows on adapter eSwitches has great value and allows networks to scale naturally. Each eSwitch is responsible for only a relatively small number of VMs (only those VMs running on a specific host). Therefore, by distributing these switches across many adapters, the scaling obstacles are eliminated. This is unlike the case of trying to implement scalability on centralized physical
15. Mellanox Technologies
Mellanox OpenStack and SDN/OpenFlow Solution Reference Architecture, Rev 1.2
September 2013
www.mellanox.com

NOTE: THIS HARDWARE, SOFTWARE OR TEST SUITE PRODUCT ("PRODUCT(S)") AND ITS RELATED DOCUMENTATION ARE PROVIDED BY MELLANOX TECHNOLOGIES "AS-IS" WITH ALL FAULTS OF ANY KIND AND SOLELY FOR THE PURPOSE OF AIDING THE CUSTOMER IN TESTING APPLICATIONS THAT USE THE PRODUCTS IN DESIGNATED SOLUTIONS. THE CUSTOMER'S MANUFACTURING TEST ENVIRONMENT HAS NOT MET THE STANDARDS SET BY MELLANOX TECHNOLOGIES TO FULLY QUALIFY THE PRODUCT(S) AND/OR THE SYSTEM USING IT. THEREFORE, MELLANOX TECHNOLOGIES CANNOT AND DOES NOT GUARANTEE OR WARRANT THAT THE PRODUCTS WILL OPERATE WITH THE HIGHEST QUALITY. ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT ARE DISCLAIMED. IN NO EVENT SHALL MELLANOX BE LIABLE TO CUSTOMER OR ANY THIRD PARTIES FOR ANY DIRECT, INDIRECT, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES OF ANY KIND (INCLUDING, BUT NOT LIMITED TO, PAYMENT FOR PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE), ARISING IN ANY WAY FROM THE USE OF THE PRODUCT(S) AND RELATED DOCUMENTATION, EVEN IF ADVISED OF THE POS
16. 3. Select the desired network for the vNIC (net3 in the example).
Figure 16: OpenStack Dashboard, Launch Interface, Select Network (Networking tab: move the network from "Available networks" to "Selected Networks" by push button or drag-and-drop; the NIC order can also be changed by drag-and-drop).

5.1.3 Creating an SR-IOV Instance
1. Use the quantum port-create command for the selected network (net3 in the example) to create a port with vnic_type=hostdev.
Figure 17: Quantum port-create Command:

    quantum port-create net3 --binding:profile type=dict vnic_type=hostdev
    Created a new port:
      admin_state_up
      binding:capabilities   {"port_filter": false}
      binding:profile        {"physical_network": "mlx1"}
      binding:vif_type       hostdev
      device_id
      device_owner
      fixed_ips              {"subnet_id": "e0896b22-3668-4fce-bf8c-a843001858d5", "ip_address": "192.168.203.5"}
      id                     099a1db7-8b0a-4a7e-8045-2fd4cff4c17f
      mac_address            fa:16:3e:b2:f1:6f
      name
      network_id             97127c33-0095-4776-abb5-20420987009a
      status                 ACTIVE
      tenant_id              9511bf012f00467481022e3235b5786f

2. Use the nova boot command to launch an instance with the created port attached.
Figure 18: Using the nova boot Command:
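Separately from the figures, a quick way to confirm on the compute node that a VF was actually consumed by the SR-IOV instance is sketched below; the interface name eth2 is an assumption and depends on the host:

    # The PF lists its VFs and the MAC assigned to each one;
    # the MAC from the port-create output should appear on one of them
    ip link show eth2

    # VFs also show up as separate PCI functions
    lspci | grep -i mellanox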
17. SIBILITY OF SUCH DAMAGE.

Mellanox Technologies                     Mellanox Technologies Ltd.
350 Oakmead Parkway, Suite 100            Beit Mellanox, PO Box 586
Sunnyvale, CA 94085, U.S.A.               Yokneam 20692, Israel
www.mellanox.com                          www.mellanox.com
Tel: (408) 970-3400                       Tel: +972 (0)74 723 7200
                                          Fax: +972 (0)4 959 3245

Copyright 2013. Mellanox Technologies. All Rights Reserved.
Mellanox, the Mellanox logo, BridgeX, ConnectX, CORE-Direct, InfiniBridge, InfiniHost, InfiniScale, MLNX-OS, PhyX, SwitchX, UFM, Virtual Protocol Interconnect and Voltaire are registered trademarks of Mellanox Technologies, Ltd. Connect-IB, FabricIT, Mellanox Open Ethernet, Mellanox Virtual Modular Switch, MetroX, MetroDX, ScalableHPC and Unbreakable-Link are trademarks of Mellanox Technologies, Ltd. All other trademarks are property of their respective owners.

Contents
1 Solution Overview
1.1 OpenStack
1.2 Software Defined Networking (SDN)
1.2.1 OpenFlow
1.2.2 Supported Features
2 Accelerating Storage
3 Network Virtualization on Mellanox Adapters
18. al Machine with vNIC MAC 52:54:00:12:83:8e. The OF_DPID is the identifier of the OpenFlow agent responsible for the HCA eSwitch.
Perform the following commands on the controller:

    MAC="52:54:00:12:83:8e"
    DST_PORT=22
    OF_IP=172.30.40.171
    OF_DPID="00:00:aa:bb:cc:dd"
    SRC_IP=192.168.100.1

    curl -d '{"switch": "'$OF_DPID'", "name": "BLOCK-SSH-sw1", "cookie": "0", "priority": "32768", "dst-mac": "'$MAC'", "ether-type": "2048", "src-ip": "'$SRC_IP'", "protocol": "6", "dst-port": "'$DST_PORT'", "active": "true"}' http://$OF_IP:8080/wm/staticflowentrypusher/json
    {"status" : "Entry pushed"}

Examine the rules on the HCA by running ethtool:

    ethtool -u eth4
    4 RX rings available
    Total 1 rules

    Filter: 1
        Rule Type: TCP over IPv4
        Src IP addr: 192.168.100.1 mask: 0.0.0.0
        Dest IP addr: 0.0.0.0 mask: 255.255.255.255
        TOS: 0x0 mask: 0xff
        Src port: 0 mask: 0xffff
        Dest port: 22 mask: 0x0
        Dest MAC addr: 52:54:00:12:83:8E mask: FF:FF:FF:FF:FE:FF
        Action: Drop

Verify the configuration:
• Try connecting to the server from 192.168.100.1 via SSH; the operation should be denied.
• Try to ping the server from 192.168.100.1; the operation should succeed.

5.3.2 Set QoS Egress Queue
The following example steers all traffic from a specific vNIC, identified by source MAC address 00:11:22:33:44:55, to egress queue 5:

    curl -d '{"switch": "00:
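To undo the 5.3.1 block rule later, the entry can be removed through the same static flow pusher interface. This is a hedged sketch using the same shell variables as above:

    # Delete the named flow entry from the controller
    curl -X DELETE -d '{"name": "BLOCK-SSH-sw1"}' http://$OF_IP:8080/wm/staticflowentrypusher/json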
19. erms of adapters, cables and switches. See www.mellanox.com for additional options.
Figure 10: Mellanox MCX314A-BCBT ConnectX-3 40GbE Adapter.
Figure 11: Mellanox SX1036 36x40GbE Switch.
Figure 12: Mellanox 40GbE QSFP Copper Cable.

4.3 Software Requirements
• Supported OS: RHEL 6.3 or higher
• Mellanox OFED 2.0.3 (SR-IOV support) or higher
• KVM hypervisor complying with OpenStack requirements

4.4 Prerequisites
1. The basic setup is physically connected.
   • In order to reduce the number of ports in the network, two different networks can be mapped to the same physical interface on two different VLANs.
2. Mellanox OFED 2.0 (SR-IOV enabled) is installed on each of the network adapters.
   • For Mellanox OFED installation, refer to the Mellanox OFED User Manual, Installation chapter: http://www.mellanox.com/page/products_dyn?product_family=26
   • See the Mellanox Community for verification options and adaptation: http://community.mellanox.com/docs/DOC-1317
3. The OpenStack packages are installed on all network elements.
4. The EPEL repository is enabled: http://fedoraproject.org/wiki/EPEL

4.5 OpenStack Software Installation
For Mellanox OpenStack installation, follow the Mellanox OpenStack wiki pages (listed in the continuation of this section).
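Before installing the OpenStack packages, it is worth confirming the adapter and OFED prerequisites above on each node. A hedged sketch; output will differ per system:

    # Adapter present?
    lspci | grep -i mellanox

    # Mellanox OFED installed, and which version?
    ofed_info -s

    # RDMA devices and firmware visible to the verbs stack?
    ibv_devinfo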
20. he OpenStack or OpenFlow controller administrator. The eSwitch daemon installed on the server is responsible for hiding the low-level configuration. The administrator uses the standard OpenStack dashboard/APIs or the OpenFlow controller REST interface for fabric management. Both the OpenFlow agent and the Quantum agent configure the eSwitch in the adapter card.
Figure 9: Network Virtualization (OpenStack manager with the Quantum plug-in and an OpenFlow controller managing the embedded switch, connected through a 10/40GbE switch).

4 Setup and Installation

4.1 Basic Setup
The following setup is suggested for small-scale applications. The OpenStack environment should be installed according to the OpenStack installation guide. In addition, the following installation changes should be applied:
• A Quantum server should be installed with the Mellanox Quantum plugin.
• A Cinder patch should be applied to the storage servers for iSER support.
• The Mellanox Quantum agent, eSwitch daemon and Nova patches should be installed on the compute nodes.

4.2 Hardware Requirements
• Mellanox ConnectX-3 adapter cards
• 10GbE or 40GbE Ethernet switches
• Cables, as required for the ConnectX-3 card (typically SFP+ connectors for 10GbE or QSFP connectors for 40GbE)
• Server nodes should comply with OpenStack requirements
• Compute nodes should have SR-IOV capability (BIOS and OS support)
There are many options in t
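Whether a compute node can actually do SR-IOV is worth checking up front. A hedged sketch for an Intel-based KVM host (AMD hosts use amd_iommu instead):

    # The kernel must be booted with the IOMMU enabled, e.g. intel_iommu=on
    cat /proc/cmdline

    # Confirm the IOMMU came up
    dmesg | grep -i -e dmar -e iommu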
21. intaining a private or public cloud is a complex task, with various vendors developing tools to address the different aspects of cloud infrastructure, management, automation and security. These tools tend to be expensive and create integration challenges for customers when they combine parts from different vendors.
Traditional offerings suggest deploying multiple network and storage adapters to run management, storage, services and tenant networks. These also require multiple switches, cabling and management infrastructure, which increases both up-front and maintenance costs. Other, more advanced offerings provide a unified adapter and first-level ToR switch, but still run multiple, independent core fabrics. Such offerings tend to suffer from low throughput because they do not provide the aggregate capacity required at the edge or in the core, and because they deliver poor application performance due to network congestion and a lack of proper traffic isolation.
Several open-source cloud operating system initiatives have been introduced to the market, but none had gained sufficient momentum to succeed. Recently, OpenStack has managed to establish itself as the leading open-source cloud operating system, with wide support from major system vendors, OS vendors and service providers. OpenStack allows central management and provisioning of compute, networking and storage resources, with integration and adaptation layers allowing vendors and/or use
22. • Cost-effective and scalable infrastructure that consolidates the network and storage to a highly efficient flat fabric, increases VM density, commoditizes the storage infrastructure and scales linearly to thousands of nodes.
• Delivers the best application performance, with hardware-based acceleration for messaging, network traffic and storage.
• Easy to manage via standard APIs; native integration with the OpenStack Quantum (network) and Cinder (storage) provisioning APIs.
• Provides tenant and application security and isolation, with end-to-end hardware-based traffic isolation and security filtering.
Figure 1: Mellanox OpenStack Architecture (controller nodes running the OpenStack management services; compute servers with the Mellanox Quantum agent and a converged 40GbE/FDR adapter; storage servers running the Cinder iSCSI/iSER target over local disks; network nodes running the DHCP, L3 and L2 agents; all connected over the management, storage, service and public networks).

1.2 Software Defined Networking (SDN)
Software Defined Networking (SDN) is emerging as an alternative to proprietary
23. nova boot --flavor m1.tiny --image rh6.3 vm3 --nic port-id=099a1db7-8b0a-4a7e-8045-2fd4cff4c17f

The command output lists the new instance's properties, among them: task_state scheduling, image rh6.3, vm_state building, instance name instance-00000029, flavor m1.tiny, id f2b0d3ec-64b8-473b-866a-b29264f13219, security group default, user_id 2573e2ffdef241c29c932cc76f01e583, status BUILD, name vm3, tenant_id 9511bf012f00467481022e3235b5786f, created 2013-04-04T11:23:15Z.

5.1.4 Creating a Volume
Create a volume using the Volumes tab on the OpenStack dashboard: click the Create Volume button. A CLI equivalent is sketched below.
Figure 19: OpenStack Dashboard, Volumes (Create Volume button; columns for name, description, size, status, type, attached to, actions).
Figure 20: OpenStack Dashboard, Create Volume dialog (volume name, description, size in GB; volume quotas for total gigabytes and number of volumes).
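A hedged CLI equivalent of the dashboard volume workflow; the volume name and size are illustrative:

    # Create a 10 GB volume and find its UUID
    cinder create --display-name vol1 10
    cinder list

    # Attach it to the instance as /dev/vdc (matches section 5.1.5)
    nova volume-attach vm3 <volume-uuid> /dev/vdc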
24. The test results show the following:
1. When prioritizing a stream (green) and using dual queues, the low-priority stream has only a minor effect on the high-priority stream (11.8 usec compared to 10.8 usec in Figure 8).
2. Bandwidth increases when prioritizing streams (9,350 Mb/s), as well as when increasing the number of queues (9,187 Mb/s), compared to regular non-QoS conditions (8,934 Mb/s).
3. The latency difference is dramatically reduced when using QoS (11.8 usec compared to 10,548 usec).
Figure 8: QoS Test Results (converged-network bandwidth of the messaging and storage streams, and RTT latency, for a shared queue, separate hardware queues without QoS, and separate hardware queues with QoS). Results are based on a 10GbE adapter card.

Conclusion
The test results emphasize that consolidation is possible on the same physical port. Applications that require low latency will not suffer from bandwidth-consuming applications when more than one queue is used and QoS is enabled.

3.3 Seamless Integration
The eSwitch configuration is transparent to t
25. ransfer times.
Figure 6: Latency Comparison (VM-to-VM on the same machine and across two machines, over para-virtualized TCP and over RDMA with SR-IOV, plus physical-to-physical RDMA; the para-virtualized path is non-predictable, while SR-IOV/RDMA shows roughly 20X lower latency than a para-virtualized vNIC).

3.2 Quality of Service Considerations
The impact of using QoS and network isolation is significant. The following example compares the latency and bandwidth levels achieved as a function of the QoS level, and reveals the advantage that can be gained from the switch QoS capability.
Setup characteristics:
Streams: in this test, two types of streams were injected.
• Blue, storage stream: a high-bandwidth TCP stream; latency is not crucial for such an application.
• Green, messaging stream: a low-bandwidth TCP messaging stream; latency, as measured by a request/response (round-robin) test, is crucial for such an application.
QoS levels: the following QoS levels were tested.
• Single queue: both streams use the same ingress queue.
• Dual queues with no QoS: each stream has its own ingress queue, and both queues have the same priority level.
• Dual queues with QoS enabled: each stream has its own ingress queue, and the green stream is prioritized over the blue stream.
Figure 7: QoS Setup Example (a client generating a high-bandwidth TCP stream and a low-latency TCP_RR request/response stream toward the servers).
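The report does not name the traffic generators; a hedged sketch of how two such streams are commonly produced with netperf, assuming "netserver" is running on the target and the server address is a placeholder:

    # Blue: high-bandwidth, storage-like TCP stream
    netperf -H 192.168.203.10 -t TCP_STREAM -l 60

    # Green: latency-sensitive messaging stream (request/response)
    netperf -H 192.168.203.10 -t TCP_RR -l 60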
26. rs to provide their own plug-ins and enhancements.
Mellanox Technologies offers seamless integration between its products and the OpenStack layers, and provides unique functionality that includes application and storage acceleration, network provisioning, automation, and hardware-based security and isolation. Furthermore, using Mellanox interconnect products allows cloud providers to save significant capital and operational expenses through network and I/O consolidation and by increasing the number of virtual machines (VMs) per server.
Mellanox provides a variety of network interface cards (NICs) supporting one or two ports of 10GbE, 40GbE or 56Gb/s InfiniBand. These adapters simultaneously run management, network, storage, messaging and clustering traffic. Furthermore, these adapters create virtual domains within the network that deliver hardware-based isolation and prevent cross-domain traffic interference. In addition, Mellanox Virtual Protocol Interconnect (VPI) switches deliver the industry's most cost-effective and highest-capacity switches, supporting up to 36 ports of 56Gb/s. When deploying large-scale, high-density infrastructures, leveraging Mellanox converged-network VPI solutions translates into fewer switching elements, far fewer optical cables and a simpler network design.
Mellanox integration with OpenStack provides the following benefits:
27. s such as OpenFlow-enabled NICs and switches, and an open architecture. Mellanox's solution for SDN networks is built as an open, industry-standard platform which can deliver a wide range of network applications.
Mellanox integration with SDN/OpenFlow provides the following benefits:
• Maximum performance: Mellanox embedded eSwitch technology on its 40GbE NICs, together with an OpenFlow agent, provides the scalability and performance required for SDN security solutions. The eSwitch enables hypervisor-like functionality in hardware while connecting with SR-IOV. This allows the customer to benefit from both worlds: policy enforcement via the OpenFlow protocol, and SR-IOV accelerated-performance mode. A VM can access the network directly and execute the desired policy at near line-rate performance. See "Network Virtualization on Mellanox Adapters" (chapter 3 of this document) for more information on the eSwitch.
28. sults in significant savings in CPU cycles With a more efficient system in place those saved CPU cycles can be used to accelerate application performance In the following diagram it is clear that by performing hardware offload of the data transfers using the iSER protocol the full capacity of the link is utilized to the maximum of the PCIe limit To summarize network performance is a significant element in the overall delivery of data center services and benefits from high speed interconnects Unfortunately the high CPU overhead associated with traditional storage adapters prevents systems from taking full advantage of these high speed interconnects The 1SER protocol uses RDMA to shift data movement tasks to the network adapter and thus frees up CPU cycles that would otherwise be consumed executing traditional TCP and iSCSI protocols Hence using RDMA based fast interconnects significantly increases data center application performance levels Figure 4 RDMA Acceleration 7000 dz PCle Limit 6000 5000 Ww iSER Phsical Write 4000 iSER A VMs Write lt r iSER 8 VMs Write 6X 3000 gt iSER 16 VMs Write H iSCSI Write 8 vms 4 ZS 2000 iSCSI Write 16 VMs 1000 ZEN M ei o 1 2 4 8 16 32 64 128 256 I O Size KB 13 Mellanox Technologies Rev 1 2 Network Virtualization on Mellanox Adapters 3 Network Virtualization on Mellanox Adapters Single Root IO Virtualization SR IOV
29. switches, which can support only a relatively small number of flows.
Figure 2: OpenFlow Architecture (VMs attached through SR-IOV to the host's eSwitch, with an OpenFlow agent on the host speaking the OpenFlow protocol to an SDN/OpenFlow network controller).
In general, any OpenFlow controller (for example, Floodlight) can be used to interface with the OpenFlow agent on ConnectX-3 adapters, as long as the OpenFlow protocol versions are compatible.

1.2.2 Supported Features
The following OpenFlow match fields are supported:
• Destination MAC address
• VLAN ID
• Ether type
• Source/destination IP address
• Source/destination UDP/TCP port
Notes:
  o Field bitmasks are not supported.
  o The destination MAC must be included.
The following OpenFlow action fields are supported:
• Drop (providing security)
• Set queue (providing QoS on the fabric port: to which egress queue the flow is steered)
Flow counters are currently not supported (roadmap).

2 Accelerating Storage
Data centers rely on communication between compute and storage nodes, as compute servers constantly read and write data from the storage servers. In order to maximize the server's application
30. umentation

Mellanox OFED User Manual: www.mellanox.com > Products > Adapter IB/VPI SW > Linux SW/Drivers, http://www.mellanox.com/content/pages.php?pg=products_dyn&product_family=26&menu_section=34
Mellanox software source packages: https://github.com/mellanox-openstack
Mellanox OpenStack wiki page: https://wiki.openstack.org/wiki/Mellanox-OpenStack
Mellanox approved cables: http://www.mellanox.com/related-docs/user_manuals/Mellanox_approved_cables.pdf
Mellanox Ethernet switch systems (SX10XX) User Manual: http://www.mellanox.com/related-docs/user_manuals/SX10XX_User_Manual.pdf
Mellanox Ethernet adapter cards: http://www.mellanox.com/page/ethernet_cards_overview
Solutions space on the Mellanox Community: http://community.mellanox.com/community/support/solutions
OpenFlow RPM package: http://community.mellanox.com/docs/DOC-1188
OpenStack RPM package: http://community.mellanox.com/docs/DOC-1187
Mellanox eSwitchd installation for OpenFlow and OpenStack: http://community.mellanox.com/docs/DOC-1126
Troubleshooting: http://community.mellanox.com/docs/DOC-1127
Mellanox OFED driver installation and configuration for SR-IOV: http://community.mellanox.com/docs/DOC-1317

1 Solution Overview

1.1 OpenStack
Deploying and ma





