My my, I’ve been meaning to make a few SDN posts for ages! Fortunately my golf game got cancelled today and I figured it was finally time to get this started.
Explaining Stuff
Like “Cloud”, the term Software Defined Networking covers a lot of things, so today we’ll be looking at the part of it that decouples the control plane from the data plane. If you need a refresher: the control plane handles the intelligence in a router/switch, such as building the routing table, and the data plane is concerned with actually forwarding the traffic.
Here is a pretty picture I threw together to show the divide.
With SDN we move the control plane to an SDN controller, so the controller makes all the decisions and the switches just concern themselves with forwarding. This also means the switches can be dumber (and cheaper) than what you might otherwise buy.
Here is a picture that shows the concept: the northbound interface is where we input directions via an API, and the SDN controller then uses OpenFlow (usually) as its southbound protocol to communicate with the switches.
Today we’ll be playing with OpenFlow Manager from Cisco’s DevNet. It connects to the OpenDaylight SDN controller, which in turn connects to an OpenVSwitch placed between a few routers.
Topology
Today’s topology will be two Ubuntu servers: one will run OpenDaylight and Cisco’s OpenFlow Manager, and the other will run OpenVSwitch so it can connect our routers.
Router-wise I’m using two CSRs and two VyOS routers, but the exact routers won’t matter all that much.
Installation & Configurations
OpenDaylight
Installing OpenDaylight is pretty easy because it runs in a self-contained folder! The downside is that it requires Java (BOO!!!!), so I’ll install OpenJDK and set $JAVA_HOME so it doesn’t complain.
sudo apt-get install -y openjdk-8-jre
sudo apt-get install -y openjdk-8-jdk
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-amd64/
Now that that’s done, we’ll just pull the latest version of OpenDaylight using wget.
wget https://nexus.opendaylight.org/content/repositories/public/org/opendaylight/integration/distribution-karaf/0.5.4-Boron-SR4/distribution-karaf-0.5.4-Boron-SR4.tar.gz
Then we simply need to extract the archive and we’re done for now.
tar zxvf distribution-karaf-0.5.4-Boron-SR4.tar.gz
We can launch OpenDaylight by running the karaf script.
the-packet-thrower@SDN01:~/distribution-karaf-0.5.4-Boron-SR4$ ./bin/karaf
Apache Karaf starting up. Press Enter to open the shell now... 8% [=====>    ]

    ________                       ________                .__  .__       .__     __
    \_____  \ ______   ____   ____ \______ \ _____  ___.__.|  | |__| ____ |  |___/  |_
     /   |   \\____ \_/ __ \ /    \ |    |  \\__  \<   |  ||  | |  |/ ___\|  |  \   __\
    /    |    \  |_> >  ___/|   |  \|    `   \/ __ \\___  ||  |_|  / /_/  >   Y  \  |
    \_______  /   __/ \___  >___|  /_______  (____  / ____||____/__\___  /|___|  /__|
            \/|__|        \/     \/        \/     \/\/            /_____/      \/

Hit '<tab>' for a list of available commands
and '[cmd] --help' for help on a specific command.
Hit '<ctrl-d>' or type 'system:shutdown' or 'logout' to shutdown OpenDaylight.
OpenDaylight doesn’t run much out of the gate so we are going to install the following features to support everything we are doing today. The DLUX feature is the web interface so you’ll need at least that if you just want to poke around.
opendaylight-user@root>feature:install odl-restconf-all
opendaylight-user@root>feature:install odl-l2switch-all
opendaylight-user@root>feature:install odl-openflowplugin-all-he
opendaylight-user@root>feature:install odl-dlux-all
opendaylight-user@root>feature:install odl-mdsal-all
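With odl-restconf installed you can already poke the controller directly, before OFM even enters the picture. Here's a minimal stdlib-only Python sketch that asks RESTCONF for the operational topology; the admin/admin credentials match the defaults above, and the URL path follows the Boron-era RESTCONF layout, so treat both as assumptions if your version differs.

```python
import base64
import json
import urllib.request

def restconf_request(base, path, user="admin", password="admin"):
    """Build an authenticated urllib Request for an ODL RESTCONF path."""
    req = urllib.request.Request(base + "/restconf/" + path)
    token = base64.b64encode("{}:{}".format(user, password).encode()).decode()
    req.add_header("Authorization", "Basic " + token)
    req.add_header("Accept", "application/json")
    return req

def topology_summary(base):
    """Fetch the operational topology and count nodes per topology."""
    req = restconf_request(base, "operational/network-topology:network-topology")
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return {t["topology-id"]: len(t.get("node", []))
            for t in data["network-topology"]["topology"]}

# On the lab network you'd run something like:
#   print(topology_summary("http://10.10.2.245:8181"))
```

Once a switch registers later on, the node count should tick up accordingly.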
At this point we can just stay in the console and open up a new tab to set up OpenFlow Manager (or, ya know, run Karaf outside of console mode).
OpenFlow Manager
For OpenFlow Manager we’ll just clone the git repository, which means we’ll need to install git first if we don’t have it.
sudo apt-get install git -y
git clone https://github.com/CiscoDevNet/OpenDaylight-Openflow-App.git
We will also need to install Node.js and npm so that Grunt (the web server it uses) will work.
sudo apt-get install -y npm
sudo apt-get install -y nodejs
sudo apt-get install -y nodejs-legacy
Then we need to go into the directory and run npm install.
cd OpenDaylight-Openflow-App/
sudo npm install
sudo npm install gruntfile --save
Next we will need to adjust the configuration file so that it matches our environment. I’m running OpenFlow Manager on the same server as OpenDaylight but hardcoded the IP for the fun of it.
the-packet-thrower@SDN01:~$ cat OpenDaylight-Openflow-App/ofm/src/common/config/env.module.js
define(['angularAMD'], function(ng) {
  'use strict';
  var config = angular.module('config', [])
    .constant('ENV', {
      baseURL: "http://10.10.2.245:",
      adSalPort: "8181",
      mdSalPort : "8181",
      ofmPort : "8181",
      configEnv : "ENV_DEV",
      odlUserName: 'admin',
      odlUserPassword: 'admin',
      getBaseURL : function(salType){
        if(salType!==undefined){
          var urlPrefix = "";
          if(this.configEnv==="ENV_DEV"){
            urlPrefix = this.baseURL;
          }else{
            urlPrefix = window.location.protocol+"//"+window.location.hostname+":";
          }
          if(salType==="AD_SAL"){
            return urlPrefix + this.adSalPort;
          }else if(salType==="MD_SAL"){
            return urlPrefix + this.mdSalPort;
          }else if(salType==="CONTROLLER"){
            return urlPrefix + this.ofmPort;
          }
        }
        //default behavior
        return "";
Now to run it we just need to type grunt inside the folder.
the-packet-thrower@SDN01:~$ cd OpenDaylight-Openflow-App/
the-packet-thrower@SDN01:~/OpenDaylight-Openflow-App$ grunt
Running "connect:def" (connect) task
Waiting forever...
Started connect web server on http://localhost:9000
OpenVSwitch
Now that the controllers are out of the way, it is time to set up OpenVSwitch. We just need to install openvswitch-switch, and it doesn’t hurt to have bridge-utils as well.
sudo apt-get update -y && sudo apt-get upgrade -y
sudo apt-get install openvswitch-switch bridge-utils
Currently the two CSR routers are connected to OVS01’s ens38 and ens39 interfaces; we’ll leave the VyOS routers disconnected for now.
root@OVS01:~# ifconfig -a
br0       Link encap:Ethernet  HWaddr ee:42:19:98:2d:40
          BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

ens33     Link encap:Ethernet  HWaddr 00:0c:29:bc:ce:6d
          inet addr:10.10.2.246  Bcast:10.10.2.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:febc:ce6d/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:259 errors:0 dropped:2 overruns:0 frame:0
          TX packets:62 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:24026 (24.0 KB)  TX bytes:11071 (11.0 KB)

ens38     Link encap:Ethernet  HWaddr 00:0c:29:bc:ce:77
          BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

ens39     Link encap:Ethernet  HWaddr 00:0c:29:bc:ce:81
          BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
SDN Start your engines!
Just to make sure everything is clean, we’ll verify there is no br0 on the system.
root@OVS01:~# ovs-vsctl del-br br0
root@OVS01:~# ovs-vsctl show
3fcaff8c-44ed-442e-98bc-c13db4f1366d
    ovs_version: "2.5.2"
Currently we can see the routers are not able to communicate with each other.
R01(config-if)#do ping 10.0.0.2 rep 1000
Type escape sequence to abort.
Sending 1000, 100-byte ICMP Echos to 10.0.0.2, timeout is 2 seconds:
........
Success rate is 0 percent (0/8)
The first thing we need to do is create a new bridge and add the router ports into it using the ovs-vsctl command.
root@OVS01:~# ovs-vsctl add-br br0
root@OVS01:~# ovs-vsctl add-port br0 ens38
root@OVS01:~# ovs-vsctl add-port br0 ens39
root@OVS01:~# ovs-vsctl show
3fcaff8c-44ed-442e-98bc-c13db4f1366d
    Bridge "br0"
        Port "ens38"
            Interface "ens38"
        Port "br0"
            Interface "br0"
                type: internal
        Port "ens39"
            Interface "ens39"
    ovs_version: "2.5.2"
To point OVS at OpenDaylight, we need to tell it to use a controller and where that controller lives.
If you see is_connected: true, then it has successfully registered with OpenDaylight; if not, you may have some troubleshooting to do.
root@OVS01:~# ovs-vsctl set-controller br0 tcp:10.10.2.245:6633
root@OVS01:~#
root@OVS01:~# ovs-vsctl show
3fcaff8c-44ed-442e-98bc-c13db4f1366d
    Bridge "br0"
        Controller "tcp:10.10.2.245:6633"
            is_connected: true
        Port "ens38"
            Interface "ens38"
        Port "br0"
            Interface "br0"
                type: internal
        Port "ens39"
            Interface "ens39"
    ovs_version: "2.5.2"
Open a browser to http://10.10.2.245:8181/index.html#/topology (the IP of your SDN01) and log in with admin/admin. After a bit you should see two hosts and a controller; you can drag the diagram around to make it more readable.
Note: OpenDaylight just sees the routers as regular hosts.
We can also dive into some more detail by looking at the Node view.
On the OVS node we can see what flows it has received from the controller, along with other statistics. The ovs-ofctl dump-flows command shows us the flow table, which is similar to a routing table: it tells us what the switch is going to do with each flow the controller has programmed.
Flows are learned via the rules with the CONTROLLER action, which punt traffic up to the controller so it can decide what to do.
root@OVS01:~# ovs-ofctl dump-flows
ovs-ofctl: 'dump-flows' command requires at least 1 arguments
root@OVS01:~# ovs-ofctl dump-flows br0
NXST_FLOW reply (xid=0x4):
 cookie=0x2b00000000000000, duration=1218.413s, table=0, n_packets=0, n_bytes=0, idle_age=1218, priority=100,dl_type=0x88cc actions=CONTROLLER:65535
 cookie=0x2a00000000000000, duration=226.504s, table=0, n_packets=2022, n_bytes=230288, idle_timeout=600, hard_timeout=300, idle_age=44, priority=10,dl_src=00:0c:29:5d:7c:2d,dl_dst=00:0c:29:0b:1f:9f actions=output:2
 cookie=0x2a00000000000001, duration=226.501s, table=0, n_packets=2020, n_bytes=230148, idle_timeout=600, hard_timeout=300, idle_age=44, priority=10,dl_src=00:0c:29:0b:1f:9f,dl_dst=00:0c:29:5d:7c:2d actions=output:1
 cookie=0x2b00000000000000, duration=1216.772s, table=0, n_packets=45, n_bytes=4702, idle_age=8, priority=2,in_port=2 actions=output:1,CONTROLLER:65535
 cookie=0x2b00000000000001, duration=1216.772s, table=0, n_packets=43, n_bytes=4460, idle_age=8, priority=2,in_port=1 actions=output:2,CONTROLLER:65535
 cookie=0x2b00000000000000, duration=1218.427s, table=0, n_packets=0, n_bytes=0, idle_age=1218, priority=0 actions=drop
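To make the matching logic concrete, here's a toy Python model of how a single OpenFlow table picks a rule: every rule whose match fields are all present in the packet's headers is a candidate, and the highest-priority candidate wins. The field names mirror the dump-flows output above, but this is just an illustration of the lookup semantics, not how OVS is actually implemented.

```python
def lookup(flows, packet):
    """Return the highest-priority flow whose match fields all appear in packet."""
    hits = [f for f in flows
            if all(packet.get(k) == v for k, v in f["match"].items())]
    return max(hits, key=lambda f: f["priority"], default=None)

# Simplified versions of the rules in the dump-flows output.
flows = [
    {"priority": 100, "match": {"dl_type": 0x88cc}, "actions": "CONTROLLER:65535"},
    {"priority": 10,  "match": {"dl_src": "00:0c:29:5d:7c:2d",
                                "dl_dst": "00:0c:29:0b:1f:9f"}, "actions": "output:2"},
    {"priority": 2,   "match": {"in_port": 1}, "actions": "output:2,CONTROLLER:65535"},
    {"priority": 0,   "match": {}, "actions": "drop"},
]

pkt = {"in_port": 1, "dl_src": "00:0c:29:5d:7c:2d",
       "dl_dst": "00:0c:29:0b:1f:9f", "dl_type": 0x0800}
print(lookup(flows, pkt)["actions"])   # the learned priority-10 rule wins: output:2
```

An unknown packet that only matches the priority-0 catch-all gets dropped, which is exactly what that last rule in the real table does.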
Port statistics can be viewed with the ovs-ofctl dump-ports command; both of these commands can be filtered by OpenFlow version and so on to show just what you care about.
root@OVS01:~# ovs-ofctl -O OpenFlow13 dump-ports br0
OFPST_PORT reply (OF1.3) (xid=0x2): 3 ports
  port LOCAL: rx pkts=0, bytes=0, drop=0, errs=0, frame=0, over=0, crc=0
           tx pkts=0, bytes=0, drop=0, errs=0, coll=0
           duration=2123.821s
  port  1: rx pkts=2140, bytes=241778, drop=0, errs=0, frame=0, over=0, crc=0
           tx pkts=2436, bytes=272796, drop=0, errs=0, coll=0
           duration=2108.058s
  port  2: rx pkts=2139, bytes=241803, drop=0, errs=0, frame=0, over=0, crc=0
           tx pkts=2436, bytes=272666, drop=0, errs=0, coll=0
           duration=2105.896s
Now that OpenDaylight is out of the way, open up another tab and go to http://10.10.2.245:9000 to see OpenFlow Manager. You should see a similar topology (though it is a bit prettier), and it will also have a Flow Management tab.
The Flow Management tab lets us view flows and also make new ones. You can view an existing rule by clicking the little eyeball.
Notice that the rules here have a priority of 2; OpenFlow matches the highest-priority rules first.
At this point the CSR routers should be able to reach each other; if not, you should sort out your VIRL/GNS3/hypervisor before continuing. All good? Ok, let’s break it! We’ll add a new flow that drops frames received on R01’s port 1.
The Show Preview button will display the actual config that will be sent to the switch; we can then use Send Request to send it on.
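Behind the scenes, OFM is just doing a RESTCONF PUT of a flow body against the controller's config datastore. Here's a rough sketch of building an equivalent drop rule by hand; the flow id, the node id (openflow:&lt;dpid&gt;), and the exact YANG field names below are assumptions based on my reading of the Boron-era openflowplugin model, so treat OFM's Show Preview output as the authoritative payload.

```python
import json

def drop_flow_payload(flow_id, in_port, priority=9999, table=0):
    """Build a (hypothetical) openflowplugin-style flow body that drops
    everything arriving on in_port."""
    return {
        "flow": [{
            "id": flow_id,
            "table_id": table,
            "priority": priority,
            "match": {"in-port": str(in_port)},
            "instructions": {"instruction": [{
                "order": 0,
                "apply-actions": {"action": [{"order": 0, "drop-action": {}}]},
            }]},
        }]
    }

# This would get PUT to a URL shaped roughly like:
#   http://10.10.2.245:8181/restconf/config/opendaylight-inventory:nodes/
#       node/openflow:<dpid>/table/0/flow/<flow-id>
print(json.dumps(drop_flow_payload("drop-r01-port1", 1), indent=2))
```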
Once the config is applied, we can see it on OVS.
root@OVS01:~# ovs-ofctl -O OpenFlow13 dump-flows br0
OFPST_FLOW reply (OF1.3) (xid=0x2):
 cookie=0x2b00000000000000, duration=2881.584s, table=0, n_packets=228, n_bytes=21820, priority=2,in_port=2 actions=output:1,CONTROLLER:65535
 cookie=0x2b00000000000001, duration=2881.584s, table=0, n_packets=216, n_bytes=20688, priority=2,in_port=1 actions=output:2,CONTROLLER:65535
 cookie=0x0, duration=120.361s, table=0, n_packets=22, n_bytes=2228, priority=9999,in_port=1 actions=drop
 cookie=0x2b00000000000000, duration=2883.225s, table=0, n_packets=0, n_bytes=0, priority=100,dl_type=0x88cc actions=CONTROLLER:65535
 cookie=0x2b00000000000000, duration=2883.239s, table=0, n_packets=0, n_bytes=0, priority=0 actions=drop
On R01 we can no longer ping R02.
R01#ping 10.0.0.2
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.0.0.2, timeout is 2 seconds:
.....
Success rate is 0 percent (0/5)
If we delete the rule we made, we can ping again!
R01(config-if)#do ping 10.0.0.2
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.0.0.2, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/9/18 ms
Let’s bring the VyOS routers into the mix so we have more to play with.
root@OVS01:~# ifconfig ens40 up
root@OVS01:~# ifconfig ens41 up
root@OVS01:~# ovs-vsctl add-port br0 ens40
root@OVS01:~# ovs-vsctl add-port br0 ens41
root@OVS01:~# ovs-vsctl show
3fcaff8c-44ed-442e-98bc-c13db4f1366d
    Bridge "br0"
        Controller "tcp:10.10.2.245:6633"
            is_connected: true
        Port "ens41"
            Interface "ens41"
        Port "ens38"
            Interface "ens38"
        Port "br0"
            Interface "br0"
                type: internal
        Port "ens40"
            Interface "ens40"
        Port "ens39"
            Interface "ens39"
    ovs_version: "2.5.2"
Right now we have four routers in the same subnet connected to a flat switch. We’ll use the output action to make it so R01 can only talk to R02 (and vice versa), and R03 can only talk to R04 (and vice versa). We’ll also enable OSPF on all the routers to verify things are only communicating where they should be.
R01(config-if)#do ping 10.0.0.2
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.0.0.2, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/8/18 ms
R01(config-if)#do ping 10.0.0.3
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.0.0.3, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/5/10 ms
R01(config-if)#do ping 10.0.0.4
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.0.0.4, timeout is 2 seconds:
!!!!!
*Jul 16 19:50:58.050: %OSPF-5-ADJCHG: Process 1, Nbr 10.10.2.192 on GigabitEthernet2 from LOADING to FULL, Loading Done
*Jul 16 19:53:30.798: %OSPF-5-ADJCHG: Process 1, Nbr 10.10.2.194 on GigabitEthernet2 from LOADING to FULL, Loading Done
*Jul 16 19:53:40.168: %OSPF-5-ADJCHG: Process 1, Nbr 10.10.2.193 on GigabitEthernet2 from LOADING to FULL, Loading Done
If we look at OpenDaylight, we should see four host devices before we continue on. Once ODL is caught up, OFM should be too.
Starting with R01 to R02, we’ll make a flow that sends traffic from port 1 to port 2.
Once it is applied, R01 can only talk to R02, but R02 can still talk to everybody.
R01(config-if)#do ping 10.0.0.2
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.0.0.2, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/9/18 ms
R01(config-if)#do ping 10.0.0.3
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.0.0.3, timeout is 2 seconds:
.....
Success rate is 0 percent (0/5)
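Repeating this for the other three directions gets tedious in the GUI, but the whole isolation scheme is really just a tiny function: for each port pair, emit two rules that forward one port straight to the other. This is only a sketch of the rule set we're clicking out in OFM, not something OFM consumes directly.

```python
def pair_flows(pairs, priority=100):
    """For each (a, b) port pair, emit two rules: a -> b and b -> a."""
    flows = []
    for a, b in pairs:
        flows.append({"priority": priority, "match": {"in_port": a}, "output": b})
        flows.append({"priority": priority, "match": {"in_port": b}, "output": a})
    return flows

# R01/R02 sit on OVS ports 1 and 2, R03/R04 on ports 3 and 4.
for f in pair_flows([(1, 2), (3, 4)]):
    print(f)
```

Four directed rules total, which matches what we end up with on the switch.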
When we are done we should only have one OSPF neighbor per router.
R01(config-if)#do sh ip ospf ne

Neighbor ID     Pri   State           Dead Time   Address         Interface
10.10.2.192       1   FULL/DR         00:00:36    10.0.0.2        GigabitEthernet2

vyos@VyOS01# run show ip ospf neighbor

Neighbor ID     Pri   State           Dead Time   Address         Interface       RXmtL RqstL DBsmL
10.10.2.194       1   Full/DR         36.543s     10.0.0.4        eth1:10.0.0.3   0     0     0
[edit]
HPE SDN Controller
There are several SDN controllers on the market, each with their own pros and cons. HPE makes a decent one that you can install in your lab since it uses an honor-based license, and HPE also offers trials for the various SDN applications that run on the controller. We’ll briefly look at the Flow Maker Deluxe application.
The SDN controller is actually an appliance nowadays so installing is just a matter of deploying the OVA and setting the proper IP.
Though, to turn what I just said into a lie, we will need to edit the controller config a bit to run Flow Maker Deluxe. We need to add -Dsdn.signedJar=none \ to the $JMX_OPTS \ section of the dmk.sh file and restart the service.
sdn@SDNC01:~$ sudo service sdnc stop
sdn@SDNC01:~$ sudo vi /opt/sdn/virgo/bin/dmk.sh
        cd $KERNEL_HOME; exec $JAVA_EXECUTABLE \
            $JAVA_OPTS \
            $DEBUG_OPTS \
            $JMX_OPTS \
            -XX:+HeapDumpOnOutOfMemoryError \
            -XX:ErrorFile=$KERNEL_HOME/serviceability/error.log \
            -XX:HeapDumpPath=$KERNEL_HOME/serviceability/heap_dump.hprof \
            -Dsdn.signedJar=none \

sdn@SDNC01:~$ sudo service sdnc start
Once we are into the actual controller, we’ll find a topology page similar to the ones ODL and OFM have, under OpenFlow Topology.
This page is more functional than the other controllers’ since it lets us do things like a path trace to show what path traffic will take.
Flow Maker Deluxe is a paid ($$$) application that can view and edit flows like Cisco’s OFM can. We’ll add rules similar to the ones we made above.
When we are done it will look like this. One thing to point out is that HPE automatically adds a couple hundred rules to allow for flexibility (and to make the ovs-ofctl output bloated 🙂).
HPE also discovers hosts by watching ARP, BDDP, and DHCP traffic, so you may need to generate such traffic before HPE sees them.
root@OVS01:~# ovs-ofctl dump-flows br0
NXST_FLOW reply (xid=0x4):
 cookie=0x633f0000faded000, duration=1236.662s, table=0, n_packets=0, n_bytes=0, idle_age=1236, priority=57343,dl_type=0x8999 actions=CONTROLLER:65535
 cookie=0xe8cd0000babadada, duration=1236.662s, table=0, n_packets=42, n_bytes=2520, idle_age=852, priority=8191,arp actions=CONTROLLER:65535,NORMAL
 cookie=0xfabe0000babadada, duration=1236.662s, table=0, n_packets=0, n_bytes=0, idle_age=1236, priority=28671,udp,tp_src=68,tp_dst=67 actions=CONTROLLER:65535,NORMAL
 cookie=0xfabe0000babadada, duration=1236.662s, table=0, n_packets=0, n_bytes=0, idle_age=1236, priority=28671,udp,tp_src=67,tp_dst=68 actions=CONTROLLER:65535,NORMAL
 cookie=0xabab015d4da28a56, duration=960.698s, table=0, n_packets=118, n_bytes=11103, idle_age=1, priority=100,in_port=1 actions=output:2
 cookie=0xabab015d4da2d726, duration=941.034s, table=0, n_packets=116, n_bytes=10847, idle_age=6, priority=100,in_port=2 actions=output:1
 cookie=0xabab015d4da3997b, duration=891.290s, table=0, n_packets=142, n_bytes=12480, idle_age=8, priority=100,in_port=3 actions=output:4
 cookie=0xabab015d4da3cea7, duration=877.678s, table=0, n_packets=126, n_bytes=10908, idle_age=10, priority=100,in_port=4 actions=output:3
 cookie=0xffff000000000000, duration=1236.662s, table=0, n_packets=483, n_bytes=45579, idle_age=878, priority=0 actions=resubmit(,1)
 cookie=0xffff000000000000, duration=1236.662s, table=1, n_packets=483, n_bytes=45579, idle_age=878, priority=0 actions=resubmit(,2)
 cookie=0xffff000000000000, duration=1236.662s, table=2, n_packets=483, n_bytes=45579, idle_age=878, priority=0 actions=resubmit(,3)
 cookie=0xffff000000000000, duration=1236.662s, table=3, n_packets=483, n_bytes=45579, idle_age=878, priority=0 actions=resubmit(,4)
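Those resubmit(,1) through resubmit(,4) rules chain the tables together: a packet that matches nothing useful in one table hits the priority-0 rule, which hands it to the next table. A toy Python sketch of that pipeline behavior (again, an illustration of the semantics, not real OVS internals):

```python
def match(rule, pkt):
    """True when every match field of the rule appears in the packet."""
    return all(pkt.get(k) == v for k, v in rule["match"].items())

def pipeline(tables, pkt, table=0):
    """Walk tables the way a resubmit chain does: the best match in the
    current table either acts on the packet or resubmits it to another table."""
    for rule in sorted(tables.get(table, []), key=lambda r: -r["priority"]):
        if match(rule, pkt):
            action = rule["action"]
            if action.startswith("resubmit:"):
                return pipeline(tables, pkt, int(action.split(":")[1]))
            return action
    return "drop"  # table-miss with no rule at all

tables = {
    0: [{"priority": 100, "match": {"in_port": 1}, "action": "output:2"},
        {"priority": 0,   "match": {}, "action": "resubmit:1"}],
    1: [{"priority": 0,   "match": {}, "action": "resubmit:2"}],
    2: [],  # nothing here, so resubmitted packets die
}

print(pipeline(tables, {"in_port": 1}))   # output:2
print(pipeline(tables, {"in_port": 9}))   # falls through tables 0 -> 1 -> 2: drop
```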
And we can see we have the same results as before.
R01(config-if)#do sh ip ospf ne

Neighbor ID     Pri   State           Dead Time   Address         Interface
10.10.2.192       1   FULL/DR         00:00:30    10.0.0.2        GigabitEthernet2

vyos@VyOS01# run show ip ospf neighbor

Neighbor ID     Pri   State           Dead Time   Address         Interface       RXmtL RqstL DBsmL
10.10.2.194       1   Full/DR         33.013s     10.0.0.4        eth1:10.0.0.3   0     0     0
[edit]
Mininet
Mininet is a handy tool that can spawn OpenFlow hosts and switches for labbing purposes. The easiest way to install it is to use the Mininet VM, but you can build it from source if you want to.
Mininet can build an OpenFlow network with a single command; this example will build five switches in a line topology with five hosts connected to them. We can also change the topology on the command line, or write a Python file if we want something more custom for the lab. The pingall command will generate traffic so the controller can learn about the hosts.
mininet@mininet-vm:~$ sudo mn --topo linear,5 --mac --controller=remote,ip=10.10.2.245,port=6633 --switch ovs,protocols=OpenFlow13
*** Creating network
*** Adding controller
*** Adding hosts:
h1 h2 h3 h4 h5
*** Adding switches:
s1 s2 s3 s4 s5
*** Adding links:
(h1, s1) (h2, s2) (h3, s3) (h4, s4) (h5, s5) (s2, s1) (s3, s2) (s4, s3) (s5, s4)
*** Configuring hosts
h1 h2 h3 h4 h5
*** Starting controller
c0
*** Starting 5 switches
s1 s2 s3 s4 s5 ...
*** Starting CLI:
mininet> pingall
*** Ping: testing ping reachability
h1 -> X h3 h4 h5
h2 -> h1 h3 h4 h5
h3 -> h1 h2 h4 h5
h4 -> h1 h2 h3 h5
h5 -> h1 h2 h3 h4
*** Results: 5% dropped (19/20 received)
mininet> pingall
*** Ping: testing ping reachability
h1 -> h2 h3 h4 h5
h2 -> h1 h3 h4 h5
h3 -> h1 h2 h4 h5
h4 -> h1 h2 h3 h5
h5 -> h1 h2 h3 h4
*** Results: 0% dropped (20/20 received)
We can also do various host-to-host actions aside from the ping we did; Mininet supports everything from traceroutes to iperf tests. Here is an example of running a web server on one host and doing a wget from another.
mininet> h1 python -m SimpleHTTPServer 80 &
mininet> h2 get -O - h1
bash: get: command not found
mininet> h2 wget -O - h1
--2017-07-17 06:11:13--  http://10.0.0.1/
Connecting to 10.0.0.1:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 914
Saving to: ‘STDOUT’

 0% [                                       ] 0           --.-K/s
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 3.2 Final//EN"><html>
<title>Directory listing for /</title>
<body>
<h2>Directory listing for /</h2>
<hr>
<ul>
<li><a href=".bash_history">.bash_history</a>
<li><a href=".bash_logout">.bash_logout</a>
<li><a href=".bashrc">.bashrc</a>
<li><a href=".cache/">.cache/</a>
<li><a href=".gitconfig">.gitconfig</a>
<li><a href=".mininet_history">.mininet_history</a>
<li><a href=".profile">.profile</a>
<li><a href=".rnd">.rnd</a>
<li><a href=".viminfo">.viminfo</a>
<li><a href=".wireshark/">.wireshark/</a>
<li><a href="install-mininet-vm.sh">install-mininet-vm.sh</a>
<li><a href="loxigen/">loxigen/</a>
<li><a href="mininet/">mininet/</a>
<li><a href="oflops/">oflops/</a>
<li><a href="oftest/">oftest/</a>
<li><a href="openflow/">openflow/</a>
<li><a href="pox/">pox/</a>
<li><a href="tree.py">tree.py</a>
<li><a href="triangle.py">triangle.py</a>
</ul>
<hr>
</body>
</html>
100%[======================================>] 914         --.-K/s   in 0s

2017-07-17 06:11:13 (2.00 MB/s) - written to stdout [914/914]
Python
Let’s wrap things up by looking at a simple Python topology: we will add three switches in a triangle with two hosts hanging off of it.
mininet@mininet-vm:~$ vi triangle.py
from mininet.topo import Topo

class MyTopo( Topo ):
    "Simple topology example."

    def __init__( self ):
        "Create custom topo."

        # Initialize topology
        Topo.__init__( self )

        # Add hosts and switches
        leftHost = self.addHost( 'h1' )
        rightHost = self.addHost( 'h2' )
        topSwitch = self.addSwitch( 's1' )
        leftSwitch = self.addSwitch( 's2' )
        rightSwitch = self.addSwitch( 's3' )

        # Add links
        self.addLink( leftHost, leftSwitch )
        self.addLink( topSwitch, leftSwitch )
        self.addLink( topSwitch, rightSwitch )
        self.addLink( leftSwitch, rightSwitch )
        self.addLink( rightSwitch, rightHost )

topos = { 'mytopo': ( lambda: MyTopo() ) }
"triangle.py" 28L, 757C written

mininet@mininet-vm:~$ sudo mn --custom triangle.py --topo mytopo --controller=remote,ip=10.10.2.245,port=6633 --switch ovs,protocols=OpenFlow13
*** Creating network
*** Adding controller
*** Adding hosts:
h1 h2
*** Adding switches:
s1 s2 s3
*** Adding links:
(h1, s2) (s1, s2) (s1, s3) (s2, s3) (s3, h2)
*** Configuring hosts
h1 h2
*** Starting controller
c0
*** Starting 3 switches
s1 s2 s3 ...
*** Starting CLI:
mininet> pingall
*** Ping: testing ping reachability
h1 -> X
h2 -> h1
*** Results: 50% dropped (1/2 received)
mininet> pingall
*** Ping: testing ping reachability
h1 -> h2
h2 -> h1
*** Results: 0% dropped (2/2 received)
Conclusion
This has been a quick intro to the fun world of SDN. Next time I’ll probably look at using real devices instead of OpenVSwitch, and might start poking at the Python / REST API side of things. Then we’ll probably play with ACI and NSX.