"""
This is an example topology for the distributed cloud emulator (dcemulator).
(c) 2015 by Manuel Peuster <manuel.peuster@upb.de>


This is an example that shows how a user of the emulation tool can
define network topologies with multiple emulated cloud data centers.

The definition is done with a Python API which looks very similar to the
Mininet API (in fact, it is a wrapper for it).

We only specify the topology *between* data centers, not within a single
data center (data center internal setups or placements are not of interest;
we want to experiment with VNF chains deployed across multiple PoPs).

The original Mininet API is completely hidden and is not used by this
script.
"""
import logging
from dcemulator.net import DCNetwork
from api.zerorpcapi import ZeroRpcApiEndpoint

logging.basicConfig(level=logging.DEBUG)


def create_topology1():
    """
    1. Create a data center network object (DCNetwork)
    """
    net = DCNetwork()

    """
    2. Add (logical) data centers to the topology
       (each data center is one "bigswitch" in our simplified
       first prototype)
    """
    dc1 = net.addDatacenter("dc1")
    dc2 = net.addDatacenter("dc2")
    dc3 = net.addDatacenter("dc3")
    dc4 = net.addDatacenter("dc4")

| 42 | """ |
| 43 | 3. You can add additional SDN switches for data center |
| 44 | interconnections to the network. |
| 45 | """ |
| peusterm | cbcd4c2 | 2015-12-28 11:33:42 +0100 | [diff] [blame] | 46 | s1 = net.addSwitch("s1") |
| peusterm | e4e89d3 | 2016-01-07 09:14:54 +0100 | [diff] [blame] | 47 | |
    """
    4. Add links between your data centers and additional switches
       to define your topology.
       These links can use Mininet's features to limit bw, add delay or jitter.
    """
    net.addLink(dc1, dc2)
    net.addLink("dc1", s1)
    net.addLink(s1, "dc3")
    net.addLink(s1, dc4)

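    # For illustration only (assumption: DCNetwork.addLink forwards link
    # parameters to Mininet, which supports them via TCLink), a
    # bandwidth-limited link with added delay and jitter could be
    # expressed like this:
    #
    #   from mininet.link import TCLink
    #   net.addLink(dc1, dc2, cls=TCLink, bw=100, delay="10ms", jitter="1ms")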
    """
    5. We want to access and control our data centers from the outside,
       e.g., we want to connect an orchestrator to start/stop compute
       resources aka. VNFs (represented by Docker containers in the
       emulated network).

       So we need to instantiate API endpoints (e.g. a zerorpc or REST
       interface). Depending on the endpoint implementations, we can connect
       one or more data centers to it, which can then be controlled through
       this API, e.g., start/stop/list compute instances.
    """
    # create a new instance of an endpoint implementation
    zapi1 = ZeroRpcApiEndpoint("0.0.0.0", 4242)
    # connect data centers to this endpoint
    zapi1.connectDatacenter(dc1)
    zapi1.connectDatacenter(dc2)
    # run API endpoint server (in another thread, don't block)
    zapi1.start()

    """
    5.1. For our example, we create a second endpoint to illustrate that
         this is supported by our design. This feature allows us to have
         one API endpoint for each data center. This makes the emulation
         environment more realistic because you can easily create one
         OpenStack-like REST API endpoint for *each* data center.
         This will look like a real-world multi-PoP/data center deployment
         from the perspective of an orchestrator.
    """
    zapi2 = ZeroRpcApiEndpoint("0.0.0.0", 4343)
    zapi2.connectDatacenter(dc3)
    zapi2.connectDatacenter(dc4)
    zapi2.start()

    """
    6. Finally, we are done and can start our network (the emulator).
       We can also enter the Mininet CLI to interact with our compute
       resources interactively (just like in default Mininet).
       But we can also implement fully automated experiments that
       can be executed again and again.
    """
    net.start()
    net.CLI()
    # when the user types exit in the CLI, we stop the emulator
    net.stop()


def main():
    create_topology1()


if __name__ == '__main__':
    main()