"""
This is an example topology for the distributed cloud emulator (dcemulator).
(c) 2015 by Manuel Peuster <manuel.peuster@upb.de>

This is an example that shows how a user of the emulation tool can
define network topologies with multiple emulated cloud data centers.

The definition is done with a Python API which looks very similar to the
Mininet API (in fact it is a wrapper for it).

We only specify the topology *between* data centers, not within a single
data center (data center internal setups or placements are not of interest,
we want to experiment with VNF chains deployed across multiple PoPs).

The original Mininet API has to be completely hidden and not be used by this
script.
"""
import logging
from mininet.log import setLogLevel
from emuvim.dcemulator.net import DCNetwork
from emuvim.api.zerorpc.compute import ZeroRpcApiEndpoint
from emuvim.api.zerorpc.network import ZeroRpcApiEndpointDCNetwork

logging.basicConfig(level=logging.INFO)
def create_topology1():
    # 1. Create a data center network object (DCNetwork)
    net = DCNetwork()

    # 1b. Add a monitoring agent to the DCNetwork
    mon_api = ZeroRpcApiEndpointDCNetwork("0.0.0.0", 5151)
    mon_api.connectDCNetwork(net)
    # 2. Add (logical) data centers to the topology
    #    (each data center is one "bigswitch" in our simplified
    #    first prototype)
    dc1 = net.addDatacenter("datacenter1")
    dc2 = net.addDatacenter("datacenter2")
    dc3 = net.addDatacenter("long_data_center_name3")
    dc4 = net.addDatacenter(
        "datacenter4",
        metadata={"mydata": "we can also add arbitrary metadata to each DC"})
    # 3. You can add additional SDN switches for data center
    #    interconnections to the network.
    s1 = net.addSwitch("s1")
    # 4. Add links between your data centers and additional switches
    #    to define your topology.
    #    These links can use Mininet's features to limit bw, add delay or jitter.
    net.addLink("datacenter1", s1)
    net.addLink(s1, "datacenter4")
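    # Illustrative sketch (not part of the original example): addLink can
    # take Mininet-style link options that are forwarded to the underlying
    # Mininet link, e.g. to emulate a bandwidth-limited, high-latency WAN
    # connection between two PoPs. The parameter names below follow
    # Mininet's TCLink and are an assumption here:
    #
    #   net.addLink(dc1, dc2, delay="20ms", bw=10)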
    # 5. We want to access and control our data centers from the outside,
    #    e.g., we want to connect an orchestrator to start/stop compute
    #    resources aka. VNFs (represented by Docker containers in the
    #    emulated network).
    #    So we need to instantiate API endpoints (e.g. a zerorpc or REST
    #    interface). Depending on the endpoint implementations, we can connect
    #    one or more data centers to it, which can then be controlled through
    #    this API, e.g., start/stop/list compute instances.
    # create a new instance of an endpoint implementation
    zapi1 = ZeroRpcApiEndpoint("0.0.0.0", 4242)
    # connect data centers to this endpoint
    zapi1.connectDatacenter(dc1)
    zapi1.connectDatacenter(dc2)
    zapi1.connectDatacenter(dc3)
    zapi1.connectDatacenter(dc4)
    # run API endpoint server (in another thread, don't block)
    zapi1.start()
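    # Hedged sketch (not part of the original example): an external
    # orchestrator could now talk to this endpoint with a zerorpc client.
    # The RPC method name used below is hypothetical:
    #
    #   import zerorpc
    #   c = zerorpc.Client()
    #   c.connect("tcp://127.0.0.1:4242")
    #   c.compute_list()  # hypothetical call to list compute instances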
    # 5.1. For our example, we create a second endpoint to illustrate that
    #      this is supported by our design. This feature allows us to have
    #      one API endpoint for each data center. This makes the emulation
    #      environment more realistic because you can easily create one
    #      OpenStack-like REST API endpoint for *each* data center.
    #      This will look like a real-world multi-PoP/data center deployment
    #      from the perspective of an orchestrator.
    zapi2 = ZeroRpcApiEndpoint("0.0.0.0", 4343)
    zapi2.connectDatacenter(dc3)
    zapi2.connectDatacenter(dc4)
    zapi2.start()
    # 6. Finally we are done and can start our network (the emulator).
    #    We can also enter the Mininet CLI to interact with our compute
    #    resources interactively (just like in default Mininet).
    #    But we can also implement fully automated experiments that
    #    can be executed again and again.
    net.start()
    net.CLI()
    # when the user types exit in the CLI, we stop the emulator
    net.stop()
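    # Hedged sketch (not in the original example): instead of the interactive
    # CLI, an automated experiment could drive the data center API directly.
    # The startCompute/stopCompute calls and the VNF names below are
    # assumptions for illustration:
    #
    #   vnf1 = dc1.startCompute("vnf1")
    #   vnf2 = dc2.startCompute("vnf2")
    #   net.ping([vnf1, vnf2])
    #   dc1.stopCompute("vnf1")
    #   dc2.stopCompute("vnf2")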
def main():
    setLogLevel('info')  # set Mininet loglevel
    create_topology1()


if __name__ == '__main__':
    main()