"""
This is an example topology for the distributed cloud emulator (dcemulator).
(c) 2015 by Manuel Peuster <manuel.peuster@upb.de>

This is an example that shows how a user of the emulation tool can
define network topologies with multiple emulated cloud data centers.
The definition is done with a Python API which looks very similar to the
Mininet API (in fact, it is a wrapper for it).

We only specify the topology *between* data centers, not within a single
data center (data-center-internal setups or placements are not of interest;
we want to experiment with VNF chains deployed across multiple PoPs).

The original Mininet API is completely hidden and should not be used by this
script.
"""
import logging

from mininet.log import setLogLevel
from emuvim.dcemulator.net import DCNetwork
from emuvim.api.rest.rest_api_endpoint import RestApiEndpoint
from emuvim.api.zerorpc.compute import ZeroRpcApiEndpoint
from emuvim.api.zerorpc.network import ZeroRpcApiEndpointDCNetwork

logging.basicConfig(level=logging.INFO)


def create_topology1():
    """
    1. Create a data center network object (DCNetwork)
    """
    net = DCNetwork(monitor=True, enable_learning=False)

    """
    1b. Add a monitoring agent to the DCNetwork
    """
    # keep the old zeroRPC interface to test the Prometheus metric query
    mon_api = ZeroRpcApiEndpointDCNetwork("0.0.0.0", 5151)
    mon_api.connectDCNetwork(net)

    """
    2. Add (logical) data centers to the topology
       (each data center is one "bigswitch" in our simplified
        first prototype)
    """
    dc1 = net.addDatacenter("datacenter1")
    dc2 = net.addDatacenter("datacenter2")
    dc3 = net.addDatacenter("long_data_center_name3")
    dc4 = net.addDatacenter(
        "datacenter4",
        metadata={"mydata": "we can also add arbitrary metadata to each DC"})

    """
    3. You can add additional SDN switches for data center
       interconnections to the network.
    """
    s1 = net.addSwitch("s1")

    """
    4. Add links between your data centers and additional switches
       to define your topology.
       These links can use Mininet's features to limit bw and add delay or jitter.
    """
    net.addLink(dc1, dc2)
    net.addLink("datacenter1", s1)
    net.addLink(s1, dc3)
    net.addLink(s1, "datacenter4")
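    # A hedged sketch of the link shaping mentioned above: DCNetwork wraps
    # Mininet, whose TCLink supports bandwidth/delay/jitter/loss parameters.
    # Assuming addLink forwards such keyword arguments to Mininet, a shaped
    # inter-PoP link could look like this (values are illustrative only):
    #
    #     net.addLink(dc1, s1, bw=10, delay="10ms", jitter="1ms", loss=1)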

    """
    5. We want to access and control our data centers from the outside,
       e.g., we want to connect an orchestrator to start/stop compute
       resources aka. VNFs (represented by Docker containers in the
       emulated network).

       So we need to instantiate API endpoints (e.g., a zerorpc or REST
       interface). Depending on the endpoint implementations, we can connect
       one or more data centers to it, which can then be controlled through
       this API, e.g., start/stop/list compute instances.
    """
    # keep the old zeroRPC interface for the Prometheus metric query test
    zapi1 = ZeroRpcApiEndpoint("0.0.0.0", 4242)
    # connect data centers to this endpoint
    zapi1.connectDatacenter(dc1)
    zapi1.connectDatacenter(dc2)
    # run the API endpoint server (in another thread, don't block)
    zapi1.start()

    # create a new instance of an endpoint implementation
    api1 = RestApiEndpoint("127.0.0.1", 5000)
    # connect data centers to this endpoint
    api1.connectDatacenter(dc1)
    api1.connectDatacenter(dc2)
    api1.connectDatacenter(dc3)
    api1.connectDatacenter(dc4)
    # also connect the whole network; needed for chaining and monitoring
    api1.connectDCNetwork(net)
    # run the API endpoint server (in another thread, don't block)
    api1.start()

    """
    6. Finally we are done and can start our network (the emulator).
       We can also enter the Mininet CLI to interact with our compute
       resources interactively (just like in default Mininet).
       But we can also implement fully automated experiments that
       can be executed again and again.
    """
    net.start()
    net.CLI()
    # when the user types exit in the CLI, we stop the emulator
    net.stop()
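    # Instead of entering the CLI, a fully automated experiment could drive
    # the emulator directly. A hedged sketch, assuming the data center
    # objects expose startCompute/stopCompute (the VNF name and Docker image
    # below are illustrative, not part of this example):
    #
    #     net.start()
    #     dc1.startCompute("vnf1", image="ubuntu:trusty")
    #     # ... run measurements against the deployed VNF ...
    #     dc1.stopCompute("vnf1")
    #     net.stop()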


def main():
    setLogLevel('info')  # set Mininet loglevel
    create_topology1()


if __name__ == '__main__':
    main()