"""
This is an example topology for the distributed cloud emulator (dcemulator).
(c) 2015 by Manuel Peuster <manuel.peuster@upb.de>

This example shows how a user of the emulation tool can define network
topologies with multiple emulated cloud data centers.
The definition is done with a Python API that looks very similar to the
Mininet API (in fact, it is a wrapper for it).

We only specify the topology *between* data centers, not within a single
data center (data-center-internal setups or placements are not of interest;
we want to experiment with VNF chains deployed across multiple PoPs).

The original Mininet API is completely hidden and must not be used by this
script.
"""
import logging

from mininet.log import setLogLevel
from emuvim.dcemulator.net import DCNetwork
from emuvim.api.rest.rest_api_endpoint import RestApiEndpoint
from emuvim.api.zerorpc.compute import ZeroRpcApiEndpoint
from emuvim.api.zerorpc.network import ZeroRpcApiEndpointDCNetwork

logging.basicConfig(level=logging.INFO)


def create_topology1():
    """
    1. Create a data center network object (DCNetwork) with monitoring enabled.
    """
    net = DCNetwork(monitor=True, enable_learning=False)
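    # Assumption about the constructor flags (hedged, check your emuvim
    # version): monitor=True starts the emulator's monitoring framework
    # together with the network, and enable_learning=False disables
    # learning-switch behavior in the default controller, so forwarding
    # paths (chains) have to be installed explicitly via the network API.
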
    """
    1b. Add endpoint APIs for the whole DCNetwork
        to access and control the networking from outside,
        e.g., to set up forwarding paths between compute
        instances aka VNFs (represented by Docker containers), passing through
        different switches and data centers of the emulated topology.
    """
    # create a monitoring API endpoint for backwards compatibility with the zerorpc API
    mon_api = ZeroRpcApiEndpointDCNetwork("0.0.0.0", 5151)
    mon_api.connectDCNetwork(net)
    # run the endpoint server (in another thread, don't block)
    mon_api.start()

    """
    2. Add (logical) data centers to the topology
       (each data center is one "bigswitch" in our simplified
        first prototype).
    """
    dc1 = net.addDatacenter("datacenter1")
    dc2 = net.addDatacenter("datacenter2")
    """
    3. You can add additional SDN switches for data center
       interconnections to the network.
    """
    s1 = net.addSwitch("s1")

    """
    4. Add links between your data centers and additional switches
       to define your topology (a hedged parameter sketch follows below).
       These links can use Mininet's features to limit bw, add delay or jitter.
    """
    net.addLink(dc1, s1)
    net.addLink(s1, dc2)
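    # Hedged sketch of link parameters (exact keyword support depends on the
    # Mininet/Containernet version in use; the values are illustrative only):
    #   net.addLink(dc1, s1, delay="10ms", bw=100)
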
    """
    5. We want to access and control our data centers from the outside,
       e.g., we want to connect an orchestrator to start/stop compute
       resources aka VNFs (represented by Docker containers in the emulated
       network).

       So we need to instantiate API endpoints (e.g., a zerorpc or REST
       interface). Depending on the endpoint implementation, we can connect
       one or more data centers to it, which can then be controlled through
       this API, e.g., start/stop/list compute instances.
    """
    # keep the old zeroRPC interface for the Prometheus metric query test
    zapi1 = ZeroRpcApiEndpoint("0.0.0.0", 4242)
    # connect data centers to this endpoint
    zapi1.connectDatacenter(dc1)
    zapi1.connectDatacenter(dc2)
    # run API endpoint server (in another thread, don't block)
    zapi1.start()

    # create a new instance of an endpoint implementation
    # the REST API handles all compute, networking, and monitoring commands in one endpoint
    api1 = RestApiEndpoint("0.0.0.0", 5001)
    # connect data centers to this endpoint
    api1.connectDatacenter(dc1)
    api1.connectDatacenter(dc2)
    # also connect the whole network; needed for chaining and monitoring
    api1.connectDCNetwork(net)
    # run API endpoint server (in another thread, don't block)
    api1.start()
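
    # Once api1 is running, an orchestrator (or plain curl) can reach it from
    # the host. Hypothetical route, shown for illustration only; the actual
    # routes depend on the emuvim version, so consult its REST API docs:
    #   curl http://127.0.0.1:5001/restapi/datacenter
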
    """
    5.1. For our example, we create a second endpoint to illustrate that
         this is supported by our design. This feature allows us to have
         one API endpoint for each data center. This makes the emulation
         environment more realistic because you can easily create one
         OpenStack-like REST API endpoint for *each* data center.
         This will look like a real-world multi-PoP/data center deployment
         from the perspective of an orchestrator.
    """
    # (dc3 and dc4 would have to be added in step 2 before enabling this)
    # zapi2 = ZeroRpcApiEndpoint("0.0.0.0", 4343)
    # zapi2.connectDatacenter(dc3)
    # zapi2.connectDatacenter(dc4)
    # zapi2.start()

    """
    6. Finally, we are done and can start our network (the emulator).
       We can also enter the Mininet CLI to interact with our compute
       resources (just like in default Mininet).
       But we can also implement fully automated experiments that
       can be executed again and again.
    """
    net.start()
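    # While the CLI is open, the standard Mininet commands (e.g., "nodes",
    # "net", "pingall") work against the emulated topology.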
    net.CLI()
    # when the user types exit in the CLI, we stop the emulator
    net.stop()

def main():
    setLogLevel('info')  # set Mininet loglevel
    create_topology1()


if __name__ == '__main__':
    main()