# src/emuvim/examples/monitoring_demo_topology.py
1 """
2 This is an example topology for the distributed cloud emulator (dcemulator).
3 (c) 2015 by Manuel Peuster <manuel.peuster@upb.de>
4
5
6 This is an example that shows how a user of the emulation tool can
7 define network topologies with multiple emulated cloud data centers.
8
9 The definition is done with a Python API which looks very similar to the
10 Mininet API (in fact it is a wrapper for it).
11
12 We only specify the topology *between* data centers not within a single
13 data center (data center internal setups or placements are not of interest,
14 we want to experiment with VNF chains deployed across multiple PoPs).
15
16 The original Mininet API has to be completely hidden and not be used by this
17 script.
18 """
import logging
from mininet.log import setLogLevel
from emuvim.dcemulator.net import DCNetwork
from emuvim.api.zerorpc.compute import ZeroRpcApiEndpoint
from emuvim.api.zerorpc.network import ZeroRpcApiEndpointDCNetwork

logging.basicConfig(level=logging.INFO)

def create_topology1():
    """
    1. Create a data center network object (DCNetwork) with monitoring enabled
    """
    net = DCNetwork(monitor=True, enable_learning=False)
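    # Note on the constructor flags (an assumption based on their names, not
    # verified against this code base): monitor=True attaches the emulator's
    # monitoring framework to the network, and enable_learning=False disables
    # MAC learning in the switches so that forwarding paths between VNFs have
    # to be set up explicitly, e.g., via the network API started below.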

    """
    1b. Add an endpoint API for the whole DCNetwork, to access and control
    the networking from outside, e.g., to set up forwarding paths between
    compute instances aka VNFs (represented by Docker containers) that pass
    through different switches and data centers of the emulated topology.
    """
    mon_api = ZeroRpcApiEndpointDCNetwork("0.0.0.0", 5151)
    mon_api.connectDCNetwork(net)
    mon_api.start()
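    # An external tool (e.g., an orchestrator or the emulator's CLI) can now
    # attach a zerorpc client to this endpoint. The sketch below only shows
    # how such a client would connect; the remote method names depend on what
    # ZeroRpcApiEndpointDCNetwork actually exposes and are not listed here.
    #
    #   import zerorpc
    #   net_api_client = zerorpc.Client()
    #   net_api_client.connect("tcp://127.0.0.1:5151")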

    """
    2. Add (logical) data centers to the topology
    (each data center is one "bigswitch" in our simplified
    first prototype)
    """
    dc1 = net.addDatacenter("datacenter1")
    dc2 = net.addDatacenter("datacenter2")
    # dc3 = net.addDatacenter("long_data_center_name3")
    # dc4 = net.addDatacenter(
    #     "datacenter4",
    #     metadata={"mydata": "we can also add arbitrary metadata to each DC"})

    """
    3. You can add additional SDN switches to the network to interconnect
    the data centers.
    """
    s1 = net.addSwitch("s1")

63 """
64 4. Add links between your data centers and additional switches
65 to define you topology.
66 These links can use Mininet's features to limit bw, add delay or jitter.
67 """
68 #net.addLink(dc1, dc2, delay="10ms")
69 #net.addLink(dc1, dc2)
70 net.addLink(dc1, s1)
71 net.addLink(s1, dc2)
72 #net.addLink("datacenter1", s1, delay="20ms")
73 #net.addLink(s1, dc3)
74 #net.addLink(s1, "datacenter4")
75
76
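    # A sketch of a link with emulated properties: delay is already used above;
    # bw (in Mbit/s) and loss (in %) are standard Mininet TCLink parameters and
    # are assumed to be passed through by DCNetwork.addLink().
    # net.addLink(dc1, s1, delay="5ms", bw=10, loss=1)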
77 """
78 5. We want to access and control our data centers from the outside,
79 e.g., we want to connect an orchestrator to start/stop compute
80 resources aka. VNFs (represented by Docker containers in the emulated)
81
82 So we need to instantiate API endpoints (e.g. a zerorpc or REST
83 interface). Depending on the endpoint implementations, we can connect
84 one or more data centers to it, which can then be controlled through
85 this API, e.g., start/stop/list compute instances.
86 """
87 # create a new instance of a endpoint implementation
88 zapi1 = ZeroRpcApiEndpoint("0.0.0.0", 4242)
89 # connect data centers to this endpoint
90 zapi1.connectDatacenter(dc1)
91 zapi1.connectDatacenter(dc2)
92 #zapi1.connectDatacenter(dc3)
93 #zapi1.connectDatacenter(dc4)
94 # run API endpoint server (in another thread, don't block)
95 zapi1.start()
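    # An orchestrator could now attach a zerorpc client to this endpoint and
    # start/stop compute instances. The connection below uses the standard
    # zerorpc client API; the remote method names and arguments shown are an
    # assumption and have to match what ZeroRpcApiEndpoint actually exposes:
    #
    #   import zerorpc
    #   compute_client = zerorpc.Client()
    #   compute_client.connect("tcp://127.0.0.1:4242")
    #   # e.g., something like:
    #   # compute_client.compute_action_start("datacenter1", "vnf1")
    #   # compute_client.compute_list("datacenter1")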

    """
    5.1. For our example, we create a second endpoint to illustrate that
    this is supported by our design. This feature allows us to have
    one API endpoint for each data center. This makes the emulation
    environment more realistic because you can easily create one
    OpenStack-like REST API endpoint for *each* data center.
    This will look like a real-world multi-PoP/data center deployment
    from the perspective of an orchestrator.
    """
    # zapi2 = ZeroRpcApiEndpoint("0.0.0.0", 4343)
    # zapi2.connectDatacenter(dc3)
    # zapi2.connectDatacenter(dc4)
    # zapi2.start()

    """
    6. Finally, we are done and can start our network (the emulator).
    We can also enter the Mininet CLI to interact with our compute
    resources interactively (just like in default Mininet).
    But we can also implement fully automated experiments that can be
    executed again and again (see the commented sketch below).
    """
    net.start()
    net.CLI()
    # when the user types exit in the CLI, we stop the emulator
    net.stop()
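    # Instead of the interactive CLI, a fully automated experiment could start
    # compute instances directly and shut the network down afterwards. The
    # startCompute() call, its arguments, and the Mininet-style cmd()/IP()
    # methods on the returned instances are assumptions about the emulator's
    # Python API and may differ in this code base:
    #
    #   net.start()
    #   vnf1 = dc1.startCompute("vnf1")
    #   vnf2 = dc2.startCompute("vnf2")
    #   print(vnf1.cmd("ping -c 3 %s" % vnf2.IP()))
    #   net.stop()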


def main():
    setLogLevel('info')  # set Mininet loglevel
    create_topology1()


if __name__ == '__main__':
    main()