[osm/vim-emu.git] / emuvim / example_topology.py
"""
This is an example topology for the distributed cloud emulator (dcemulator).
(c) 2015 by Manuel Peuster <manuel.peuster@upb.de>

This example shows how a user of the emulation tool can define network
topologies with multiple emulated cloud data centers.

The definition is done with a Python API which looks very similar to the
Mininet API (in fact, it is a wrapper for it).

We only specify the topology *between* data centers, not within a single
data center (data center internal setups or placements are not of interest;
we want to experiment with VNF chains deployed across multiple PoPs).

The original Mininet API has to be completely hidden and must not be used
by this script.
"""
import logging

from dcemulator.net import DCNetwork
from api.zerorpcapi import ZeroRpcApiEndpoint

logging.basicConfig(level=logging.INFO)

def create_topology1():
    """
    1. Create a data center network object (DCNetwork)
    """
    net = DCNetwork()

    """
    2. Add (logical) data centers to the topology
       (each data center is one "bigswitch" in our simplified
       first prototype)
    """
    dc1 = net.addDatacenter("dc1")
    dc2 = net.addDatacenter("dc2")
    dc3 = net.addDatacenter("dc3")
    dc4 = net.addDatacenter("dc4")

    """
    3. You can add additional SDN switches for data center
       interconnections to the network.
    """
    s1 = net.addSwitch("s1")

    """
    4. Add links between your data centers and additional switches
       to define your topology.
       These links can use Mininet's features to limit bw, add delay or jitter.
    """
    net.addLink(dc1, dc2)
    net.addLink("dc1", s1)
    net.addLink(s1, "dc3")
    net.addLink(s1, dc4)

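    # Sketch of a link with limited properties, as mentioned in step 4.
    # It is an assumption here that the wrapper forwards these keyword
    # arguments unchanged to Mininet's TCLink; a rate-limited, delayed
    # link would then look like:
    # net.addLink(dc1, dc3, cls=TCLink, bw=10, delay='5ms')
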
    """
    5. We want to access and control our data centers from the outside,
       e.g., we want to connect an orchestrator to start/stop compute
       resources aka. VNFs (represented by Docker containers in the
       emulated network).

       So we need to instantiate API endpoints (e.g., a zerorpc or REST
       interface). Depending on the endpoint implementations, we can connect
       one or more data centers to it, which can then be controlled through
       this API, e.g., start/stop/list compute instances.
    """
    # create a new instance of an endpoint implementation
    zapi1 = ZeroRpcApiEndpoint("0.0.0.0", 4242)
    # connect data centers to this endpoint
    zapi1.connectDatacenter(dc1)
    zapi1.connectDatacenter(dc2)
    # run API endpoint server (in another thread, don't block)
    zapi1.start()

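    # An external orchestrator could now attach to this endpoint with a
    # standard zerorpc client (the RPC method names exposed by the endpoint
    # are not shown in this script, so the call itself is omitted):
    # import zerorpc
    # client = zerorpc.Client()
    # client.connect("tcp://127.0.0.1:4242")
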
    """
    5.1. For our example, we create a second endpoint to illustrate that
         this is supported by our design. This feature allows us to have
         one API endpoint for each data center. This makes the emulation
         environment more realistic because you can easily create one
         OpenStack-like REST API endpoint for *each* data center.
         This will look like a real-world multi-PoP/data center deployment
         from the perspective of an orchestrator.
    """
    zapi2 = ZeroRpcApiEndpoint("0.0.0.0", 4343)
    zapi2.connectDatacenter(dc3)
    zapi2.connectDatacenter(dc4)
    zapi2.start()

    """
    6. Finally we are done and can start our network (the emulator).
       We can also enter the Mininet CLI to interact with our compute
       resources interactively (just like in default Mininet).
       But we can also implement fully automated experiments that
       can be executed again and again.
    """
    net.start()
    net.CLI()
    # when the user types exit in the CLI, we stop the emulator
    net.stop()

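# For a fully automated experiment (step 6 above), the net.CLI() call could
# be replaced by direct calls on the data center objects, e.g. starting a
# compute instance programmatically. The method name below is only an
# assumption for illustration; it is not taken from this script:
# dc1.startCompute("vnf1")
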
def main():
    create_topology1()


if __name__ == '__main__':
    main()