merge master
author    stevenvanrossem <steven.vanrossem@intec.ugent.be>
Wed, 11 May 2016 21:03:35 +0000 (23:03 +0200)
committer stevenvanrossem <steven.vanrossem@intec.ugent.be>
Wed, 11 May 2016 21:03:35 +0000 (23:03 +0200)
22 files changed:
Dockerfile [new symlink]
README.md
ansible/install.yml
setup.py
src/emuvim/api/sonata/dummygatekeeper.py
src/emuvim/dcemulator/net.py
src/emuvim/dcemulator/node.py
src/emuvim/test/__main__.py [deleted file]
src/emuvim/test/base.py
src/emuvim/test/integrationtests/__init__.py [new file with mode: 0644]
src/emuvim/test/runner.py [deleted file]
src/emuvim/test/test_api_zerorpc.py [deleted file]
src/emuvim/test/test_emulator.py [deleted file]
src/emuvim/test/test_resourcemodel.py [deleted file]
src/emuvim/test/test_sonata_dummy_gatekeeper.py [deleted file]
src/emuvim/test/unittests/__init__.py [new file with mode: 0644]
src/emuvim/test/unittests/test_emulator.py [new file with mode: 0755]
src/emuvim/test/unittests/test_resourcemodel.py [new file with mode: 0644]
src/emuvim/test/unittests/test_sonata_dummy_gatekeeper.py [new file with mode: 0644]
utils/ci/build_01_unit_tests.sh
utils/docker/Dockerfile
utils/docker/entrypoint.sh

diff --git a/Dockerfile b/Dockerfile
new file mode 120000 (symlink)
index 0000000..a0f6cc3
--- /dev/null
@@ -0,0 +1 @@
+utils/docker/Dockerfile
\ No newline at end of file
index 1042a7f..9b0038a 100755 (executable)
--- a/README.md
+++ b/README.md
@@ -2,29 +2,35 @@
 
 # Distributed Cloud Emulator
 
-## Lead Developers
+### Lead Developers
 The following lead developers are responsible for this repository and have admin rights. They can, for example, merge pull requests.
 
-
 * Manuel Peuster (mpeuster)
 * Steven Van Rossem (stevenvanrossem)
 
+### Environment
+* Python 2.7
+* Latest [Containernet](https://github.com/mpeuster/containernet) installed on the system
+
 ### Dependencies
-* needs the latest [Dockernet](https://github.com/mpeuster/dockernet) to be installed on the system
-* pyaml
-* zerorpc
-* tabulate
-* argparse
-* networkx
-* six>=1.9
-* ryu
-* oslo.config
-* pytest
-* pytest-runner
-* Flask
-* flask_restful
-* requests 
-* docker-py
+* pyaml (public domain)
+* zerorpc (MIT)
+* tabulate (public domain)
+* argparse (Python software foundation license)
+* networkx (BSD)
+* six>=1.9 (MIT)
+* ryu (Apache 2.0)
+* oslo.config (Apache 2.0)
+* pytest (MIT)
+* pytest-runner (MIT)
+* Flask (BSD)
+* flask_restful (BSD)
+* requests  (Apache 2.0)
+* docker-py (Apache 2.0)
+* paramiko (LGPL)
+
+### 3rd-party code used
+* (none)
 
 
 ### Project structure
@@ -32,7 +38,7 @@ The following lead developers are responsible for this repository and have admin
 * **src/emuvim/** all emulator code 
  * **api/** Data center API endpoint implementations (zerorpc, OpenStack REST, ...)
  * **cli/** CLI client to interact with a running emulator
- * **dcemulator/** Dockernet wrapper that introduces the notion of data centers and API endpoints
+ * **dcemulator/** Containernet wrapper that introduces the notion of data centers and API endpoints
  * **examples/** Example topology scripts
  * **test/** Unit tests
 * **ansible/** Ansible install scripts
@@ -46,10 +52,10 @@ Automatic installation is provide through Ansible playbooks.
 * `sudo vim /etc/ansible/hosts`
 * Add: `localhost ansible_connection=local`
 
-#### 1. Dockernet
+#### 1. Containernet
 * `cd`
-* `git clone -b dockernet-sonata https://github.com/mpeuster/dockernet.git`
-* `cd ~/dockernet/ansible`
+* `git clone https://github.com/mpeuster/containernet.git`
+* `cd ~/containernet/ansible`
 * `sudo ansible-playbook install.yml`
 * Wait (and have a coffee) ...
 
@@ -77,14 +83,13 @@ In the `~/son-emu` directory:
  * `son-emu-cli compute start -d datacenter1 -n vnf2`
  * `son-emu-cli compute list`
 * First terminal:
- * `dockernet> vnf1 ping -c 2 vnf2`
+ * `containernet> vnf1 ping -c 2 vnf2`
 * Second terminal:
  *  `son-emu-cli monitor get_rate -vnf vnf1`
 
 ### Run Unit Tests
 * `cd ~/son-emu`
-* `sudo py.test -v src/emuvim` (equivalent to `python setup.py test -v --addopts 'src/emuvim'` but with direct access to the commandline arguments)
+* `sudo py.test -v src/emuvim/test/unittests`
 
 ### CLI
 * [Full CLI command documentation](https://github.com/sonata-nfv/son-emu/wiki/CLI-Command-Overview)
-
index b31615e..7fed451 100755 (executable)
    - name: install libzmq-dev
      apt: pkg=libzmq-dev state=installed
 
+   - name: install libffi-dev
+     apt: pkg=libffi-dev state=installed
+
+   - name: install libssl-dev
+     apt: pkg=libssl-dev state=installed
+
    - name: install pip
      apt: pkg=python-pip state=installed
 
      pip: name=requests state=latest
 
    - name: install docker-py
-     pip: name=docker-py state=latest
+     pip: name=docker-py version=1.7.1
 
    - name: install prometheus_client
      pip: name=prometheus_client state=latest
 
+   - name: install paramiko
+     pip: name=paramiko state=latest
+
+   - name: install latest urllib3 (fix error urllib3.connection.match_hostname = match_hostname)
+     pip: name=urllib3 state=latest
+
 
 
index 367c4fb..3657816 100644 (file)
--- a/setup.py
+++ b/setup.py
@@ -2,7 +2,7 @@ from setuptools import setup, find_packages
 
 setup(name='emuvim',
       version='0.0.1',
-      license='TODO',
+      license='Apache 2.0',
       description='emuvim is a VIM for the SONATA platform',
       url='http://github.com/sonata-emu',
       author_email='sonata-dev@sonata-nfv.eu',
@@ -21,9 +21,11 @@ setup(name='emuvim',
           'pytest',
           'Flask',
           'flask_restful',
-          'docker-py',
+          'docker-py==1.7.1',
           'requests',
-         'prometheus_client'
+          'prometheus_client',
+          'paramiko',
+          'urllib3'
       ],
       zip_safe=False,
       entry_points={
index 2047ff8..8423a31 100755 (executable)
@@ -134,8 +134,9 @@ class Service(object):
             src_node, src_port = link["connection_points_reference"][0].split(":")
             dst_node, dst_port = link["connection_points_reference"][1].split(":")
 
-            network = self.vnfds[src_node].get("dc").net  # there should be a cleaner way to find the DCNetwork
-            network.setChain(src_node, dst_node, vnf_src_interface=src_port, vnf_dst_interface=dst_port)
+            if src_node in self.vnfds:
+                network = self.vnfds[src_node].get("dc").net  # there should be a cleaner way to find the DCNetwork
+                network.setChain(src_node, dst_node, vnf_src_interface=src_port, vnf_dst_interface=dst_port)
 
         LOG.info("Service started. Instance id: %r" % instance_uuid)
         return instance_uuid
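The dummygatekeeper hunk above wraps `setChain` in a membership check so that links referencing a VNFD that was never deployed are silently skipped instead of raising a `KeyError` on `self.vnfds[src_node]`. The guard can be sketched in isolation like this (function and callback names are illustrative, not son-emu's actual API):

```python
def set_chains(vnfds, links, set_chain):
    """Wire up service links, skipping any whose source VNF is unknown.

    vnfds: dict mapping node name -> VNFD record
    links: list of dicts with a "connection_points_reference" pair
    set_chain: callback(src_node, dst_node, src_port, dst_port)
    Returns the list of skipped source nodes.
    """
    skipped = []
    for link in links:
        src_node, src_port = link["connection_points_reference"][0].split(":")
        dst_node, dst_port = link["connection_points_reference"][1].split(":")
        # same guard as the patch: only chain nodes we actually know about
        if src_node in vnfds:
            set_chain(src_node, dst_node, src_port, dst_port)
        else:
            skipped.append(src_node)
    return skipped
```

A descriptor link pointing at a missing node now degrades to a skipped entry rather than aborting the whole service start.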
index 9ca75f7..115b9e5 100755 (executable)
@@ -12,7 +12,7 @@ import re
 import urllib2
 from functools import partial
 
-from mininet.net import Dockernet
+from mininet.net import Containernet
 from mininet.node import Controller, DefaultController, OVSSwitch, OVSKernelSwitch, Docker, RemoteController
 from mininet.cli import CLI
 from mininet.link import TCLink
@@ -21,9 +21,9 @@ from emuvim.dcemulator.monitoring import DCNetworkMonitor
 from emuvim.dcemulator.node import Datacenter, EmulatorCompute
 from emuvim.dcemulator.resourcemodel import ResourceModelRegistrar
 
-class DCNetwork(Dockernet):
+class DCNetwork(Containernet):
     """
-    Wraps the original Mininet/Dockernet class and provides
+    Wraps the original Mininet/Containernet class and provides
     methods to add data centers, switches, etc.
 
     This class is used by topology definition scripts.
@@ -35,7 +35,7 @@ class DCNetwork(Dockernet):
                  dc_emulation_max_mem=512,  # emulation max mem in MB
                  **kwargs):
         """
-        Create an extended version of a Dockernet network
+        Create an extended version of a Containernet network
         :param dc_emulation_max_cpu: max. CPU time used by containers in data centers
         :param kwargs: path through for Mininet parameters
         :return:
@@ -43,9 +43,10 @@ class DCNetwork(Dockernet):
         self.dcs = {}
 
         # call original Docker.__init__ and setup default controller
-        Dockernet.__init__(
+        Containernet.__init__(
             self, switch=OVSKernelSwitch, controller=controller, **kwargs)
 
+
         # Ryu management
         self.ryu_process = None
         if controller == RemoteController:
@@ -122,11 +123,11 @@ class DCNetwork(Dockernet):
                 params["params2"]["ip"] = self.getNextIp()
         # ensure that we allow TCLinks between data centers
         # TODO this is not optimal, we use cls=Link for containers and TCLink for data centers
-        # see Dockernet issue: https://github.com/mpeuster/dockernet/issues/3
+        # see Containernet issue: https://github.com/mpeuster/containernet/issues/3
         if "cls" not in params:
             params["cls"] = TCLink
 
-        link = Dockernet.addLink(self, node1, node2, **params)
+        link = Containernet.addLink(self, node1, node2, **params)
 
         # try to give container interfaces a default id
         node1_port_id = node1.ports[link.intf1]
@@ -144,7 +145,7 @@ class DCNetwork(Dockernet):
 
         # add edge and assigned port number to graph in both directions between node1 and node2
         # port_id: id given in descriptor (if available, otherwise same as port)
-        # port: portnumber assigned by Dockernet
+        # port: portnumber assigned by Containernet
 
         attr_dict = {}
         # possible weight metrics allowed by TClink class:
@@ -181,14 +182,14 @@ class DCNetwork(Dockernet):
         Wrapper for addDocker method to use custom container class.
         """
         self.DCNetwork_graph.add_node(label)
-        return Dockernet.addDocker(self, label, cls=EmulatorCompute, **params)
+        return Containernet.addDocker(self, label, cls=EmulatorCompute, **params)
 
     def removeDocker( self, label, **params ):
         """
         Wrapper for removeDocker method to update graph.
         """
         self.DCNetwork_graph.remove_node(label)
-        return Dockernet.removeDocker(self, label, **params)
+        return Containernet.removeDocker(self, label, **params)
 
     def addSwitch( self, name, add_to_graph=True, **params ):
         """
@@ -196,7 +197,7 @@ class DCNetwork(Dockernet):
         """
         if add_to_graph:
             self.DCNetwork_graph.add_node(name)
-        return Dockernet.addSwitch(self, name, protocols='OpenFlow10,OpenFlow12,OpenFlow13', **params)
+        return Containernet.addSwitch(self, name, protocols='OpenFlow10,OpenFlow12,OpenFlow13', **params)
 
     def getAllContainers(self):
         """
@@ -211,7 +212,7 @@ class DCNetwork(Dockernet):
         # start
         for dc in self.dcs.itervalues():
             dc.start()
-        Dockernet.start(self)
+        Containernet.start(self)
 
     def stop(self):
 
@@ -220,7 +221,7 @@ class DCNetwork(Dockernet):
             self.monitor_agent.stop()
 
         # stop emulator net
-        Dockernet.stop(self)
+        Containernet.stop(self)
 
         # stop Ryu controller
         self.stopRyu()
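The net.py diff renames the base class from `Dockernet` to `Containernet` but keeps the same wrapper pattern throughout: each overridden method (`addDocker`, `removeDocker`, `addSwitch`, ...) first updates the `DCNetwork_graph` bookkeeping and then delegates to the parent. A standalone sketch of that pattern, with a stand-in `Base` class and a plain set instead of the real networkx graph:

```python
class Base(object):
    """Stand-in for the Containernet parent class (illustrative only)."""
    def addDocker(self, label, **params):
        return "container:%s" % label

    def removeDocker(self, label, **params):
        return True


class DCNet(Base):
    """Wrapper pattern from the patch: bookkeeping first, then delegate."""
    def __init__(self):
        self.graph_nodes = set()  # simplified stand-in for DCNetwork_graph

    def addDocker(self, label, **params):
        self.graph_nodes.add(label)       # mirror the node into the topology graph
        return Base.addDocker(self, label, **params)

    def removeDocker(self, label, **params):
        self.graph_nodes.discard(label)   # keep the graph in sync on removal
        return Base.removeDocker(self, label, **params)
```

Because every mutation goes through the wrapper, the graph and the emulated network can never drift apart, which is what the chaining and monitoring code relies on.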
index 3258a9f..f9328e3 100755 (executable)
@@ -19,7 +19,7 @@ DCDPID_BASE = 1000  # start of switch dpid's used for data center switches
 class EmulatorCompute(Docker):
     """
     Emulator specific compute node class.
-    Inherits from Dockernet's Docker host class.
+    Inherits from Containernet's Docker host class.
     Represents a single container connected to a (logical)
     data center.
     We can add emulator specific helper functions to it.
@@ -168,7 +168,7 @@ class Datacenter(object):
         # if no --net option is given, network = [{}], so 1 empty dict in the list
         # this results in 1 default interface with a default ip address
         for nw in network:
-            # TODO we cannot use TCLink here (see: https://github.com/mpeuster/dockernet/issues/3)
+            # TODO we cannot use TCLink here (see: https://github.com/mpeuster/containernet/issues/3)
             self.net.addLink(d, self.switch, params1=nw, cls=Link, intfName1=nw.get('id'))
         # do bookkeeping
         self.containers[name] = d
diff --git a/src/emuvim/test/__main__.py b/src/emuvim/test/__main__.py
deleted file mode 100755 (executable)
index f7fa66d..0000000
+++ /dev/null
@@ -1,7 +0,0 @@
-import runner
-import os
-
-
-if __name__ == '__main__':
-    thisdir = os.path.dirname( os.path.realpath( __file__ ) )
-    runner.main(thisdir)
index 9efb4ab..2021355 100644 (file)
@@ -74,11 +74,11 @@ class SimpleTestTopology(unittest.TestCase):
                 base_url='unix://var/run/docker.sock')
         return self.docker_cli
 
-    def getDockernetContainers(self):
+    def getContainernetContainers(self):
         """
-        List the containers managed by dockernet
+        List the containers managed by containernet
         """
-        return self.getDockerCli().containers(filters={"label": "com.dockernet"})
+        return self.getDockerCli().containers(filters={"label": "com.containernet"})
 
     @staticmethod
     def setUp():
@@ -90,7 +90,7 @@ class SimpleTestTopology(unittest.TestCase):
         # make sure that all pending docker containers are killed
         with open(os.devnull, 'w') as devnull:
             subprocess.call(
-                "sudo docker rm -f $(sudo docker ps --filter 'label=com.dockernet' -a -q)",
+                "sudo docker rm -f $(sudo docker ps --filter 'label=com.containernet' -a -q)",
                 stdout=devnull,
                 stderr=devnull,
                 shell=True)
\ No newline at end of file
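The base.py hunk switches the docker-py label filter from `com.dockernet` to `com.containernet`. The filtering itself can be sketched without a running Docker daemon; the dict shape below loosely mirrors what docker-py's `Client.containers()` returns, but the data is made up for illustration:

```python
def filter_by_label(containers, label):
    """Return the containers carrying the given label key (daemon-free sketch)."""
    return [c for c in containers if label in c.get("Labels", {})]

containers = [
    {"Id": "aaa", "Labels": {"com.containernet": ""}},  # managed by Containernet
    {"Id": "bbb", "Labels": {"com.dockernet": ""}},     # stale: old label scheme
    {"Id": "ccc", "Labels": {}},                        # unrelated container
]
managed = filter_by_label(containers, "com.containernet")
```

Only `"aaa"` survives the filter, which is why the rename matters: after the Containernet migration, containers labelled with the old `com.dockernet` key would be invisible to both the test helper and the `docker rm -f` cleanup call in `setUp`.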
diff --git a/src/emuvim/test/integrationtests/__init__.py b/src/emuvim/test/integrationtests/__init__.py
new file mode 100644 (file)
index 0000000..e69de29
diff --git a/src/emuvim/test/runner.py b/src/emuvim/test/runner.py
deleted file mode 100755 (executable)
index 469a99e..0000000
+++ /dev/null
@@ -1,52 +0,0 @@
-#!/usr/bin/env python
-
-"""
-Run all tests
- -v : verbose output
- -e : emulator test only (no API tests)
- -a : API tests only
-"""
-
-from unittest import defaultTestLoader, TextTestRunner, TestSuite
-import os
-import sys
-from mininet.util import ensureRoot
-from mininet.clean import cleanup
-from mininet.log import setLogLevel
-
-
-def runTests( testDir, verbosity=1, emuonly=False, apionly=False ):
-    "discover and run all tests in testDir"
-    # ensure inport paths work
-    sys.path.append("%s/.." % testDir)
-    # ensure root and cleanup before starting tests
-    ensureRoot()
-    cleanup()
-    # discover all tests in testDir
-    testSuite = defaultTestLoader.discover( testDir )
-    if emuonly:
-        testSuiteFiltered = [s for s in testSuite if "Emulator" in str(s)]
-        testSuite = TestSuite()
-        testSuite.addTests(testSuiteFiltered)
-    if apionly:
-        testSuiteFiltered = [s for s in testSuite if "Api" in str(s)]
-        testSuite = TestSuite()
-        testSuite.addTests(testSuiteFiltered)
-
-    # run tests
-    TextTestRunner( verbosity=verbosity ).run( testSuite )
-
-
-def main(thisdir):
-    setLogLevel( 'warning' )
-    # get the directory containing example tests
-    vlevel = 2 if '-v' in sys.argv else 1
-    emuonly = ('-e' in sys.argv)
-    apionly = ('-a' in sys.argv)
-    runTests(
-        testDir=thisdir, verbosity=vlevel, emuonly=emuonly, apionly=apionly)
-
-
-if __name__ == '__main__':
-    thisdir = os.path.dirname( os.path.realpath( __file__ ) )
-    main(thisdir)
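The deleted runner.py discovered tests with `defaultTestLoader` and then filtered the suite by a substring match on each sub-suite's repr (`"Emulator" in str(s)` / `"Api" in str(s)`); with the move to plain `py.test` on `unittests/`, that custom selection is no longer needed. The filtering trick can be shown on an in-memory suite (class names here are invented for the sketch):

```python
import unittest


class EmulatorSmokeTest(unittest.TestCase):
    def test_ok(self):
        self.assertTrue(True)


class ApiSmokeTest(unittest.TestCase):
    def test_ok(self):
        self.assertTrue(True)


def filtered_suite(keyword):
    """Mimic runner.py: keep only sub-suites whose repr mentions keyword."""
    loader = unittest.defaultTestLoader
    all_suites = [loader.loadTestsFromTestCase(EmulatorSmokeTest),
                  loader.loadTestsFromTestCase(ApiSmokeTest)]
    suite = unittest.TestSuite()
    # a TestSuite's repr lists its test class names, so substring match works
    suite.addTests(s for s in all_suites if keyword in str(s))
    return suite
```

Matching on `str(s)` is fragile compared to pytest's `-k` expression filtering, which is one reason dropping the custom runner in favor of `sudo py.test -v src/emuvim/test/unittests` is a simplification rather than a loss.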
diff --git a/src/emuvim/test/test_api_zerorpc.py b/src/emuvim/test/test_api_zerorpc.py
deleted file mode 100755 (executable)
index 2830872..0000000
+++ /dev/null
@@ -1 +0,0 @@
-#TODO we'll need this at some time. But I'am lazy. A good REST API seems to be more important.
diff --git a/src/emuvim/test/test_emulator.py b/src/emuvim/test/test_emulator.py
deleted file mode 100755 (executable)
index 0c387bf..0000000
+++ /dev/null
@@ -1,319 +0,0 @@
-"""
-Test suite to automatically test emulator functionalities.
-Directly interacts with the emulator through the Mininet-like
-Python API.
-
-Does not test API endpoints. This is done in separated test suites.
-"""
-
-import time
-import unittest
-from emuvim.dcemulator.node import EmulatorCompute
-from emuvim.test.base import SimpleTestTopology
-from mininet.node import RemoteController
-
-
-#@unittest.skip("disabled topology tests for development")
-class testEmulatorTopology( SimpleTestTopology ):
-    """
-    Tests to check the topology API of the emulator.
-    """
-
-    def testSingleDatacenter(self):
-        """
-        Create a single data center and add check if its switch is up
-        by using manually added hosts. Tests especially the
-        data center specific addLink method.
-        """
-        # create network
-        self.createNet(nswitches=0, ndatacenter=1, nhosts=2, ndockers=0)
-        # setup links
-        self.net.addLink(self.dc[0], self.h[0])
-        self.net.addLink(self.h[1], self.dc[0])
-        # start Mininet network
-        self.startNet()
-        # check number of running nodes
-        self.assertTrue(len(self.getDockernetContainers()) == 0)
-        self.assertTrue(len(self.net.hosts) == 2)
-        self.assertTrue(len(self.net.switches) == 1)
-        # check connectivity by using ping
-        self.assertTrue(self.net.ping([self.h[0], self.h[1]]) <= 0.0)
-        # stop Mininet network
-        self.stopNet()
-
-    #@unittest.skip("disabled to test if CI fails because this is the first test.")
-    def testMultipleDatacenterDirect(self):
-        """
-        Create a two data centers and interconnect them.
-        """
-        # create network
-        self.createNet(nswitches=0, ndatacenter=2, nhosts=2, ndockers=0)
-        # setup links
-        self.net.addLink(self.dc[0], self.h[0])
-        self.net.addLink(self.h[1], self.dc[1])
-        self.net.addLink(self.dc[0], self.dc[1])
-        # start Mininet network
-        self.startNet()
-        # check number of running nodes
-        self.assertTrue(len(self.getDockernetContainers()) == 0)
-        self.assertTrue(len(self.net.hosts) == 2)
-        self.assertTrue(len(self.net.switches) == 2)
-        # check connectivity by using ping
-        self.assertTrue(self.net.ping([self.h[0], self.h[1]]) <= 0.0)
-        # stop Mininet network
-        self.stopNet()
-
-    def testMultipleDatacenterWithIntermediateSwitches(self):
-        """
-        Create a two data centers and interconnect them with additional
-        switches between them.
-        """
-        # create network
-        self.createNet(
-            nswitches=3, ndatacenter=2, nhosts=2, ndockers=0,
-            autolinkswitches=True)
-        # setup links
-        self.net.addLink(self.dc[0], self.h[0])
-        self.net.addLink(self.h[1], self.dc[1])
-        self.net.addLink(self.dc[0], self.s[0])
-        self.net.addLink(self.s[2], self.dc[1])
-        # start Mininet network
-        self.startNet()
-        # check number of running nodes
-        self.assertTrue(len(self.getDockernetContainers()) == 0)
-        self.assertTrue(len(self.net.hosts) == 2)
-        self.assertTrue(len(self.net.switches) == 5)
-        # check connectivity by using ping
-        self.assertTrue(self.net.ping([self.h[0], self.h[1]]) <= 0.0)
-        # stop Mininet network
-        self.stopNet()
-
-class testEmulatorNetworking( SimpleTestTopology ):
-
-    def testSDNChaining(self):
-        """
-        Create a two data centers and interconnect them with additional
-        switches between them.
-        Uses Ryu SDN controller.
-        Connect the Docker hosts to different datacenters and setup the links between.
-        """
-        # create network
-        self.createNet(
-            nswitches=3, ndatacenter=2, nhosts=0, ndockers=0,
-            autolinkswitches=True,
-            controller=RemoteController,
-            enable_learning=False)
-        # setup links
-        self.net.addLink(self.dc[0], self.s[0])
-        self.net.addLink(self.s[2], self.dc[1])
-        # start Mininet network
-        self.startNet()
-
-        # add compute resources
-        vnf1 = self.dc[0].startCompute("vnf1", network=[{'id':'intf1', 'ip':'10.0.10.1/24'}])
-        vnf2 = self.dc[1].startCompute("vnf2", network=[{'id':'intf2', 'ip':'10.0.10.2/24'}])
-        # check number of running nodes
-        self.assertTrue(len(self.getDockernetContainers()) == 2)
-        self.assertTrue(len(self.net.hosts) == 2)
-        self.assertTrue(len(self.net.switches) == 5)
-        # check status
-        # check get status
-        s1 = self.dc[0].containers.get("vnf1").getStatus()
-        self.assertTrue(s1["name"] == "vnf1")
-        self.assertTrue(s1["state"]["Running"])
-        self.assertTrue(s1["network"][0]['intf_name'] == 'intf1')
-        self.assertTrue(s1["network"][0]['ip'] == '10.0.10.1')
-
-        s2 = self.dc[1].containers.get("vnf2").getStatus()
-        self.assertTrue(s2["name"] == "vnf2")
-        self.assertTrue(s2["state"]["Running"])
-        self.assertTrue(s2["network"][0]['intf_name'] == 'intf2')
-        self.assertTrue(s2["network"][0]['ip'] == '10.0.10.2')
-
-        # setup links
-        self.net.setChain('vnf1', 'vnf2', 'intf1', 'intf2', bidirectional=True, cmd='add-flow')
-        # check connectivity by using ping
-        self.assertTrue(self.net.ping([vnf1, vnf2]) <= 0.0)
-        # stop Mininet network
-        self.stopNet()
-
-#@unittest.skip("disabled compute tests for development")
-class testEmulatorCompute( SimpleTestTopology ):
-    """
-    Tests to check the emulator's API to add and remove
-    compute resources at runtime.
-    """
-
-    def testAddSingleComputeSingleDC(self):
-        """
-        Adds a single compute instance to
-        a single DC and checks its connectivity with a
-        manually added host.
-        """
-        # create network
-        self.createNet(nswitches=0, ndatacenter=1, nhosts=1, ndockers=0)
-        # setup links
-        self.net.addLink(self.dc[0], self.h[0])
-        # start Mininet network
-        self.startNet()
-        # add compute resources
-        vnf1 = self.dc[0].startCompute("vnf1")
-        # check number of running nodes
-        self.assertTrue(len(self.getDockernetContainers()) == 1)
-        self.assertTrue(len(self.net.hosts) == 2)
-        self.assertTrue(len(self.net.switches) == 1)
-        # check compute list result
-        self.assertTrue(len(self.dc[0].listCompute()) == 1)
-        self.assertTrue(isinstance(self.dc[0].listCompute()[0], EmulatorCompute))
-        self.assertTrue(self.dc[0].listCompute()[0].name == "vnf1")
-        # check connectivity by using ping
-        self.assertTrue(self.net.ping([self.h[0], vnf1]) <= 0.0)
-        # stop Mininet network
-        self.stopNet()
-
-    def testRemoveSingleComputeSingleDC(self):
-        """
-        Test stop method for compute instances.
-        Check that the instance is really removed.
-        """
-        # create network
-        self.createNet(nswitches=0, ndatacenter=1, nhosts=1, ndockers=0)
-        # setup links
-        self.net.addLink(self.dc[0], self.h[0])
-        # start Mininet network
-        self.startNet()
-        # add compute resources
-        vnf1 = self.dc[0].startCompute("vnf1")
-        # check number of running nodes
-        self.assertTrue(len(self.getDockernetContainers()) == 1)
-        self.assertTrue(len(self.net.hosts) == 2)
-        self.assertTrue(len(self.net.switches) == 1)
-        # check compute list result
-        self.assertTrue(len(self.dc[0].listCompute()) == 1)
-        # check connectivity by using ping
-        self.assertTrue(self.net.ping([self.h[0], vnf1]) <= 0.0)
-        # remove compute resources
-        self.dc[0].stopCompute("vnf1")
-        # check number of running nodes
-        self.assertTrue(len(self.getDockernetContainers()) == 0)
-        self.assertTrue(len(self.net.hosts) == 1)
-        self.assertTrue(len(self.net.switches) == 1)
-        # check compute list result
-        self.assertTrue(len(self.dc[0].listCompute()) == 0)
-        # stop Mininet network
-        self.stopNet()
-
-    def testGetStatusSingleComputeSingleDC(self):
-        """
-        Check if the getStatus functionality of EmulatorCompute
-        objects works well.
-        """
-        # create network
-        self.createNet(nswitches=0, ndatacenter=1, nhosts=1, ndockers=0)
-        # setup links
-        self.net.addLink(self.dc[0], self.h[0])
-        # start Mininet network
-        self.startNet()
-        # add compute resources
-        vnf1 = self.dc[0].startCompute("vnf1")
-        # check number of running nodes
-        self.assertTrue(len(self.getDockernetContainers()) == 1)
-        self.assertTrue(len(self.net.hosts) == 2)
-        self.assertTrue(len(self.net.switches) == 1)
-        # check compute list result
-        self.assertTrue(len(self.dc[0].listCompute()) == 1)
-        self.assertTrue(isinstance(self.dc[0].listCompute()[0], EmulatorCompute))
-        self.assertTrue(self.dc[0].listCompute()[0].name == "vnf1")
-        # check connectivity by using ping
-        self.assertTrue(self.net.ping([self.h[0], vnf1]) <= 0.0)
-        # check get status
-        s = self.dc[0].containers.get("vnf1").getStatus()
-        self.assertTrue(s["name"] == "vnf1")
-        self.assertTrue(s["state"]["Running"])
-        # stop Mininet network
-        self.stopNet()
-
-    def testConnectivityMultiDC(self):
-        """
-        Test if compute instances started in different data centers
-        are able to talk to each other.
-        """
-        # create network
-        self.createNet(
-            nswitches=3, ndatacenter=2, nhosts=0, ndockers=0,
-            autolinkswitches=True)
-        # setup links
-        self.net.addLink(self.dc[0], self.s[0])
-        self.net.addLink(self.dc[1], self.s[2])
-        # start Mininet network
-        self.startNet()
-        # add compute resources
-        vnf1 = self.dc[0].startCompute("vnf1")
-        vnf2 = self.dc[1].startCompute("vnf2")
-        # check number of running nodes
-        self.assertTrue(len(self.getDockernetContainers()) == 2)
-        self.assertTrue(len(self.net.hosts) == 2)
-        self.assertTrue(len(self.net.switches) == 5)
-        # check compute list result
-        self.assertTrue(len(self.dc[0].listCompute()) == 1)
-        self.assertTrue(len(self.dc[1].listCompute()) == 1)
-        # check connectivity by using ping
-        self.assertTrue(self.net.ping([vnf1, vnf2]) <= 0.0)
-        # stop Mininet network
-        self.stopNet()
-
-    def testInterleavedAddRemoveMultiDC(self):
-        """
-        Test multiple, interleaved add and remove operations and ensure
-        that always all expected compute instances are reachable.
-        """
-                # create network
-        self.createNet(
-            nswitches=3, ndatacenter=2, nhosts=0, ndockers=0,
-            autolinkswitches=True)
-        # setup links
-        self.net.addLink(self.dc[0], self.s[0])
-        self.net.addLink(self.dc[1], self.s[2])
-        # start Mininet network
-        self.startNet()
-        # add compute resources
-        vnf1 = self.dc[0].startCompute("vnf1")
-        vnf2 = self.dc[1].startCompute("vnf2")
-        # check number of running nodes
-        self.assertTrue(len(self.getDockernetContainers()) == 2)
-        self.assertTrue(len(self.net.hosts) == 2)
-        self.assertTrue(len(self.net.switches) == 5)
-        # check compute list result
-        self.assertTrue(len(self.dc[0].listCompute()) == 1)
-        self.assertTrue(len(self.dc[1].listCompute()) == 1)
-        # check connectivity by using ping
-        self.assertTrue(self.net.ping([vnf1, vnf2]) <= 0.0)
-        # remove compute resources
-        self.dc[0].stopCompute("vnf1")
-        # check number of running nodes
-        self.assertTrue(len(self.getDockernetContainers()) == 1)
-        self.assertTrue(len(self.net.hosts) == 1)
-        self.assertTrue(len(self.net.switches) == 5)
-        # check compute list result
-        self.assertTrue(len(self.dc[0].listCompute()) == 0)
-        self.assertTrue(len(self.dc[1].listCompute()) == 1)
-        # add compute resources
-        vnf3 = self.dc[0].startCompute("vnf3")
-        vnf4 = self.dc[0].startCompute("vnf4")
-        # check compute list result
-        self.assertTrue(len(self.dc[0].listCompute()) == 2)
-        self.assertTrue(len(self.dc[1].listCompute()) == 1)
-        self.assertTrue(self.net.ping([vnf3, vnf2]) <= 0.0)
-        self.assertTrue(self.net.ping([vnf4, vnf2]) <= 0.0)
-        # remove compute resources
-        self.dc[0].stopCompute("vnf3")
-        self.dc[0].stopCompute("vnf4")
-        self.dc[1].stopCompute("vnf2")
-        # check compute list result
-        self.assertTrue(len(self.dc[0].listCompute()) == 0)
-        self.assertTrue(len(self.dc[1].listCompute()) == 0)
-        # stop Mininet network
-        self.stopNet()
-
-if __name__ == '__main__':
-    unittest.main()
diff --git a/src/emuvim/test/test_resourcemodel.py b/src/emuvim/test/test_resourcemodel.py
deleted file mode 100644 (file)
index 1817a25..0000000
+++ /dev/null
@@ -1,339 +0,0 @@
-import time
-import os
-from emuvim.test.base import SimpleTestTopology
-from emuvim.dcemulator.resourcemodel import BaseResourceModel, ResourceFlavor, NotEnoughResourcesAvailable, ResourceModelRegistrar
-from emuvim.dcemulator.resourcemodel.upb.simple import UpbSimpleCloudDcRM, UpbOverprovisioningCloudDcRM, UpbDummyRM
-
-
-
-class testResourceModel(SimpleTestTopology):
-    """
-    Test the general resource model API and functionality.
-    """
-
-    def testBaseResourceModelApi(self):
-        """
-        Tast bare API without real resource madel.
-        :return:
-        """
-        r = BaseResourceModel()
-        # check if default flavors are there
-        self.assertTrue(len(r._flavors) == 5)
-        # check addFlavor functionality
-        f = ResourceFlavor("test", {"testmetric": 42})
-        r.addFlavour(f)
-        self.assertTrue("test" in r._flavors)
-        self.assertTrue(r._flavors.get("test").get("testmetric") == 42)
-
-    def testAddRmToDc(self):
-        """
-        Test is allocate/free is called when a RM is added to a DC.
-        :return:
-        """
-        # create network
-        self.createNet(nswitches=0, ndatacenter=1, nhosts=2, ndockers=0)
-        # setup links
-        self.net.addLink(self.dc[0], self.h[0])
-        self.net.addLink(self.h[1], self.dc[0])
-        # add resource model
-        r = BaseResourceModel()
-        self.dc[0].assignResourceModel(r)
-        # start Mininet network
-        self.startNet()
-        # check number of running nodes
-        self.assertTrue(len(self.getDockernetContainers()) == 0)
-        self.assertTrue(len(self.net.hosts) == 2)
-        self.assertTrue(len(self.net.switches) == 1)
-        # check resource model and resource model registrar
-        self.assertTrue(self.dc[0]._resource_model is not None)
-        self.assertTrue(len(self.net.rm_registrar.resource_models) == 1)
-
-        # check if alloc was called during startCompute
-        self.assertTrue(len(r._allocated_compute_instances) == 0)
-        self.dc[0].startCompute("tc1")
-        time.sleep(1)
-        self.assertTrue(len(r._allocated_compute_instances) == 1)
-        # check if free was called during stopCompute
-        self.dc[0].stopCompute("tc1")
-        self.assertTrue(len(r._allocated_compute_instances) == 0)
-        # check connectivity by using ping
-        self.assertTrue(self.net.ping([self.h[0], self.h[1]]) <= 0.0)
-        # stop Mininet network
-        self.stopNet()
-
-
-def createDummyContainerObject(name, flavor):
-
-    class DummyContainer(object):
-
-        def __init__(self):
-            self.cpu_period = -1
-            self.cpu_quota = -1
-            self.mem_limit = -1
-            self.memswap_limit = -1
-
-        def updateCpuLimit(self, cpu_period, cpu_quota):
-            self.cpu_period = cpu_period
-            self.cpu_quota = cpu_quota
-
-        def updateMemoryLimit(self, mem_limit):
-            self.mem_limit = mem_limit
-
-    d = DummyContainer()
-    d.name = name
-    d.flavor_name = flavor
-    return d
-
-
-
-
-class testUpbSimpleCloudDcRM(SimpleTestTopology):
-    """
-    Test the UpbSimpleCloudDc resource model.
-    """
-
-    def testAllocationComputations(self):
-        """
-        Test the allocation procedures and correct calculations.
-        :return:
-        """
-        # config
-        E_CPU = 1.0
-        MAX_CU = 100
-        E_MEM = 512
-        MAX_MU = 2048
-        # create dummy resource model environment
-        reg = ResourceModelRegistrar(dc_emulation_max_cpu=E_CPU, dc_emulation_max_mem=E_MEM)
-        rm = UpbSimpleCloudDcRM(max_cu=MAX_CU, max_mu=MAX_MU)
-        reg.register("test_dc", rm)
-
-        c1 = createDummyContainerObject("c1", flavor="tiny")
-        rm.allocate(c1)  # calculate allocation
-        self.assertEqual(float(c1.cpu_quota) / c1.cpu_period, E_CPU / MAX_CU * 0.5)   # validate compute result
-        self.assertEqual(float(c1.mem_limit/1024/1024), float(E_MEM) / MAX_MU * 32)   # validate memory result
-
-        c2 = createDummyContainerObject("c2", flavor="small")
-        rm.allocate(c2)  # calculate allocation
-        self.assertEqual(float(c2.cpu_quota) / c2.cpu_period, E_CPU / MAX_CU * 1)   # validate compute result
-        self.assertEqual(float(c2.mem_limit/1024/1024), float(E_MEM) / MAX_MU * 128)   # validate memory result
-
-        c3 = createDummyContainerObject("c3", flavor="medium")
-        rm.allocate(c3)  # calculate allocation
-        self.assertEqual(float(c3.cpu_quota) / c3.cpu_period, E_CPU / MAX_CU * 4)   # validate compute result
-        self.assertEqual(float(c3.mem_limit/1024/1024), float(E_MEM) / MAX_MU * 256)   # validate memory result
-
-        c4 = createDummyContainerObject("c4", flavor="large")
-        rm.allocate(c4)  # calculate allocation
-        self.assertEqual(float(c4.cpu_quota) / c4.cpu_period, E_CPU / MAX_CU * 8)   # validate compute result
-        self.assertEqual(float(c4.mem_limit/1024/1024), float(E_MEM) / MAX_MU * 512)   # validate memory result
-
-        c5 = createDummyContainerObject("c5", flavor="xlarge")
-        rm.allocate(c5)  # calculate allocation
-        self.assertEqual(float(c5.cpu_quota) / c5.cpu_period, E_CPU / MAX_CU * 16)   # validate compute result
-        self.assertEqual(float(c5.mem_limit/1024/1024), float(E_MEM) / MAX_MU * 1024)   # validate memory result
-
-
-    def testAllocationCpuLimit(self):
-        """
-        Test CPU allocation limit
-        :return:
-        """
-        # config
-        E_CPU = 1.0
-        MAX_CU = 40
-        E_MEM = 512
-        MAX_MU = 4096
-        # create dummy resource model environment
-        reg = ResourceModelRegistrar(dc_emulation_max_cpu=E_CPU, dc_emulation_max_mem=E_MEM)
-        rm = UpbSimpleCloudDcRM(max_cu=MAX_CU, max_mu=MAX_MU)
-        reg.register("test_dc", rm)
-
-        # test over provisioning exeption
-        exception = False
-        try:
-            c6 = createDummyContainerObject("c6", flavor="xlarge")
-            c7 = createDummyContainerObject("c7", flavor="xlarge")
-            c8 = createDummyContainerObject("c8", flavor="xlarge")
-            c9 = createDummyContainerObject("c9", flavor="xlarge")
-            rm.allocate(c6)  # calculate allocation
-            rm.allocate(c7)  # calculate allocation
-            rm.allocate(c8)  # calculate allocation
-            rm.allocate(c9)  # calculate allocation
-        except NotEnoughResourcesAvailable as e:
-            self.assertIn("Not enough compute", e.message)
-            exception = True
-        self.assertTrue(exception)
-
-    def testAllocationMemLimit(self):
-        """
-        Test MEM allocation limit
-        :return:
-        """
-        # config
-        E_CPU = 1.0
-        MAX_CU = 500
-        E_MEM = 512
-        MAX_MU = 2048
-        # create dummy resource model environment
-        reg = ResourceModelRegistrar(dc_emulation_max_cpu=E_CPU, dc_emulation_max_mem=E_MEM)
-        rm = UpbSimpleCloudDcRM(max_cu=MAX_CU, max_mu=MAX_MU)
-        reg.register("test_dc", rm)
-
-        # test over provisioning exeption
-        exception = False
-        try:
-            c6 = createDummyContainerObject("c6", flavor="xlarge")
-            c7 = createDummyContainerObject("c7", flavor="xlarge")
-            c8 = createDummyContainerObject("c8", flavor="xlarge")
-            rm.allocate(c6)  # calculate allocation
-            rm.allocate(c7)  # calculate allocation
-            rm.allocate(c8)  # calculate allocation
-        except NotEnoughResourcesAvailable as e:
-            self.assertIn("Not enough memory", e.message)
-            exception = True
-        self.assertTrue(exception)
-
-    def testFree(self):
-        """
-        Test the free procedure.
-        :return:
-        """
-        # config
-        E_CPU = 1.0
-        MAX_CU = 100
-        # create dummy resource model environment
-        reg = ResourceModelRegistrar(dc_emulation_max_cpu=1.0, dc_emulation_max_mem=512)
-        rm = UpbSimpleCloudDcRM(max_cu=100, max_mu=100)
-        reg.register("test_dc", rm)
-        c1 = createDummyContainerObject("c6", flavor="tiny")
-        rm.allocate(c1)  # calculate allocation
-        self.assertTrue(rm.dc_alloc_cu == 0.5)
-        rm.free(c1)
-        self.assertTrue(rm.dc_alloc_cu == 0)
-
-    def testInRealTopo(self):
-        """
-        Start a real container and check if limitations are really passed down to Dockernet.
-        :return:
-        """
-        # ATTENTION: This test should only be executed if emu runs not inside a Docker container,
-        # because it manipulates cgroups.
-        if os.environ.get("SON_EMU_IN_DOCKER") is not None:
-            return
-        # create network
-        self.createNet(nswitches=0, ndatacenter=1, nhosts=2, ndockers=0)
-        # setup links
-        self.net.addLink(self.dc[0], self.h[0])
-        self.net.addLink(self.h[1], self.dc[0])
-        # add resource model
-        r = UpbSimpleCloudDcRM(max_cu=100, max_mu=100)
-        self.dc[0].assignResourceModel(r)
-        # start Mininet network
-        self.startNet()
-        # check number of running nodes
-        self.assertTrue(len(self.getDockernetContainers()) == 0)
-        self.assertTrue(len(self.net.hosts) == 2)
-        self.assertTrue(len(self.net.switches) == 1)
-        # check resource model and resource model registrar
-        self.assertTrue(self.dc[0]._resource_model is not None)
-        self.assertTrue(len(self.net.rm_registrar.resource_models) == 1)
-
-        # check if alloc was called during startCompute
-        self.assertTrue(len(r._allocated_compute_instances) == 0)
-        tc1 = self.dc[0].startCompute("tc1", flavor_name="tiny")
-        time.sleep(1)
-        self.assertTrue(len(r._allocated_compute_instances) == 1)
-
-        # check if there is a real limitation set for containers cgroup
-        # deactivated for now, seems not to work in docker-in-docker setup used in CI
-        self.assertEqual(float(tc1.cpu_quota)/tc1.cpu_period, 0.005)
-
-        # check if free was called during stopCompute
-        self.dc[0].stopCompute("tc1")
-        self.assertTrue(len(r._allocated_compute_instances) == 0)
-        # check connectivity by using ping
-        self.assertTrue(self.net.ping([self.h[0], self.h[1]]) <= 0.0)
-        # stop Mininet network
-        self.stopNet()
-
-
-class testUpbOverprovisioningCloudDcRM(SimpleTestTopology):
-    """
-    Test the UpbOverprovisioningCloudDc resource model.
-    """
-
-    def testAllocationComputations(self):
-        """
-        Test the allocation procedures and correct calculations.
-        :return:
-        """
-        # config
-        E_CPU = 1.0
-        MAX_CU = 3
-        E_MEM = 512
-        MAX_MU = 2048
-        # create dummy resource model environment
-        reg = ResourceModelRegistrar(dc_emulation_max_cpu=E_CPU, dc_emulation_max_mem=E_MEM)
-        rm = UpbOverprovisioningCloudDcRM(max_cu=MAX_CU, max_mu=MAX_MU)
-        reg.register("test_dc", rm)
-
-        c1 = createDummyContainerObject("c1", flavor="small")
-        rm.allocate(c1)  # calculate allocation
-        self.assertAlmostEqual(float(c1.cpu_quota) / c1.cpu_period, E_CPU / MAX_CU * 1.0, places=5)
-        self.assertAlmostEqual(float(c1.mem_limit/1024/1024), float(E_MEM) / MAX_MU * 128)
-        self.assertAlmostEqual(rm.cpu_op_factor, 1.0)
-
-        c2 = createDummyContainerObject("c2", flavor="small")
-        rm.allocate(c2)  # calculate allocation
-        self.assertAlmostEqual(float(c2.cpu_quota) / c2.cpu_period, E_CPU / MAX_CU * 1.0, places=5)
-        self.assertAlmostEqual(float(c2.mem_limit/1024/1024), float(E_MEM) / MAX_MU * 128)
-        self.assertAlmostEqual(rm.cpu_op_factor, 1.0)
-
-        c3 = createDummyContainerObject("c3", flavor="small")
-        rm.allocate(c3)  # calculate allocation
-        self.assertAlmostEqual(float(c3.cpu_quota) / c3.cpu_period, E_CPU / MAX_CU * 1.0, places=5)
-        self.assertAlmostEqual(float(c3.mem_limit/1024/1024), float(E_MEM) / MAX_MU * 128)
-        self.assertAlmostEqual(rm.cpu_op_factor, 1.0)
-
-        # from this container onwards, we should go to over provisioning mode:
-        c4 = createDummyContainerObject("c4", flavor="small")
-        rm.allocate(c4)  # calculate allocation
-        self.assertAlmostEqual(float(c4.cpu_quota) / c4.cpu_period, E_CPU / MAX_CU * (float(3) / 4), places=5)
-        self.assertAlmostEqual(float(c4.mem_limit/1024/1024), float(E_MEM) / MAX_MU * 128, places=5)
-        self.assertAlmostEqual(rm.cpu_op_factor, 0.75)
-
-        c5 = createDummyContainerObject("c5", flavor="small")
-        rm.allocate(c5)  # calculate allocation
-        self.assertAlmostEqual(float(c5.cpu_quota) / c5.cpu_period, E_CPU / MAX_CU * (float(3) / 5), places=5)
-        self.assertAlmostEqual(float(c5.mem_limit/1024/1024), float(E_MEM) / MAX_MU * 128)
-        self.assertAlmostEqual(rm.cpu_op_factor, 0.6)
-
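The over-provisioning assertions above follow a simple scale-down rule: once the sum of requested compute units (CU) exceeds `max_cu`, every container's CPU share is reduced by the factor `max_cu / total_requested`. A minimal sketch of that rule (helper names are illustrative, not the emulator's API):

```python
def op_factor(total_requested_cu, max_cu):
    # 1.0 while the DC is within capacity; otherwise scale all shares
    # down uniformly so the emulated CPU budget is never exceeded
    return min(1.0, float(max_cu) / total_requested_cu)


def cpu_share(e_cpu, max_cu, flavor_cu, total_requested_cu):
    # fraction of host CPU time (cpu_quota / cpu_period) for one container
    return (e_cpu / max_cu) * flavor_cu * op_factor(total_requested_cu, max_cu)
```

With E_CPU=1.0, MAX_CU=3 and "small" containers of 1 CU each, the fourth allocation yields a factor of 0.75 and the fifth 0.6, matching the `cpu_op_factor` checks above.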
-
-class testUpbDummyRM(SimpleTestTopology):
-    """
-    Test the UpbDummyRM resource model.
-    """
-
-    def testAllocationComputations(self):
-        """
-        Test the allocation procedures and correct calculations.
-        :return:
-        """
-        # config
-        E_CPU = 1.0
-        MAX_CU = 3
-        E_MEM = 512
-        MAX_MU = 2048
-        # create dummy resource model environment
-        reg = ResourceModelRegistrar(dc_emulation_max_cpu=E_CPU, dc_emulation_max_mem=E_MEM)
-        rm = UpbDummyRM(max_cu=MAX_CU, max_mu=MAX_MU)
-        reg.register("test_dc", rm)
-
-        c1 = createDummyContainerObject("c1", flavor="small")
-        rm.allocate(c1)  # calculate allocation
-        self.assertEqual(len(rm._allocated_compute_instances), 1)
-
-        c2 = createDummyContainerObject("c2", flavor="small")
-        rm.allocate(c2)  # calculate allocation
-        self.assertEqual(len(rm._allocated_compute_instances), 2)
-
diff --git a/src/emuvim/test/test_sonata_dummy_gatekeeper.py b/src/emuvim/test/test_sonata_dummy_gatekeeper.py
deleted file mode 100644 (file)
index db3fd92..0000000
+++ /dev/null
@@ -1,74 +0,0 @@
-import time
-import requests
-import subprocess
-import os
-import unittest
-from emuvim.test.base import SimpleTestTopology
-from emuvim.api.sonata import SonataDummyGatekeeperEndpoint
-
-
-
-class testSonataDummyGatekeeper(SimpleTestTopology):
-
-    @unittest.skip("disabled test since ubuntu:trusty not used in current example package")
-    def testAPI(self):
-        # create network
-        self.createNet(nswitches=0, ndatacenter=2, nhosts=2, ndockers=0)
-        # setup links
-        self.net.addLink(self.dc[0], self.h[0])
-        self.net.addLink(self.dc[0], self.dc[1])
-        self.net.addLink(self.h[1], self.dc[1])
-        # connect dummy GK to data centers
-        sdkg1 = SonataDummyGatekeeperEndpoint("0.0.0.0", 5000)
-        sdkg1.connectDatacenter(self.dc[0])
-        sdkg1.connectDatacenter(self.dc[1])
-        # run the dummy gatekeeper (in another thread, don't block)
-        sdkg1.start()
-        # start Mininet network
-        self.startNet()
-        time.sleep(1)
-
-        # download example from GitHub
-        print "downloading latest son-demo.son from GitHub"
-        subprocess.call(
-            ["wget",
-             "http://github.com/sonata-nfv/son-schema/blob/master/package-descriptor/examples/sonata-demo.son?raw=true",
-             "-O",
-             "son-demo.son"]
-        )
-
-        print "starting tests"
-        # board package
-        files = {"package": open("son-demo.son", "rb")}
-        r = requests.post("http://127.0.0.1:5000/packages", files=files)
-        self.assertEqual(r.status_code, 200)
-        self.assertTrue(r.json().get("service_uuid") is not None)
-        os.remove("son-demo.son")
-
-        # instantiate service
-        service_uuid = r.json().get("service_uuid")
-        r2 = requests.post("http://127.0.0.1:5000/instantiations", json={"service_uuid": service_uuid})
-        self.assertEqual(r2.status_code, 200)
-
-        # give the emulator some time to instantiate everything
-        time.sleep(2)
-
-        # check get request APIs
-        r3 = requests.get("http://127.0.0.1:5000/packages")
-        self.assertEqual(len(r3.json().get("service_uuid_list")), 1)
-        r4 = requests.get("http://127.0.0.1:5000/instantiations")
-        self.assertEqual(len(r4.json().get("service_instance_list")), 1)
-
-        # check number of running nodes
-        self.assertTrue(len(self.getDockernetContainers()) == 3)
-        self.assertTrue(len(self.net.hosts) == 5)
-        self.assertTrue(len(self.net.switches) == 2)
-        # check compute list result
-        self.assertTrue(len(self.dc[0].listCompute()) == 3)
-        # check connectivity by using ping
-        for vnf in self.dc[0].listCompute():
-            self.assertTrue(self.net.ping([self.h[0], vnf]) <= 0.0)
-        # stop Mininet network
-        self.stopNet()
-
-
diff --git a/src/emuvim/test/unittests/__init__.py b/src/emuvim/test/unittests/__init__.py
new file mode 100644 (file)
index 0000000..e69de29
diff --git a/src/emuvim/test/unittests/test_emulator.py b/src/emuvim/test/unittests/test_emulator.py
new file mode 100755 (executable)
index 0000000..e2c3b6b
--- /dev/null
@@ -0,0 +1,319 @@
+"""
+Test suite to automatically test emulator functionalities.
+Directly interacts with the emulator through the Mininet-like
+Python API.
+
+Does not test API endpoints. This is done in separate test suites.
+"""
+
+import time
+import unittest
+from emuvim.dcemulator.node import EmulatorCompute
+from emuvim.test.base import SimpleTestTopology
+from mininet.node import RemoteController
+
+
+#@unittest.skip("disabled topology tests for development")
+class testEmulatorTopology( SimpleTestTopology ):
+    """
+    Tests to check the topology API of the emulator.
+    """
+
+    def testSingleDatacenter(self):
+        """
+        Create a single data center and check if its switch is up
+        by using manually added hosts. Especially tests the
+        data center specific addLink method.
+        """
+        # create network
+        self.createNet(nswitches=0, ndatacenter=1, nhosts=2, ndockers=0)
+        # setup links
+        self.net.addLink(self.dc[0], self.h[0])
+        self.net.addLink(self.h[1], self.dc[0])
+        # start Mininet network
+        self.startNet()
+        # check number of running nodes
+        self.assertTrue(len(self.getContainernetContainers()) == 0)
+        self.assertTrue(len(self.net.hosts) == 2)
+        self.assertTrue(len(self.net.switches) == 1)
+        # check connectivity by using ping
+        self.assertTrue(self.net.ping([self.h[0], self.h[1]]) <= 0.0)
+        # stop Mininet network
+        self.stopNet()
+
+    #@unittest.skip("disabled to test if CI fails because this is the first test.")
+    def testMultipleDatacenterDirect(self):
+        """
+        Create two data centers and interconnect them.
+        """
+        # create network
+        self.createNet(nswitches=0, ndatacenter=2, nhosts=2, ndockers=0)
+        # setup links
+        self.net.addLink(self.dc[0], self.h[0])
+        self.net.addLink(self.h[1], self.dc[1])
+        self.net.addLink(self.dc[0], self.dc[1])
+        # start Mininet network
+        self.startNet()
+        # check number of running nodes
+        self.assertTrue(len(self.getContainernetContainers()) == 0)
+        self.assertTrue(len(self.net.hosts) == 2)
+        self.assertTrue(len(self.net.switches) == 2)
+        # check connectivity by using ping
+        self.assertTrue(self.net.ping([self.h[0], self.h[1]]) <= 0.0)
+        # stop Mininet network
+        self.stopNet()
+
+    def testMultipleDatacenterWithIntermediateSwitches(self):
+        """
+        Create two data centers and interconnect them with additional
+        switches between them.
+        """
+        # create network
+        self.createNet(
+            nswitches=3, ndatacenter=2, nhosts=2, ndockers=0,
+            autolinkswitches=True)
+        # setup links
+        self.net.addLink(self.dc[0], self.h[0])
+        self.net.addLink(self.h[1], self.dc[1])
+        self.net.addLink(self.dc[0], self.s[0])
+        self.net.addLink(self.s[2], self.dc[1])
+        # start Mininet network
+        self.startNet()
+        # check number of running nodes
+        self.assertTrue(len(self.getContainernetContainers()) == 0)
+        self.assertTrue(len(self.net.hosts) == 2)
+        self.assertTrue(len(self.net.switches) == 5)
+        # check connectivity by using ping
+        self.assertTrue(self.net.ping([self.h[0], self.h[1]]) <= 0.0)
+        # stop Mininet network
+        self.stopNet()
+
+class testEmulatorNetworking( SimpleTestTopology ):
+
+    def testSDNChaining(self):
+        """
+        Create two data centers and interconnect them with additional
+        switches between them. Uses the Ryu SDN controller.
+        Connect the Docker hosts to different data centers and set up
+        the links between them.
+        """
+        # create network
+        self.createNet(
+            nswitches=3, ndatacenter=2, nhosts=0, ndockers=0,
+            autolinkswitches=True,
+            controller=RemoteController,
+            enable_learning=False)
+        # setup links
+        self.net.addLink(self.dc[0], self.s[0])
+        self.net.addLink(self.s[2], self.dc[1])
+        # start Mininet network
+        self.startNet()
+
+        # add compute resources
+        vnf1 = self.dc[0].startCompute("vnf1", network=[{'id':'intf1', 'ip':'10.0.10.1/24'}])
+        vnf2 = self.dc[1].startCompute("vnf2", network=[{'id':'intf2', 'ip':'10.0.10.2/24'}])
+        # check number of running nodes
+        self.assertTrue(len(self.getContainernetContainers()) == 2)
+        self.assertTrue(len(self.net.hosts) == 2)
+        self.assertTrue(len(self.net.switches) == 5)
+        # check get status
+        s1 = self.dc[0].containers.get("vnf1").getStatus()
+        self.assertTrue(s1["name"] == "vnf1")
+        self.assertTrue(s1["state"]["Running"])
+        self.assertTrue(s1["network"][0]['intf_name'] == 'intf1')
+        self.assertTrue(s1["network"][0]['ip'] == '10.0.10.1')
+
+        s2 = self.dc[1].containers.get("vnf2").getStatus()
+        self.assertTrue(s2["name"] == "vnf2")
+        self.assertTrue(s2["state"]["Running"])
+        self.assertTrue(s2["network"][0]['intf_name'] == 'intf2')
+        self.assertTrue(s2["network"][0]['ip'] == '10.0.10.2')
+
+        # setup links
+        self.net.setChain('vnf1', 'vnf2', 'intf1', 'intf2', bidirectional=True, cmd='add-flow')
+        # check connectivity by using ping
+        self.assertTrue(self.net.ping([vnf1, vnf2]) <= 0.0)
+        # stop Mininet network
+        self.stopNet()
+
+#@unittest.skip("disabled compute tests for development")
+class testEmulatorCompute( SimpleTestTopology ):
+    """
+    Tests to check the emulator's API to add and remove
+    compute resources at runtime.
+    """
+
+    def testAddSingleComputeSingleDC(self):
+        """
+        Adds a single compute instance to
+        a single DC and checks its connectivity with a
+        manually added host.
+        """
+        # create network
+        self.createNet(nswitches=0, ndatacenter=1, nhosts=1, ndockers=0)
+        # setup links
+        self.net.addLink(self.dc[0], self.h[0])
+        # start Mininet network
+        self.startNet()
+        # add compute resources
+        vnf1 = self.dc[0].startCompute("vnf1")
+        # check number of running nodes
+        self.assertTrue(len(self.getContainernetContainers()) == 1)
+        self.assertTrue(len(self.net.hosts) == 2)
+        self.assertTrue(len(self.net.switches) == 1)
+        # check compute list result
+        self.assertTrue(len(self.dc[0].listCompute()) == 1)
+        self.assertTrue(isinstance(self.dc[0].listCompute()[0], EmulatorCompute))
+        self.assertTrue(self.dc[0].listCompute()[0].name == "vnf1")
+        # check connectivity by using ping
+        self.assertTrue(self.net.ping([self.h[0], vnf1]) <= 0.0)
+        # stop Mininet network
+        self.stopNet()
+
+    def testRemoveSingleComputeSingleDC(self):
+        """
+        Test stop method for compute instances.
+        Check that the instance is really removed.
+        """
+        # create network
+        self.createNet(nswitches=0, ndatacenter=1, nhosts=1, ndockers=0)
+        # setup links
+        self.net.addLink(self.dc[0], self.h[0])
+        # start Mininet network
+        self.startNet()
+        # add compute resources
+        vnf1 = self.dc[0].startCompute("vnf1")
+        # check number of running nodes
+        self.assertTrue(len(self.getContainernetContainers()) == 1)
+        self.assertTrue(len(self.net.hosts) == 2)
+        self.assertTrue(len(self.net.switches) == 1)
+        # check compute list result
+        self.assertTrue(len(self.dc[0].listCompute()) == 1)
+        # check connectivity by using ping
+        self.assertTrue(self.net.ping([self.h[0], vnf1]) <= 0.0)
+        # remove compute resources
+        self.dc[0].stopCompute("vnf1")
+        # check number of running nodes
+        self.assertTrue(len(self.getContainernetContainers()) == 0)
+        self.assertTrue(len(self.net.hosts) == 1)
+        self.assertTrue(len(self.net.switches) == 1)
+        # check compute list result
+        self.assertTrue(len(self.dc[0].listCompute()) == 0)
+        # stop Mininet network
+        self.stopNet()
+
+    def testGetStatusSingleComputeSingleDC(self):
+        """
+        Check if the getStatus functionality of EmulatorCompute
+        objects works well.
+        """
+        # create network
+        self.createNet(nswitches=0, ndatacenter=1, nhosts=1, ndockers=0)
+        # setup links
+        self.net.addLink(self.dc[0], self.h[0])
+        # start Mininet network
+        self.startNet()
+        # add compute resources
+        vnf1 = self.dc[0].startCompute("vnf1")
+        # check number of running nodes
+        self.assertTrue(len(self.getContainernetContainers()) == 1)
+        self.assertTrue(len(self.net.hosts) == 2)
+        self.assertTrue(len(self.net.switches) == 1)
+        # check compute list result
+        self.assertTrue(len(self.dc[0].listCompute()) == 1)
+        self.assertTrue(isinstance(self.dc[0].listCompute()[0], EmulatorCompute))
+        self.assertTrue(self.dc[0].listCompute()[0].name == "vnf1")
+        # check connectivity by using ping
+        self.assertTrue(self.net.ping([self.h[0], vnf1]) <= 0.0)
+        # check get status
+        s = self.dc[0].containers.get("vnf1").getStatus()
+        self.assertTrue(s["name"] == "vnf1")
+        self.assertTrue(s["state"]["Running"])
+        # stop Mininet network
+        self.stopNet()
+
+    def testConnectivityMultiDC(self):
+        """
+        Test if compute instances started in different data centers
+        are able to talk to each other.
+        """
+        # create network
+        self.createNet(
+            nswitches=3, ndatacenter=2, nhosts=0, ndockers=0,
+            autolinkswitches=True)
+        # setup links
+        self.net.addLink(self.dc[0], self.s[0])
+        self.net.addLink(self.dc[1], self.s[2])
+        # start Mininet network
+        self.startNet()
+        # add compute resources
+        vnf1 = self.dc[0].startCompute("vnf1")
+        vnf2 = self.dc[1].startCompute("vnf2")
+        # check number of running nodes
+        self.assertTrue(len(self.getContainernetContainers()) == 2)
+        self.assertTrue(len(self.net.hosts) == 2)
+        self.assertTrue(len(self.net.switches) == 5)
+        # check compute list result
+        self.assertTrue(len(self.dc[0].listCompute()) == 1)
+        self.assertTrue(len(self.dc[1].listCompute()) == 1)
+        # check connectivity by using ping
+        self.assertTrue(self.net.ping([vnf1, vnf2]) <= 0.0)
+        # stop Mininet network
+        self.stopNet()
+
+    def testInterleavedAddRemoveMultiDC(self):
+        """
+        Test multiple, interleaved add and remove operations and ensure
+        that all expected compute instances remain reachable throughout.
+        """
+        # create network
+        self.createNet(
+            nswitches=3, ndatacenter=2, nhosts=0, ndockers=0,
+            autolinkswitches=True)
+        # setup links
+        self.net.addLink(self.dc[0], self.s[0])
+        self.net.addLink(self.dc[1], self.s[2])
+        # start Mininet network
+        self.startNet()
+        # add compute resources
+        vnf1 = self.dc[0].startCompute("vnf1")
+        vnf2 = self.dc[1].startCompute("vnf2")
+        # check number of running nodes
+        self.assertTrue(len(self.getContainernetContainers()) == 2)
+        self.assertTrue(len(self.net.hosts) == 2)
+        self.assertTrue(len(self.net.switches) == 5)
+        # check compute list result
+        self.assertTrue(len(self.dc[0].listCompute()) == 1)
+        self.assertTrue(len(self.dc[1].listCompute()) == 1)
+        # check connectivity by using ping
+        self.assertTrue(self.net.ping([vnf1, vnf2]) <= 0.0)
+        # remove compute resources
+        self.dc[0].stopCompute("vnf1")
+        # check number of running nodes
+        self.assertTrue(len(self.getContainernetContainers()) == 1)
+        self.assertTrue(len(self.net.hosts) == 1)
+        self.assertTrue(len(self.net.switches) == 5)
+        # check compute list result
+        self.assertTrue(len(self.dc[0].listCompute()) == 0)
+        self.assertTrue(len(self.dc[1].listCompute()) == 1)
+        # add compute resources
+        vnf3 = self.dc[0].startCompute("vnf3")
+        vnf4 = self.dc[0].startCompute("vnf4")
+        # check compute list result
+        self.assertTrue(len(self.dc[0].listCompute()) == 2)
+        self.assertTrue(len(self.dc[1].listCompute()) == 1)
+        self.assertTrue(self.net.ping([vnf3, vnf2]) <= 0.0)
+        self.assertTrue(self.net.ping([vnf4, vnf2]) <= 0.0)
+        # remove compute resources
+        self.dc[0].stopCompute("vnf3")
+        self.dc[0].stopCompute("vnf4")
+        self.dc[1].stopCompute("vnf2")
+        # check compute list result
+        self.assertTrue(len(self.dc[0].listCompute()) == 0)
+        self.assertTrue(len(self.dc[1].listCompute()) == 0)
+        # stop Mininet network
+        self.stopNet()
+
+if __name__ == '__main__':
+    unittest.main()
diff --git a/src/emuvim/test/unittests/test_resourcemodel.py b/src/emuvim/test/unittests/test_resourcemodel.py
new file mode 100644 (file)
index 0000000..165893d
--- /dev/null
@@ -0,0 +1,338 @@
+import time
+import os
+import unittest
+from emuvim.test.base import SimpleTestTopology
+from emuvim.dcemulator.resourcemodel import BaseResourceModel, ResourceFlavor, NotEnoughResourcesAvailable, ResourceModelRegistrar
+from emuvim.dcemulator.resourcemodel.upb.simple import UpbSimpleCloudDcRM, UpbOverprovisioningCloudDcRM, UpbDummyRM
+
+
+
+class testResourceModel(SimpleTestTopology):
+    """
+    Test the general resource model API and functionality.
+    """
+
+    def testBaseResourceModelApi(self):
+        """
+        Test the bare API without a real resource model.
+        :return:
+        """
+        r = BaseResourceModel()
+        # check if default flavors are there
+        self.assertTrue(len(r._flavors) == 5)
+        # check addFlavor functionality
+        f = ResourceFlavor("test", {"testmetric": 42})
+        r.addFlavour(f)
+        self.assertTrue("test" in r._flavors)
+        self.assertTrue(r._flavors.get("test").get("testmetric") == 42)
+
+    def testAddRmToDc(self):
+        """
+        Test if allocate/free is called when a RM is added to a DC.
+        :return:
+        """
+        # create network
+        self.createNet(nswitches=0, ndatacenter=1, nhosts=2, ndockers=0)
+        # setup links
+        self.net.addLink(self.dc[0], self.h[0])
+        self.net.addLink(self.h[1], self.dc[0])
+        # add resource model
+        r = BaseResourceModel()
+        self.dc[0].assignResourceModel(r)
+        # start Mininet network
+        self.startNet()
+        # check number of running nodes
+        self.assertTrue(len(self.getContainernetContainers()) == 0)
+        self.assertTrue(len(self.net.hosts) == 2)
+        self.assertTrue(len(self.net.switches) == 1)
+        # check resource model and resource model registrar
+        self.assertTrue(self.dc[0]._resource_model is not None)
+        self.assertTrue(len(self.net.rm_registrar.resource_models) == 1)
+
+        # check if alloc was called during startCompute
+        self.assertTrue(len(r._allocated_compute_instances) == 0)
+        self.dc[0].startCompute("tc1")
+        time.sleep(1)
+        self.assertTrue(len(r._allocated_compute_instances) == 1)
+        # check if free was called during stopCompute
+        self.dc[0].stopCompute("tc1")
+        self.assertTrue(len(r._allocated_compute_instances) == 0)
+        # check connectivity by using ping
+        self.assertTrue(self.net.ping([self.h[0], self.h[1]]) <= 0.0)
+        # stop Mininet network
+        self.stopNet()
+
+
+def createDummyContainerObject(name, flavor):
+
+    class DummyContainer(object):
+
+        def __init__(self):
+            self.cpu_period = -1
+            self.cpu_quota = -1
+            self.mem_limit = -1
+            self.memswap_limit = -1
+
+        def updateCpuLimit(self, cpu_period, cpu_quota):
+            self.cpu_period = cpu_period
+            self.cpu_quota = cpu_quota
+
+        def updateMemoryLimit(self, mem_limit):
+            self.mem_limit = mem_limit
+
+    d = DummyContainer()
+    d.name = name
+    d.flavor_name = flavor
+    return d
+
+
+
+
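`createDummyContainerObject` above builds a lightweight test double that merely records the limits a resource model pushes down, so allocation math can be unit-tested without real containers or cgroups. A minimal, self-contained sketch of the same pattern and of how the tests read the recorded values back (the concrete limit values here are illustrative):

```python
class RecordingContainer(object):
    # stand-in for a real container: stores the limits instead of
    # applying them to Docker / cgroups
    def __init__(self):
        self.cpu_period = -1
        self.cpu_quota = -1
        self.mem_limit = -1

    def updateCpuLimit(self, cpu_period, cpu_quota):
        self.cpu_period = cpu_period
        self.cpu_quota = cpu_quota

    def updateMemoryLimit(self, mem_limit):
        self.mem_limit = mem_limit


c = RecordingContainer()
c.updateCpuLimit(cpu_period=100000, cpu_quota=500)  # illustrative values
c.updateMemoryLimit(8 * 1024 * 1024)                # 8 MB, given in bytes
print(float(c.cpu_quota) / c.cpu_period)            # CPU fraction: 0.005
print(c.mem_limit // 1024 // 1024)                  # back to MB: 8
```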
+class testUpbSimpleCloudDcRM(SimpleTestTopology):
+    """
+    Test the UpbSimpleCloudDc resource model.
+    """
+
+    def testAllocationComputations(self):
+        """
+        Test the allocation procedures and correct calculations.
+        :return:
+        """
+        # config
+        E_CPU = 1.0
+        MAX_CU = 100
+        E_MEM = 512
+        MAX_MU = 2048
+        # create dummy resource model environment
+        reg = ResourceModelRegistrar(dc_emulation_max_cpu=E_CPU, dc_emulation_max_mem=E_MEM)
+        rm = UpbSimpleCloudDcRM(max_cu=MAX_CU, max_mu=MAX_MU)
+        reg.register("test_dc", rm)
+
+        c1 = createDummyContainerObject("c1", flavor="tiny")
+        rm.allocate(c1)  # calculate allocation
+        self.assertEqual(float(c1.cpu_quota) / c1.cpu_period, E_CPU / MAX_CU * 0.5)   # validate compute result
+        self.assertEqual(float(c1.mem_limit/1024/1024), float(E_MEM) / MAX_MU * 32)   # validate memory result
+
+        c2 = createDummyContainerObject("c2", flavor="small")
+        rm.allocate(c2)  # calculate allocation
+        self.assertEqual(float(c2.cpu_quota) / c2.cpu_period, E_CPU / MAX_CU * 1)   # validate compute result
+        self.assertEqual(float(c2.mem_limit/1024/1024), float(E_MEM) / MAX_MU * 128)   # validate memory result
+
+        c3 = createDummyContainerObject("c3", flavor="medium")
+        rm.allocate(c3)  # calculate allocation
+        self.assertEqual(float(c3.cpu_quota) / c3.cpu_period, E_CPU / MAX_CU * 4)   # validate compute result
+        self.assertEqual(float(c3.mem_limit/1024/1024), float(E_MEM) / MAX_MU * 256)   # validate memory result
+
+        c4 = createDummyContainerObject("c4", flavor="large")
+        rm.allocate(c4)  # calculate allocation
+        self.assertEqual(float(c4.cpu_quota) / c4.cpu_period, E_CPU / MAX_CU * 8)   # validate compute result
+        self.assertEqual(float(c4.mem_limit/1024/1024), float(E_MEM) / MAX_MU * 512)   # validate memory result
+
+        c5 = createDummyContainerObject("c5", flavor="xlarge")
+        rm.allocate(c5)  # calculate allocation
+        self.assertEqual(float(c5.cpu_quota) / c5.cpu_period, E_CPU / MAX_CU * 16)   # validate compute result
+        self.assertEqual(float(c5.mem_limit/1024/1024), float(E_MEM) / MAX_MU * 1024)   # validate memory result
+
+
+    def testAllocationCpuLimit(self):
+        """
+        Test CPU allocation limit
+        :return:
+        """
+        # config
+        E_CPU = 1.0
+        MAX_CU = 40
+        E_MEM = 512
+        MAX_MU = 4096
+        # create dummy resource model environment
+        reg = ResourceModelRegistrar(dc_emulation_max_cpu=E_CPU, dc_emulation_max_mem=E_MEM)
+        rm = UpbSimpleCloudDcRM(max_cu=MAX_CU, max_mu=MAX_MU)
+        reg.register("test_dc", rm)
+
+        # test over-provisioning exception
+        exception = False
+        try:
+            c6 = createDummyContainerObject("c6", flavor="xlarge")
+            c7 = createDummyContainerObject("c7", flavor="xlarge")
+            c8 = createDummyContainerObject("c8", flavor="xlarge")
+            c9 = createDummyContainerObject("c9", flavor="xlarge")
+            rm.allocate(c6)  # calculate allocation
+            rm.allocate(c7)  # calculate allocation
+            rm.allocate(c8)  # calculate allocation
+            rm.allocate(c9)  # calculate allocation
+        except NotEnoughResourcesAvailable as e:
+            self.assertIn("Not enough compute", e.message)
+            exception = True
+        self.assertTrue(exception)
+
+    def testAllocationMemLimit(self):
+        """
+        Test MEM allocation limit
+        :return:
+        """
+        # config
+        E_CPU = 1.0
+        MAX_CU = 500
+        E_MEM = 512
+        MAX_MU = 2048
+        # create dummy resource model environment
+        reg = ResourceModelRegistrar(dc_emulation_max_cpu=E_CPU, dc_emulation_max_mem=E_MEM)
+        rm = UpbSimpleCloudDcRM(max_cu=MAX_CU, max_mu=MAX_MU)
+        reg.register("test_dc", rm)
+
+        # test over-provisioning exception
+        exception = False
+        try:
+            c6 = createDummyContainerObject("c6", flavor="xlarge")
+            c7 = createDummyContainerObject("c7", flavor="xlarge")
+            c8 = createDummyContainerObject("c8", flavor="xlarge")
+            rm.allocate(c6)  # calculate allocation
+            rm.allocate(c7)  # calculate allocation
+            rm.allocate(c8)  # calculate allocation
+        except NotEnoughResourcesAvailable as e:
+            self.assertIn("Not enough memory", e.message)
+            exception = True
+        self.assertTrue(exception)
+
+    def testFree(self):
+        """
+        Test the free procedure.
+        :return:
+        """
+        # config
+        E_CPU = 1.0
+        MAX_CU = 100
+        # create dummy resource model environment
+        reg = ResourceModelRegistrar(dc_emulation_max_cpu=E_CPU, dc_emulation_max_mem=512)
+        rm = UpbSimpleCloudDcRM(max_cu=MAX_CU, max_mu=100)
+        reg.register("test_dc", rm)
+        c1 = createDummyContainerObject("c1", flavor="tiny")
+        rm.allocate(c1)  # calculate allocation
+        self.assertTrue(rm.dc_alloc_cu == 0.5)
+        rm.free(c1)
+        self.assertTrue(rm.dc_alloc_cu == 0)
+
+    @unittest.skipIf(os.environ.get("SON_EMU_IN_DOCKER") is not None,
+                     "skipping test when running inside Docker container")
+    def testInRealTopo(self):
+        """
+        Start a real container and check if limitations are really passed down to Containernet.
+        :return:
+        """
+        # create network
+        self.createNet(nswitches=0, ndatacenter=1, nhosts=2, ndockers=0)
+        # setup links
+        self.net.addLink(self.dc[0], self.h[0])
+        self.net.addLink(self.h[1], self.dc[0])
+        # add resource model
+        r = UpbSimpleCloudDcRM(max_cu=100, max_mu=100)
+        self.dc[0].assignResourceModel(r)
+        # start Mininet network
+        self.startNet()
+        # check number of running nodes
+        self.assertTrue(len(self.getContainernetContainers()) == 0)
+        self.assertTrue(len(self.net.hosts) == 2)
+        self.assertTrue(len(self.net.switches) == 1)
+        # check resource model and resource model registrar
+        self.assertTrue(self.dc[0]._resource_model is not None)
+        self.assertTrue(len(self.net.rm_registrar.resource_models) == 1)
+
+        # check if alloc was called during startCompute
+        self.assertTrue(len(r._allocated_compute_instances) == 0)
+        tc1 = self.dc[0].startCompute("tc1", flavor_name="tiny")
+        time.sleep(1)
+        self.assertTrue(len(r._allocated_compute_instances) == 1)
+
+        # check that a real limitation is set in the container's cgroup
+        # (note: this may not work in the docker-in-docker setup used in CI)
+        self.assertEqual(float(tc1.cpu_quota)/tc1.cpu_period, 0.005)
+
+        # check if free was called during stopCompute
+        self.dc[0].stopCompute("tc1")
+        self.assertTrue(len(r._allocated_compute_instances) == 0)
+        # check connectivity by using ping
+        self.assertTrue(self.net.ping([self.h[0], self.h[1]]) <= 0.0)
+        # stop Mininet network
+        self.stopNet()
+
+
+class testUpbOverprovisioningCloudDcRM(SimpleTestTopology):
+    """
+    Test the UpbOverprovisioningCloudDc resource model.
+    """
+
+    def testAllocationComputations(self):
+        """
+        Test the allocation procedures and correct calculations.
+        :return:
+        """
+        # config
+        E_CPU = 1.0
+        MAX_CU = 3
+        E_MEM = 512
+        MAX_MU = 2048
+        # create dummy resource model environment
+        reg = ResourceModelRegistrar(dc_emulation_max_cpu=E_CPU, dc_emulation_max_mem=E_MEM)
+        rm = UpbOverprovisioningCloudDcRM(max_cu=MAX_CU, max_mu=MAX_MU)
+        reg.register("test_dc", rm)
+
+        c1 = createDummyContainerObject("c1", flavor="small")
+        rm.allocate(c1)  # calculate allocation
+        self.assertAlmostEqual(float(c1.cpu_quota) / c1.cpu_period, E_CPU / MAX_CU * 1.0, places=5)
+        self.assertAlmostEqual(float(c1.mem_limit/1024/1024), float(E_MEM) / MAX_MU * 128)
+        self.assertAlmostEqual(rm.cpu_op_factor, 1.0)
+
+        c2 = createDummyContainerObject("c2", flavor="small")
+        rm.allocate(c2)  # calculate allocation
+        self.assertAlmostEqual(float(c2.cpu_quota) / c2.cpu_period, E_CPU / MAX_CU * 1.0, places=5)
+        self.assertAlmostEqual(float(c2.mem_limit/1024/1024), float(E_MEM) / MAX_MU * 128)
+        self.assertAlmostEqual(rm.cpu_op_factor, 1.0)
+
+        c3 = createDummyContainerObject("c3", flavor="small")
+        rm.allocate(c3)  # calculate allocation
+        self.assertAlmostEqual(float(c3.cpu_quota) / c3.cpu_period, E_CPU / MAX_CU * 1.0, places=5)
+        self.assertAlmostEqual(float(c3.mem_limit/1024/1024), float(E_MEM) / MAX_MU * 128)
+        self.assertAlmostEqual(rm.cpu_op_factor, 1.0)
+
+        # from this container onwards, we should go to over provisioning mode:
+        c4 = createDummyContainerObject("c4", flavor="small")
+        rm.allocate(c4)  # calculate allocation
+        self.assertAlmostEqual(float(c4.cpu_quota) / c4.cpu_period, E_CPU / MAX_CU * (float(3) / 4), places=5)
+        self.assertAlmostEqual(float(c4.mem_limit/1024/1024), float(E_MEM) / MAX_MU * 128, places=5)
+        self.assertAlmostEqual(rm.cpu_op_factor, 0.75)
+
+        c5 = createDummyContainerObject("c5", flavor="small")
+        rm.allocate(c5)  # calculate allocation
+        self.assertAlmostEqual(float(c5.cpu_quota) / c5.cpu_period, E_CPU / MAX_CU * (float(3) / 5), places=5)
+        self.assertAlmostEqual(float(c5.mem_limit/1024/1024), float(E_MEM) / MAX_MU * 128)
+        self.assertAlmostEqual(rm.cpu_op_factor, 0.6)
+
+
+class testUpbDummyRM(SimpleTestTopology):
+    """
+    Test the UpbDummyRM resource model.
+    """
+
+    def testAllocationComputations(self):
+        """
+        Test the allocation procedures and correct calculations.
+        :return:
+        """
+        # config
+        E_CPU = 1.0
+        MAX_CU = 3
+        E_MEM = 512
+        MAX_MU = 2048
+        # create dummy resource model environment
+        reg = ResourceModelRegistrar(dc_emulation_max_cpu=E_CPU, dc_emulation_max_mem=E_MEM)
+        rm = UpbDummyRM(max_cu=MAX_CU, max_mu=MAX_MU)
+        reg.register("test_dc", rm)
+
+        c1 = createDummyContainerObject("c1", flavor="small")
+        rm.allocate(c1)  # calculate allocation
+        self.assertEqual(len(rm._allocated_compute_instances), 1)
+
+        c2 = createDummyContainerObject("c2", flavor="small")
+        rm.allocate(c2)  # calculate allocation
+        self.assertEqual(len(rm._allocated_compute_instances), 2)
+
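The flavor factors asserted in `testAllocationComputations` above (tiny → 0.5 CU / 32 MU up through xlarge → 16 CU / 1024 MU) can be collected into a small reference sketch. The names below are illustrative; the actual mapping lives inside `UpbSimpleCloudDcRM`:

```python
# Illustrative sketch of the per-flavor allocation math the assertions check.
# Flavor factors are inferred from the test expectations, not from the RM source.
FLAVOR_FACTORS = {
    # flavor: (compute units, memory units)
    "tiny":   (0.5, 32),
    "small":  (1.0, 128),
    "medium": (4.0, 256),
    "large":  (8.0, 512),
    "xlarge": (16.0, 1024),
}

def expected_limits(flavor, e_cpu=1.0, max_cu=100, e_mem=512, max_mu=2048):
    """Return (cpu_time_fraction, mem_limit_mb) as the unit test expects them."""
    cu, mu = FLAVOR_FACTORS[flavor]
    cpu_fraction = e_cpu / max_cu * cu           # == cpu_quota / cpu_period
    mem_limit_mb = float(e_mem) / max_mu * mu    # == mem_limit converted to MB
    return cpu_fraction, mem_limit_mb
```

With the defaults above, a "tiny" container gets 0.5% of the emulated CPU time and 8 MB of memory, which matches the 0.005 quota/period ratio checked again in `testInRealTopo`.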
diff --git a/src/emuvim/test/unittests/test_sonata_dummy_gatekeeper.py b/src/emuvim/test/unittests/test_sonata_dummy_gatekeeper.py
new file mode 100644 (file)
index 0000000..250bae7
--- /dev/null
@@ -0,0 +1,65 @@
+import time
+import requests
+import subprocess
+import os
+import unittest
+from emuvim.test.base import SimpleTestTopology
+from emuvim.api.sonata import SonataDummyGatekeeperEndpoint
+
+PACKAGE_PATH = "misc/sonata-demo-docker.son"
+
+class testSonataDummyGatekeeper(SimpleTestTopology):
+
+    @unittest.skipIf(os.environ.get("SON_EMU_IN_DOCKER") is None or True,
+                     "dummy GK test disabled for now (the 'or True' forces the skip everywhere)")
+    def testAPI(self):
+        # create network
+        self.createNet(nswitches=0, ndatacenter=2, nhosts=2, ndockers=0)
+        # setup links
+        self.net.addLink(self.dc[0], self.h[0])
+        self.net.addLink(self.dc[0], self.dc[1])
+        self.net.addLink(self.h[1], self.dc[1])
+        # connect dummy GK to data centers
+        sdkg1 = SonataDummyGatekeeperEndpoint("0.0.0.0", 5000)
+        sdkg1.connectDatacenter(self.dc[0])
+        sdkg1.connectDatacenter(self.dc[1])
+        # run the dummy gatekeeper (in another thread, don't block)
+        sdkg1.start()
+        # start Mininet network
+        self.startNet()
+        time.sleep(1)
+
+        print "starting tests"
+        # board package
+        files = {"package": open(PACKAGE_PATH, "rb")}
+        r = requests.post("http://127.0.0.1:5000/packages", files=files)
+        self.assertEqual(r.status_code, 200)
+        self.assertTrue(r.json().get("service_uuid") is not None)
+
+        # instantiate service
+        service_uuid = r.json().get("service_uuid")
+        r2 = requests.post("http://127.0.0.1:5000/instantiations", json={"service_uuid": service_uuid})
+        self.assertEqual(r2.status_code, 200)
+
+        # give the emulator some time to instantiate everything
+        time.sleep(2)
+
+        # check get request APIs
+        r3 = requests.get("http://127.0.0.1:5000/packages")
+        self.assertEqual(len(r3.json().get("service_uuid_list")), 1)
+        r4 = requests.get("http://127.0.0.1:5000/instantiations")
+        self.assertEqual(len(r4.json().get("service_instance_list")), 1)
+
+        # check number of running nodes
+        self.assertTrue(len(self.getContainernetContainers()) == 3)
+        self.assertTrue(len(self.net.hosts) == 5)
+        self.assertTrue(len(self.net.switches) == 2)
+        # check compute list result
+        self.assertTrue(len(self.dc[0].listCompute()) == 3)
+        # check connectivity by using ping
+        for vnf in self.dc[0].listCompute():
+            self.assertTrue(self.net.ping([self.h[0], vnf]) <= 0.0)
+        # stop Mininet network
+        self.stopNet()
+
+
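Returning to the over-provisioning assertions in `testUpbOverprovisioningCloudDcRM` above: once the requested compute units exceed `MAX_CU`, every CPU share is scaled by `cpu_op_factor = MAX_CU / total_requested_cu`. A minimal sketch of that arithmetic (illustrative only, assuming each "small" container requests one compute unit, as the test does):

```python
def overprovisioned_share(n_small, e_cpu=1.0, max_cu=3):
    """Per-container CPU fraction once n_small 'small' flavors (1 CU each)
    share a resource model capped at max_cu compute units."""
    total_cu = float(n_small)                 # 1 CU per 'small' container
    op_factor = min(1.0, max_cu / total_cu)   # drops below 1.0 only when over-provisioned
    share = e_cpu / max_cu * 1.0 * op_factor  # cpu_quota / cpu_period per container
    return share, op_factor
```

For the fourth and fifth containers this reproduces the asserted factors 0.75 and 0.6; memory shares are left untouched by the factor, which is why the `mem_limit` assertions stay constant across all five allocations.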
index ab6ece3..5544503 100755 (executable)
@@ -10,4 +10,4 @@ cd ${BASE_DIR}
 rm -rf utils/ci/junit-xml/*
 
 # Launch the unit testing on emuvim
-py.test -v --junit-xml=utils/ci/junit-xml/pytest_emuvim.xml src/emuvim
+py.test -v --junit-xml=utils/ci/junit-xml/pytest_emuvim.xml src/emuvim/test/unittests
index 2c1bda6..18e8bfe 100644 (file)
@@ -1,21 +1,20 @@
-FROM cgeoffroy/dockernet
+FROM mpeuster/containernet
+MAINTAINER manuel@peuster.de
 
 ENV SON_EMU_IN_DOCKER 1
 
-# ensure that we have the latest dockernet code base!
-WORKDIR /
-RUN rm -rf dockernet
-RUN git clone -b dockernet-sonata https://github.com/mpeuster/dockernet.git
-WORKDIR /dockernet
-RUN python setup.py develop
-
 WORKDIR /son-emu
 COPY . /son-emu/
 
 RUN cd /son-emu/ansible \
     && ansible-playbook install.yml \
     && cd /son-emu \
+    # we need to reset the __pycache__ for correct test discovery
+    && rm -rf src/emuvim/test/__pycache__ \
     && python setup.py install \
     && echo 'Done'
 
 ENTRYPOINT ["/son-emu/utils/docker/entrypoint.sh"]
+
+# dummy GK, zerorpc
+EXPOSE 5000 4242
index 580762f..7e72914 100755 (executable)
@@ -1,7 +1,7 @@
 #! /bin/bash -e
 set -x
 
-#cp /dockernet/util/docker/entrypoint.sh /tmp/x.sh
-#cat /tmp/x.sh | awk 'NR==1{print; print "set -x"} NR!=1' > /dockernet/util/docker/entrypoint.sh
+#cp /containernet/util/docker/entrypoint.sh /tmp/x.sh
+#cat /tmp/x.sh | awk 'NR==1{print; print "set -x"} NR!=1' > /containernet/util/docker/entrypoint.sh
 
-exec /dockernet/util/docker/entrypoint.sh $*
+exec /containernet/util/docker/entrypoint.sh $*